[1] LIU Q, CHENG G, GUNARATNA K, et al. Entity summarization: state of the art and future challenges[J]. Journal of Web Semantics, 2021, 69: 100647.
[2] DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding[C]//Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Minneapolis, Jun 2-7, 2019. Stroudsburg: ACL, 2019: 4171-4186.
[3] EKGREN A, GYLLENSTEN A C, GOGOULOU E, et al. Lessons learned from GPT-SW3: building the first large-scale generative language model for Swedish[C]//Proceedings of the 13th Language Resources and Evaluation Conference, Marseille, Jun 20-25, 2022: 3509-3518.
[4] NONG Q, SUN T, GONG S, et al. Maximize a monotone function with a generic submodularity ratio[C]//Proceedings of the 13th International Conference on Algorithmic Aspects in Information and Management, Beijing, Aug 6-8, 2019. Cham: Springer, 2019: 249-260.
[5] MIRZASOLEIMAN B, BADANIDIYURU A, KARBASI A, et al. Lazier than lazy greedy[C]//Proceedings of the 29th AAAI Conference on Artificial Intelligence, Austin, Jan 25-30, 2015. Menlo Park: AAAI, 2015: 1812-1818.
[6] CHEBOLU P, MELSTED P. PageRank and the random surfer model[C]//Proceedings of the 19th Annual ACM-SIAM Symposium on Discrete Algorithms, San Francisco, Jan 20-22, 2008: 1010-1018.
[7] CHENG G, TRAN T, QU Y. RELIN: relatedness and informativeness-based centrality for entity summarization[C]//Proceedings of the 10th International Semantic Web Conference, Bonn, Oct 23-27, 2011. Berlin, Heidelberg: Springer, 2011: 114-129.
[8] GUNARATNA K, THIRUNARAYAN K, SHETH A. FACES: diversity-aware entity summarization using incremental hierarchical conceptual clustering[C]//Proceedings of the 29th AAAI Conference on Artificial Intelligence, Austin, Jan 25-30, 2015. Menlo Park: AAAI, 2015: 116-122.
[9] BECK H W, ANWAR T, NAVATHE S B. A conceptual clustering algorithm for database schema design[J]. IEEE Transactions on Knowledge and Data Engineering, 1994, 6(3): 396-411.
[10] THALHAMMER A, LASIERRA N, RETTINGER A. LinkSUM: using link analysis to summarize entity data[C]//Proceedings of the 16th International Conference on Web Engineering, Lugano, Jun 6-9, 2016. Cham: Springer, 2016: 244-261.
[11] LIU Q, CHENG G, GUNARATNA K, et al. ESBM: an entity summarization benchmark[C]//Proceedings of the 17th Extended Semantic Web Conference, Heraklion, May 31-Jun 4, 2020. Cham: Springer, 2020: 548-564.
[12] WEI D, LIU Y, ZHU F, et al. ESA: entity summarization with attention[EB/OL]. [2023-03-05]. https://arxiv.org/abs/1905.10625.
[13] LIU Q, CHENG G, QU Y. DeepLens: deep learning for entity summarization[C]//Proceedings of the 2020 Workshop on Deep Learning for Knowledge Graphs Co-located with the 17th Extended Semantic Web Conference, Heraklion, Jun 2, 2020. CEUR Workshop Proceedings, 2020: vol. 2635.
[14] WEI D, LIU Y, ZHU F, et al. AutoSUM: automating feature extraction and multi-user preference simulation for entity summarization[C]//Proceedings of the 24th Pacific-Asia Conference on Knowledge Discovery and Data Mining, May 11-14, 2020. Cham: Springer, 2020: 580-592.
[15] LIU Q, CHEN Y, CHENG G, et al. Entity summarization with user feedback[C]//Proceedings of the 17th Extended Semantic Web Conference, Heraklion, May 31-Jun 4, 2020. Cham: Springer, 2020: 376-392.
[16] FIRMANSYAH A F, MOUSSALLEM D, NGOMO A C N. GATES: using graph attention networks for entity summarization[C]//Proceedings of the 11th Knowledge Capture Conference, Dec 2-3, 2021. New York: ACM, 2021: 73-80.
[17] LIN H, BILMES J. Multi-document summarization via budgeted maximization of submodular functions[C]//Proceedings of the 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Los Angeles, Jun 2-4, 2010. Stroudsburg: ACL, 2010: 912-920.
[18] LIN H, BILMES J. A class of submodular functions for document summarization[C]//Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Portland, Jun 19-24, 2011. Stroudsburg: ACL, 2011: 510-520.
[19] JAYANTH J, SUNDARARAJ J, BHATTACHARYYA P. Monotone submodularity in opinion summaries[C]//Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, Lisbon, Sep 17-21, 2015. Stroudsburg: ACL, 2015: 169-178.
[20] WANG Y, WANG L, LI Y, et al. A theoretical analysis of NDCG type ranking measures[C]//Proceedings of the 26th Annual Conference on Learning Theory, Princeton, Jun 12-14, 2013: 25-54.
[21] RIVES A, MEIER J, SERCU T, et al. Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences[J]. Proceedings of the National Academy of Sciences of the United States of America, 2021, 118(15): e2016239118.
[22] LIU Y, OTT M, GOYAL N, et al. RoBERTa: a robustly optimized BERT pretraining approach[EB/OL]. [2023-03-05]. https://arxiv.org/abs/1907.11692.
[23] YANG Z, DAI Z, YANG Y, et al. XLNet: generalized autoregressive pretraining for language understanding[C]//Proceedings of the 33rd International Conference on Neural Information Processing Systems, Vancouver, Dec 8-14, 2019. New York: ACM, 2019: 5753-5763.