Journal of Frontiers of Computer Science and Technology ›› 2024, Vol. 18 ›› Issue (1): 58-74. DOI: 10.3778/j.issn.1673-9418.2304022
• Frontiers·Surveys •
Survey of Research on Knowledge-Driven Dialogue Generation Models
XU Biqi, MA Zhiqiang, ZHOU Yutong, JIA Wenchao, LIU Jia, LYU Kai
Online: 2024-01-01
Published: 2024-01-01
许璧麒,马志强,周钰童,贾文超,刘佳,吕凯
XU Biqi, MA Zhiqiang, ZHOU Yutong, JIA Wenchao, LIU Jia, LYU Kai. Survey of Research on Knowledge-Driven Dialogue Generation Models[J]. Journal of Frontiers of Computer Science and Technology, 2024, 18(1): 58-74.
许璧麒, 马志强, 周钰童, 贾文超, 刘佳, 吕凯. 知识驱动的对话生成模型研究综述[J]. 计算机科学与探索, 2024, 18(1): 58-74.
URL: http://fcst.ceaj.org/EN/10.3778/j.issn.1673-9418.2304022
Related Articles
[1] GU Yuying, GAO Meifeng. Aspect-Level Sentiment Analysis Combining Part-of-Speech and External Knowledge[J]. Journal of Frontiers of Computer Science and Technology, 2023, 17(10): 2488-2498.
[2] TIAN Xin, JI Yi, GAO Haiyan, LIN Xin, LIU Chunping. Scene Graph Generation Method Based on External Information Guidance and Residual Scrambling[J]. Journal of Frontiers of Computer Science and Technology, 2021, 15(10): 1958-1968.