[1] ACHIAM J, ADLER S, AGARWAL S, et al. GPT-4 technical report[EB/OL]. [2024-07-25]. https://arxiv.org/abs/2303.08774.
[2] TOUVRON H, LAVRIL T, IZACARD G, et al. LLaMA: open and efficient foundation language models[EB/OL]. [2024-07-25]. https://arxiv.org/abs/2302.13971.
[3] HAN Z, GAO C, LIU J, et al. Parameter-efficient fine-tuning for large models: a comprehensive survey[EB/OL]. [2024-07-25]. https://arxiv.org/abs/2403.14608.
[4] JI Z, LEE N, FRIESKE R, et al. Survey of hallucination in natural language generation[J]. ACM Computing Surveys, 2023, 55(12): 1-38.
[5] DANILEVSKY M, QIAN K, AHARONOV R, et al. A survey of the state of explainable AI for natural language processing[C]//Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, Suzhou, Dec 4-7, 2020. Stroudsburg: ACL, 2020: 447-459.
[6] PAN S, LUO L, WANG Y, et al. Unifying large language models and knowledge graphs: a roadmap[J]. IEEE Transactions on Knowledge and Data Engineering, 2024, 36(7): 3580-3599.
[7] SHU Y, YU Z, LI Y, et al. TIARA: multi-grained retrieval for robust question answering over large knowledge base[C]//Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2022: 8108-8121.
[8] GU Y, DENG X, SU Y. Don't generate, discriminate: a proposal for grounding language models to real-world environments[C]//Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2023: 4928-4949.
[9] ZHANG Z, HAN X, LIU Z, et al. ERNIE: enhanced language representation with informative entities[C]//Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2019: 1441-1451.
[10] SUN Y, WANG S, FENG S, et al. ERNIE 3.0: large-scale knowledge enhanced pre-training for language understanding and generation[EB/OL]. [2024-07-25]. https://arxiv.org/abs/2107.02137.
[11] WANG R, TANG D, DUAN N, et al. K-Adapter: infusing knowledge into pre-trained models with adapters[C]//Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021. Stroudsburg: ACL, 2021: 1405-1418.
[12] WANG X, GAO T, ZHU Z, et al. KEPLER: a unified model for knowledge embedding and pre-trained language representation[J]. Transactions of the Association for Computational Linguistics, 2021, 9: 176-194.
[13] YE X, YAVUZ S, HASHIMOTO K, et al. RNG-KBQA: generation augmented iterative ranking for knowledge base question answering[C]//Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2022: 6032-6043.
[14] JIANG J, ZHOU K, DONG Z, et al. StructGPT: a general framework for large language model to reason over structured data[C]//Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2023: 9237-9251.
[15] LI X, ZHAO R, CHIA Y K, et al. Chain-of-knowledge: grounding large language models via dynamic knowledge adapting over heterogeneous sources[C]//Proceedings of the 11th International Conference on Learning Representations, Kigali, May 1-5, 2023.
[16] SUN J, XU C, TANG L, et al. Think-on-graph: deep and responsible reasoning of large language model with knowledge graph[EB/OL]. [2024-07-25]. https://arxiv.org/abs/2307.07697.
[17] LUO L, LI Y F, HAFFARI G, et al. Reasoning on graphs: faithful and interpretable large language model reasoning[EB/OL]. [2024-07-25]. https://arxiv.org/abs/2310.01061.
[18] MILLER J J. Graph database applications and concepts with Neo4j[C]//Proceedings of the Southern Association for Information Systems Conference. Atlanta: AIS Electronic Library, 2013: 141-147.
[19] BAI J, BAI S, CHU Y, et al. Qwen technical report[EB/OL]. [2024-07-25]. https://arxiv.org/abs/2309.16609.
[20] GUO A, LI X, XIAO G, et al. SpCQL: a semantic parsing dataset for converting natural language into Cypher[C]//Proceedings of the 31st ACM International Conference on Information & Knowledge Management. New York: ACM, 2022: 3973-3977.
[21] HU E J, SHEN Y, WALLIS P, et al. LoRA: low-rank adaptation of large language models[EB/OL]. [2024-07-25]. https://arxiv.org/abs/2106.09685.
[22] BROWN T B, MANN B, RYDER N, et al. Language models are few-shot learners[C]//Proceedings of the 34th International Conference on Neural Information Processing Systems. Red Hook: Curran Associates, 2020: 1877-1901.
[23] TALMOR A, BERANT J. The Web as a knowledge-base for answering complex questions[C]//Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Stroudsburg: ACL, 2018: 641-651.
[24] YIH W, RICHARDSON M, MEEK C, et al. The value of semantic parse labeling for knowledge base question answering[C]//Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2016: 201-206.
[25] GU Y, KASE S, VANNI M, et al. Beyond I.I.D.: three levels of generalization for question answering on knowledge bases[C]//Proceedings of the Web Conference 2021, Ljubljana, Apr 19-23, 2021. New York: ACM, 2021: 3477-3488.
[26] BOLLACKER K, EVANS C, PARITOSH P, et al. Freebase: a collaboratively created graph database for structuring human knowledge[C]//Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data, Vancouver, Jun 9-12, 2008. New York: ACM, 2008: 1247-1250.
[27] YU D, ZHANG S, NG P, et al. DecAF: joint decoding of answers and logical forms for question answering over knowledge bases[C]//Proceedings of the 11th International Conference on Learning Representations, Kigali, May 1-5, 2023.
[28] WEI J, WANG X, SCHUURMANS D, et al. Chain-of-thought prompting elicits reasoning in large language models[C]//Advances in Neural Information Processing Systems 35, New Orleans, Nov 28-Dec 9, 2022. Red Hook: Curran Associates, 2022: 24824-24837.
[29] DAS R, ZAHEER M, THAI D, et al. Case-based reasoning for natural language queries over knowledge bases[C]//Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2021: 9594-9611.
[30] LI T, MA X, ZHUANG A, et al. Few-shot in-context learning on knowledge base question answering[C]//Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2023: 6966-6980.