[1] ZHANG S, PAN L M, ZHAO J Z, et al. The knowledge alignment problem: bridging human and external knowledge for large language models[EB/OL]. (2023-05-23) [2024-08-22]. https://arxiv.org/pdf/2305.13669.pdf.
[2] 胡泳. 当机器人产生幻觉,它告诉我们关于人类思维的什么?[J]. 文化艺术研究, 2023, 16(3): 15-26.
HU Y. When robots hallucinate: what does it tell us about human thinking?[J]. Studies of Culture and Art, 2023, 16(3): 15-26.
[3] 莫祖英, 盘大清, 刘欢, 等. 信息质量视角下AIGC虚假信息问题及根源分析[J]. 图书情报知识, 2023, 40(4): 32-40.
MO Z Y, PAN D Q, LIU H, et al. Analysis on AIGC false information problem and root cause from the perspective of information quality[J]. Documentation, Information & Knowledge, 2023, 40(4): 32-40.
[4] 张欣. 面向产业链的治理: 人工智能生成内容的技术机理与治理逻辑[J]. 行政法学研究, 2023(6): 43-60.
ZHANG X. Industry chain-oriented governance: technological mechanisms and governance logic in the management of artificial intelligence generated content[J]. Administrative Law Review, 2023(6): 43-60.
[5] 陈建兵, 王明. 负责任的人工智能: 技术伦理危机下AIGC的治理基点[J]. 西安交通大学学报(社会科学版), 2024, 44(1): 111-120.
CHEN J B, WANG M. Responsible artificial intelligence: governance fundamentals for AIGC in the ethical crisis of technology[J]. Journal of Xi’an Jiaotong University (Social Sciences), 2024, 44(1): 111-120.
[6] 王禄生. ChatGPT类技术: 法律人工智能的改进者还是颠覆者?[J]. 政法论坛, 2023, 41(4): 49-62.
WANG L S. ChatGPT-like technology: improver or disruptor of legal AI?[J]. Tribune of Political Science and Law, 2023, 41(4): 49-62.
[7] 漆晨航. 生成式人工智能的虚假信息风险特征及其治理路径[J]. 情报理论与实践, 2024, 47(3): 112-120.
QI C H. Research on the risks of disinformation from generative artificial intelligence and its governance paths[J]. Information Studies (Theory & Application), 2024, 47(3): 112-120.
[8] 胡泳. 人工智能驱动的虚假信息: 现在与未来[J]. 南京社会科学, 2024(1): 96-109.
HU Y. AI-driven disinformation: present and future[J]. Nanjing Journal of Social Sciences, 2024(1): 96-109.
[9] WENDLAND K. Demystifying artificial consciousness: about attributions, black swans, and suffering machines[J]. Journal of AI Humanities, 2021, 9: 137-166.
[10] LOEB G E. Remembrance of things perceived: adding thalamocortical function to artificial neural networks[J]. Frontiers in Integrative Neuroscience, 2023, 17: 1108271.
[11] ZHANG Y, LI Y F, CUI L Y, et al. Siren’s song in the AI ocean: a survey on hallucination in large language models[EB/OL]. (2023-09-03) [2024-08-22]. https://arxiv.org/pdf/2309.01219.pdf.
[12] YE H B, LIU T, ZHANG A J, et al. Cognitive mirage: a review of hallucinations in large language models[EB/OL]. [2024-08-22]. https://arxiv.org/abs/2309.06794.
[13] BAWDEN R, YVON F. Investigating the translation performance of a large multilingual language model: the case of BLOOM[C]//Proceedings of the 24th Annual Conference of the European Association for Machine Translation. Stroudsburg: ACL, 2023: 157-170.
[14] PAL A, UMAPATHI L K, SANKARASUBBU M. Med-HALT: medical domain hallucination test for large language models[C]//Proceedings of the 27th Conference on Computational Natural Language Learning. Stroudsburg: ACL, 2023: 314-330.
[15] BANG Y J, CAHYAWIJAYA S, LEE N, et al. A multitask, multilingual, multimodal evaluation of ChatGPT on reasoning, hallucination, and interactivity[C]//Proceedings of the 13th International Joint Conference on Natural Language Processing and the 3rd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics. Stroudsburg: ACL, 2023: 675-718.
[16] LIN S, HILTON J, EVANS O. TruthfulQA: measuring how models mimic human falsehoods[C]//Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2022: 3214-3252.
[17] DZIRI N, KAMALLOO E, MILTON S, et al. FaithDial: a faithful benchmark for information-seeking dialogue[J]. Transactions of the Association for Computational Linguistics, 2022, 10: 1473-1490.
[18] DEVANNY J, DYLAN H, GROSSFELD E. Generative AI and intelligence assessment[J]. The RUSI Journal, 2023, 168(7): 16-25.
[19] PAN L M, SAXON M, XU W D, et al. Automatically correcting large language models: surveying the landscape of diverse self-correction strategies[EB/OL]. (2023-08-06) [2024-08-22]. https://arxiv.org/pdf/2308.03188.pdf.
[20] WELLER O, MARONE M, WEIR N, et al. “According to ...” prompting language models improves quoting from pre-training data[C]//Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics. Stroudsburg: ACL, 2023: 2288-2301.
[21] GAUR V, SAUNSHI N. Reasoning in large language models through symbolic math word problems[C]//Findings of the Association for Computational Linguistics: ACL 2023. Stroudsburg: ACL, 2023: 5889-5903.
[22] DALE D, VOITA E, BARRAULT L, et al. Detecting and mitigating hallucinations in machine translation: model internal workings alone do well, sentence similarity even better[C]//Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2023: 36-50.
[23] 岳颀, 张晨康. 多模态场景下AIGC的应用综述[J]. 计算机科学与探索, 2025, 19(1): 79-96.
YUE Q, ZHANG C K. Survey on applications of AIGC in multimodal scenarios[J]. Journal of Frontiers of Computer Science and Technology, 2025, 19(1): 79-96.
[24] 张钦彤, 王昱超, 王鹤羲, 等. 大语言模型微调技术的研究综述[J]. 计算机工程与应用, 2024, 60(17): 17-33.
ZHANG Q T, WANG Y C, WANG H X, et al. Comprehensive review of large language model fine-tuning[J]. Computer Engineering and Applications, 2024, 60(17): 17-33.