[1] 孙道萃. 人工智能辅助定罪的实践回溯、理论际遇与操作图景[J]. 华南师范大学学报(社会科学版), 2024(2): 117-137.
SUN D C. The practice retrospect, theoretical destiny and operation image of conviction assisted by AI[J]. Journal of South China Normal University (Social Science Edition), 2024(2): 117-137.
[2] 胡振生. 融合案件要素的刑事案件罪名预测方法研究[D]. 广州: 广东财经大学, 2021.
HU Z S. Research on charge prediction method of criminal cases integrated legal elements[D]. Guangzhou: Guangdong University of Finance & Economics, 2021.
[3] YANG S, TONG S, ZHU G, et al. MVE-FLK: a multi-task legal judgment prediction via multi-view encoder fusing legal keywords[J]. Knowledge-Based Systems, 2022, 239: 107960.
[4] 陈文哲, 秦永彬, 黄瑞章, 等. 基于犯罪行为序列的法律条文预测方法[J]. 计算机工程与应用, 2019, 55(22): 245-249.
CHEN W Z, QIN Y B, HUANG R Z, et al. Legal text prediction method based on criminal behavior sequence[J]. Computer Engineering and Applications, 2019, 55(22): 245-249.
[5] 谢永峰. 基于深度学习的中文类案匹配技术研究[D]. 广州: 广东财经大学, 2023.
XIE Y F. Research on Chinese law similar case matching technology based on deep learning[D]. Guangzhou: Guangdong University of Finance & Economics, 2023.
[6] 张虎, 潘邦泽, 张颖. 基于深度学习的法律文书事实描述中判决要素抽取[J]. 计算机应用与软件, 2021, 38(9): 160-166.
ZHANG H, PAN B Z, ZHANG Y. Judgment elements extraction for factual description of legal documents based on deep learning[J]. Computer Applications and Software, 2021, 38(9): 160-166.
[7] 刘海顺, 王雷, 孙媛媛, 等. 基于预训练语言模型的案件要素识别方法[J]. 中文信息学报, 2021, 35(11): 91-100.
LIU H S, WANG L, SUN Y Y, et al. Case factor recognition based on pre-trained language models[J]. Journal of Chinese Information Processing, 2021, 35(11): 91-100.
[8] HUANG Y X, DAI W Z, YANG J, et al. Semi-supervised abductive learning and its application to theft judicial sentencing[C]//Proceedings of the 2020 IEEE International Conference on Data Mining, Sorrento, Nov 17-20, 2020. Piscataway: IEEE, 2020: 1070-1075.
[9] 黄辉, 秦永彬, 陈艳平, 等. 基于BERT阅读理解框架的司法要素抽取方法[J]. 大数据, 2021, 7(6): 19-29.
HUANG H, QIN Y B, CHEN Y P, et al. Legal element extraction method based on BERT reading comprehension framework[J]. Big Data Research, 2021, 7(6): 19-29.
[10] 窦文琦, 陈艳平, 秦永彬, 等. 基于机器阅读理解的案件要素识别方法[J]. 计算机工程与设计, 2023, 44(8): 2475-2481.
DOU W Q, CHEN Y P, QIN Y B, et al. Method for case element recognition based on machine reading comprehension[J]. Computer Engineering and Design, 2023, 44(8): 2475-2481.
[11] BROWN T, MANN B, RYDER N, et al. Language models are few-shot learners[C]//Advances in Neural Information Processing Systems 33, Dec 6-12, 2020. Red Hook: Curran Associates, 2020: 1877-1901.
[12] HUANG Q, TAO M, ZHANG C, et al. Lawyer LLaMA technical report[EB/OL]. [2024-04-13]. https://arxiv.org/abs/2305.15062.
[13] CUI J, LI Z, YAN Y, et al. ChatLaw: open-source legal large language model with integrated external knowledge bases[EB/OL]. [2024-04-13]. https://arxiv.org/abs/2306.16092.
[14] ZHOU Z, SHI J X, SONG P X, et al. LawGPT: a Chinese legal knowledge-enhanced large language model[EB/OL]. [2024-07-21]. https://arxiv.org/abs/2406.04614.
[15] DAI Y, FENG D, HUANG J, et al. LAiW: a Chinese legal large language models benchmark (a technical report)[EB/OL]. [2024-04-13]. https://arxiv.org/abs/2310.05620.
[16] TOUVRON H, LAVRIL T, IZACARD G, et al. LLaMA: open and efficient foundation language models[EB/OL]. [2024-04-13]. https://arxiv.org/abs/2302.13971.
[17] CAI W, JIANG J, WANG F, et al. A survey on mixture of experts[EB/OL]. [2024-08-10]. https://arxiv.org/abs/2407.06204.
[18] DU Z X, QIAN Y J, LIU X, et al. GLM: general language model pretraining with autoregressive blank infilling[C]//Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, May 22-27, 2022. Stroudsburg: ACL, 2022: 320-335.
[19] LIU S Y, WANG C Y, YIN H, et al. DoRA: weight-decomposed low-rank adaptation[EB/OL]. [2024-04-13]. https://arxiv.org/abs/2402.09353.
[20] HU E J, SHEN Y, WALLIS P, et al. LoRA: low-rank adaptation of large language models[EB/OL]. [2024-04-13]. https://arxiv.org/abs/2106.09685.
[21] CUI L, WU Y, LIU J, et al. Template-based named entity recognition using BART[EB/OL]. [2024-04-13]. https://arxiv.org/abs/2106.01760.
[22] LESTER B, AL-RFOU R, CONSTANT N. The power of scale for parameter-efficient prompt tuning[EB/OL]. [2024-04-13]. https://arxiv.org/abs/2104.08691.
[23] LIU P, YUAN W Z, FU J L, et al. Pre-train, prompt, and predict: a systematic survey of prompting methods in natural language processing[J]. ACM Computing Surveys, 2023, 55(9): 1-35.
[24] MAO Y, MATHIAS L, HOU R, et al. UniPELT: a unified framework for parameter-efficient language model tuning[EB/OL]. [2024-04-13]. https://arxiv.org/abs/2110.07577.
[25] GOU Y, LIU Z, CHEN K, et al. Mixture of cluster-conditional LoRA experts for vision-language instruction tuning[EB/OL]. [2024-04-13]. https://arxiv.org/abs/2312.12379.
[26] LUO T, LEI J, LEI F, et al. MoELoRA: contrastive learning guided mixture of experts on parameter-efficient fine-tuning for large language models[EB/OL]. [2024-04-13]. https://arxiv.org/abs/2402.12851.
[27] WANG Y, AGARWAL S, MUKHERJEE S, et al. AdaMix: mixture-of-adaptations for parameter-efficient model tuning[EB/OL]. [2024-04-13]. https://arxiv.org/abs/2205.12410.
[28] BUEHLER E L, BUEHLER M J. X-LoRA: mixture of low-rank adapter experts, a flexible framework for large language models with applications in protein mechanics and molecular design[J]. APL Machine Learning, 2024, 2(2): 026119.
[29] ACHIAM J, ADLER S, AGARWAL S, et al. GPT-4 technical report[EB/OL]. [2024-04-13]. https://arxiv.org/abs/2303.08774.