[1] Xu K, Feng Y S, Zhao D Y, et al. Automatic understanding of natural language questions for querying Chinese knowledge bases[J]. Acta Scientiarum Naturalium Universitatis Pekinensis, 2014, 50(1): 85-92.
[2] Angelino E, Larus-Stone N, Alabi D, et al. Learning certifiably optimal rule lists for categorical data[J]. Journal of Machine Learning Research, 2017, 18(1): 8753-8830.
[3] Chen H S, Liu X R, Yin D W, et al. A survey on dialogue systems: recent advances and new frontiers[J]. SIGKDD Explorations, 2017, 19(2): 25-35.
[4] Li Y L, Yan Y H. Research on execution strategy about statistical spoken language understanding[J]. Journal of Frontiers of Computer Science and Technology, 2017, 11(6): 980-987.
[5] Surendran D, Levow G A. Dialog act tagging with support vector machines and hidden Markov models[C]//Proceedings of the 9th International Conference on Spoken Language Processing, Pittsburgh, Sep 17-21, 2006: 1950-1953.
[6] Keizer S, op den Akker R, Nijholt A. Dialogue act recognition with Bayesian networks for Dutch dialogues[C]//Proceedings of the 3rd Annual Meeting of the Special Interest Group on Discourse and Dialogue, Philadelphia, Jul 11-12, 2002. Stroudsburg: ACL, 2002: 88-94.
[7] Ali S A, Sulaiman N, Mustapha A, et al. Improving accuracy of intention-based response classification using decision tree[J]. Information Technology Journal, 2009, 8(6): 923-928.
[8] Lafferty J, McCallum A, Pereira F C N. Conditional random fields: probabilistic models for segmenting and labeling sequence data[C]//Proceedings of the 18th International Conference on Machine Learning, Williamstown, Jun 28-Jul 1, 2001. San Francisco: Morgan Kaufmann, 2001: 282-289.
[9] Zhou G D, Su J. Named entity recognition using an HMM-based chunk tagger[C]//Proceedings of the 40th Annual Meeting of the ACL, Philadelphia, Jul 7-12, 2002. Stroudsburg: ACL, 2002: 473-480.
[10] Borthwick A E. A maximum entropy approach to named entity recognition[D]. New York: New York University, 1999.
[11] Zhang Y C, Yang Y, Jiang R, et al. Commercial intelligence entity recognition model based on BiLSTM-CRF[J]. Computer Engineering, 2019, 45(5): 308-314.
[12] Zhang R B, Liu J Y, He X. Named entity recognition for vulnerabilities based on BLSTM-CRF model[J]. Journal of Sichuan University (Natural Science Edition), 2019, 56(3): 469-475.
[13] Wang L L, Wumaier A, Yibulayin T, et al. Uyghur named entity recognition based on deep neural network[J]. Journal of Chinese Information Processing, 2019, 33(3): 64-70.
[14] Guo D, Tür G, Yih W, et al. Joint semantic utterance classification and slot filling with recursive neural networks[C]//Proceedings of the 2014 IEEE Spoken Language Technology Workshop, South Lake Tahoe, Dec 7-10, 2014. Piscataway: IEEE, 2014: 554-559.
[15] Hakkani-Tür D, Tür G, Çelikyilmaz A, et al. Multi-domain joint semantic frame parsing using bi-directional RNN-LSTM[C]//Proceedings of the 17th Annual Conference of the International Speech Communication Association, San Francisco, Sep 8-12, 2016: 715-719.
[16] Hua B T, Yuan Z X, Xiao W M, et al. Joint slot filling and intent detection with BLSTM-CNN-CRF[J]. Computer Engineering and Applications, 2019, 55(9): 139-143.
[17] Jeong M, Lee G G. Jointly predicting dialog act and named entity for spoken language understanding[C]//Proceedings of the 2006 IEEE/ACL Spoken Language Technology Workshop, Palm Beach, Dec 10-13, 2006. Piscataway: IEEE, 2006: 66-69.
[18] Xu P Y, Sarikaya R. Convolutional neural network based triangular CRF for joint intent detection and slot filling[C]//Proceedings of the 2013 IEEE Workshop on Automatic Speech Recognition and Understanding, Olomouc, Dec 8-12, 2013. Piscataway: IEEE, 2013: 78-83.
[19] Zhang X D, Wang H F. A joint model of intent determination and slot filling for spoken language understanding[C]//Proceedings of the 25th International Joint Conference on Artificial Intelligence, New York, Jul 9-15, 2016. Menlo Park: AAAI, 2016: 2993-2999.
[20] Liu B, Lane I. Attention-based recurrent neural network models for joint intent detection and slot filling[C]//Proceedings of the 17th Annual Conference of the International Speech Communication Association, San Francisco, Sep 8-12, 2016: 685-689.
[21] Tür G, Hakkani-Tür D, Heck L P. What is left to be understood in ATIS?[C]//Proceedings of the 2010 IEEE Spoken Language Technology Workshop, Berkeley, Dec 12-15, 2010. Piscataway: IEEE, 2010: 19-24.
[22] Goo C W, Gao G, Hsu Y K, et al. Slot-gated modeling for joint slot filling and intent prediction[C]//Proceedings of the 2018 Conference of the North American Chapter of the ACL: Human Language Technologies, New Orleans, Jun 1-6, 2018. Stroudsburg: ACL, 2018: 753-757.
[23] Mikolov T, Chen K, Corrado G, et al. Efficient estimation of word representations in vector space[J]. arXiv:1301.3781, 2013.
[24] Turian J P, Ratinov L A, Bengio Y. Word representations: a simple and general method for semi-supervised learning[C]//Proceedings of the 48th Annual Meeting of the ACL, Uppsala, Jul 11-16, 2010. Stroudsburg: ACL, 2010: 384-394.
[25] Lai S W, Liu K, He S Z, et al. How to generate a good word embedding[J]. IEEE Intelligent Systems, 2016, 31(6): 5-14.
[26] Gers F A, Schmidhuber J, Cummins F. Learning to forget: continual prediction with LSTM[J]. Neural Computation, 2000, 12(10): 2451-2471.
[27] Yao K S, Peng B L, Zhang Y, et al. Spoken language understanding using long short-term memory neural networks[C]//Proceedings of the 2014 IEEE Spoken Language Technology Workshop, South Lake Tahoe, Dec 7-10, 2014. Piscataway: IEEE, 2014: 189-194.
[28] Bahdanau D, Cho K, Bengio Y. Neural machine translation by jointly learning to align and translate[J]. arXiv:1409.0473, 2014.
[29] Zhang C. Research on text classification technology based on attention-based LSTM model[D]. Nanjing: Nanjing University, 2016.
[30] Devlin J, Chang M W, Lee K, et al. BERT: pre-training of deep bidirectional transformers for language understanding[J]. arXiv:1810.04805, 2018.