[1] TURING A M. Computing machinery and intelligence[J]. Mind, 1950, 59(236): 433-460.
[2] LEHNERT W G. The process of question answering[D]. New Haven: Yale University, 1977.
[3] SAEIDI M, BARTOLO M, LEWIS P, et al. Interpretation of natural language rules in conversational machine reading[J]. arXiv:1809.01494, 2018.
[4] REDDY S, CHEN D, MANNING C D. CoQA: a conversational question answering challenge[J]. Transactions of the Association for Computational Linguistics, 2019, 7: 249-266.
[5] CHOI E, HE H, IYYER M, et al. QuAC: question answering in context[C]//Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Oct 31-Nov 4, 2018. Stroudsburg: ACL, 2018: 2174-2184.
[6] SEO M J, KEMBHAVI A, FARHADI A, et al. Bidirectional attention flow for machine comprehension[C]//Proceedings of the 5th International Conference on Learning Representations, Toulon, Apr 24-26, 2017: 25-32.
[7] HERMANN K M, KOCISKY T, GREFENSTETTE E, et al. Teaching machines to read and comprehend[C]//Proceedings of the 28th Annual Conference on Neural Information Processing Systems, Montreal, Dec 7-12, 2015. Red Hook: Curran Associates, 2015: 1693-1701.
[8] IYYER M, YIH W, CHANG M, et al. Search-based neural structured learning for sequential question answering[C]//Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, Vancouver, Jul 30-Aug 4, 2017. Stroudsburg: ACL, 2017: 1821-1831.
[9] TALMOR A, BERANT J. The Web as a knowledge-base for answering complex questions[C]//Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, New Orleans, Jun 1-6, 2018. Stroudsburg: ACL, 2018: 641-651.
[10] SAHA A, PAHUJA V, KHAPRA M M, et al. Complex sequential question answering: towards learning to converse over linked question answer pairs with a knowledge graph[C]//Proceedings of the 32nd AAAI Conference on Artificial Intelligence, New Orleans, Feb 2-7, 2018: 705-713.
[11] RICHARDSON M, BURGES C J C, RENSHAW E. MCTest: a challenge dataset for the open-domain machine comprehension of text[C]//Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, Seattle, Oct 18-21, 2013. Stroudsburg: ACL, 2013: 193-203.
[12] LAI G K, XIE Q Z, LIU H X, et al. RACE: large-scale reading comprehension dataset from examinations[C]//Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, Copenhagen, Sep 9-11, 2017. Stroudsburg: ACL, 2017: 785-794.
[13] YATSKAR M. A qualitative comparison of CoQA, SQuAD 2.0 and QuAC[J]. arXiv:1809.10735, 2018.
[14] DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding[J]. arXiv:1810.04805, 2018.
[15] PENNINGTON J, SOCHER R, MANNING C D. GloVe: global vectors for word representation[C]//Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, Doha, Oct 25-29, 2014. Stroudsburg: ACL, 2014: 1532-1543.
[16] KIM Y. Convolutional neural networks for sentence classification[C]//Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, Doha, Oct 25-29, 2014. Stroudsburg: ACL, 2014: 1746-1751.
[17] LEE K, SALANT S, KWIATKOWSKI T, et al. Learning recurrent span representations for extractive question answering[J]. arXiv:1611.01436, 2016.
[18] WU H C, LUK R W P, WONG K F, et al. Interpreting TF-IDF term weights as making relevance decisions[J]. ACM Transactions on Information Systems, 2008, 26(3): 1-37.
[19] SRIVASTAVA R K, GREFF K, SCHMIDHUBER J. Highway networks[J]. arXiv:1505.00387, 2015.
[20] ZHU C G, ZENG M, HUANG X D. SDNet: contextualized attention-based deep network for conversational question answering[J]. arXiv:1812.03593, 2018.
[21] SENNRICH R, HADDOW B, BIRCH A. Neural machine translation of rare words with subword units[C]//Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, Berlin, Aug 7-12, 2016. Stroudsburg: ACL, 2016: 1715-1725.
[22] QU C, YANG L, QIU M, et al. BERT with history answer embedding for conversational question answering[C]//Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, Paris, Jul 21-25, 2019. New York: ACM, 2019: 1133-1136.
[23] QU C, YANG L, QIU M, et al. Attentive history selection for conversational question answering[C]//Proceedings of the 28th ACM International Conference on Information and Knowledge Management, Beijing, Nov 3-7, 2019. New York: ACM, 2019: 1391-1400.
[24] BAHDANAU D, CHO K, BENGIO Y. Neural machine translation by jointly learning to align and translate[J]. arXiv:1409.0473, 2014.
[25] CUI Y, CHEN Z, WEI S, et al. Attention-over-attention neural networks for reading comprehension[C]//Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, Vancouver, Jul 30-Aug 4, 2017. Stroudsburg: ACL, 2017: 593-602.
[26] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[J]. arXiv:1706.03762, 2017.
[27] HOCHREITER S, SCHMIDHUBER J. Long short-term memory[J]. Neural Computation, 1997, 9(8): 1735-1780.
[28] HUANG H, ZHU C, SHEN Y, et al. FusionNet: fusing via fully-aware attention with application to machine comprehension[C]//Proceedings of the 6th International Conference on Learning Representations, Vancouver, Apr 30-May 3, 2018: 66-76.
[29] HUANG H Y, CHOI E, YIH W T. FlowQA: grasping flow in history for conversational machine comprehension[C]//Proceedings of the 7th International Conference on Learning Representations, New Orleans, May 6-9, 2019: 1354-1364.
[30] CHEN Y, WU L F, ZAKI M J. GraphFlow: exploiting conversation flow with graph neural networks for conversational machine comprehension[C]//Proceedings of the 29th International Joint Conference on Artificial Intelligence, Yokohama, Jul 2020: 1452-1462.
[31] CHO K, VAN MERRIENBOER B, GULCEHRE C, et al. Learning phrase representations using RNN encoder-decoder for statistical machine translation[C]//Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, Doha, Oct 25-29, 2014. Stroudsburg: ACL, 2014: 1724-1734.
[32] LI Y, TARLOW D, BROCKSCHMIDT M, et al. Gated graph sequence neural networks[J]. arXiv:1511.05493, 2015.
[33] CHEN D, FISCH A, WESTON J, et al. Reading Wikipedia to answer open-domain questions[C]//Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, Vancouver, Jul 30-Aug 4, 2017. Stroudsburg: ACL, 2017: 1870-1879.
[34] SEE A, LIU P J, MANNING C D. Get to the point: summarization with pointer-generator networks[C]//Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, Vancouver, Jul 30-Aug 4, 2017. Stroudsburg: ACL, 2017: 1073-1083.
[35] YEH Y T, CHEN Y N. FlowDelta: modeling flow information gain in reasoning for conversational machine comprehension[C]//Proceedings of the 2nd Workshop on Machine Reading for Question Answering, Hong Kong, China, Nov 4, 2019. Stroudsburg: ACL, 2019: 86-90.
[36] ZHANG X. MC^2: multi-perspective convolutional cube for conversational machine reading comprehension[C]//Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Jul 28-Aug 2, 2019. Stroudsburg: ACL, 2019: 6185-6190.
[37] OHSUGI Y, SAITO I, NISHIDA K, et al. A simple but effective method to incorporate multi-turn context with BERT for conversational machine comprehension[C]//Proceedings of the 1st Workshop on NLP for Conversational AI, Florence, Aug 1, 2019. Stroudsburg: ACL, 2019: 11-17.
[38] GONG H, SHEN Y, YU D, et al. Recurrent chunking mechanisms for long-text machine reading comprehension[C]//Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Jul 5-10, 2020. Stroudsburg: ACL, 2020: 6751-6761.