[1] GHOSAL D, AKHTAR M S, CHAUHAN D S, et al. Contextual inter-modal attention for multi-modal sentiment analysis[C]// Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Oct 31-Nov 4, 2018. Stroudsburg: ACL, 2018: 3454-3466.
[2] LIN M H, MENG Z Q. Multimodal sentiment analysis based on attention neural network[J]. Computer Science, 2020, 47(S2): 508-514.
[3] LIU J M, ZHANG P X, LIU Y, et al. Summary of multi-modal sentiment analysis technology[J]. Journal of Frontiers of Computer Science and Technology, 2021, 15(7): 1165-1182.
[4] HE J, ZHANG C Q, LI X Z, et al. Survey of research on multimodal fusion technology for deep learning[J]. Computer Engineering, 2020, 46(5): 1-11.
[5] PORIA S, CAMBRIA E, HAZARIKA D, et al. Multi-level multiple attentions for contextual multimodal sentiment analysis[C]// Proceedings of the 2017 IEEE International Conference on Data Mining, New Orleans, Nov 18-21, 2017. Washington: IEEE Computer Society, 2017: 1033-1038.
[6] KUMAR A, VEPA J. Gated mechanism for attention based multi modal sentiment analysis[C]// Proceedings of the 2020 IEEE International Conference on Acoustics, Speech and Signal Processing, Barcelona, May 4-8, 2020. Piscataway: IEEE, 2020: 4477-4481.
[7] LIN Z, FENG M, DOS SANTOS C N, et al. A structured self-attentive sentence embedding[J]. arXiv:1703.03130, 2017.
[8] ZADEH A, ZELLERS R, PINCUS E, et al. Multimodal sentiment intensity analysis in videos: facial gestures and verbal messages[J]. IEEE Intelligent Systems, 2016, 31(6): 82-88.
[9] CHEN M H, WANG S, LIANG P P, et al. Multimodal sentiment analysis with word-level fusion and reinforcement learning[C]// Proceedings of the 19th ACM International Conference on Multimodal Interaction, Glasgow, Nov 13-17, 2017. New York: ACM, 2017: 163-171.
[10] ZHANG Y Z, RONG L, SONG D W, et al. A survey on multimodal sentiment analysis[J]. Pattern Recognition and Artificial Intelligence, 2020, 33(5): 426-438.
[11] ZADEH A, CHEN M H, PORIA S, et al. Tensor fusion network for multimodal sentiment analysis[C]// Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, Copenhagen, Sep 9-11, 2017. Stroudsburg: ACL, 2017: 1103-1114.
[12] ZADEH A, LIANG P P, PORIA S, et al. Multimodal language analysis in the wild: CMU-MOSEI dataset and interpretable dynamic fusion graph[C]// Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, Melbourne, Jul 15-20, 2018. Stroudsburg: ACL, 2018: 2236-2246.
[13] PORIA S, CAMBRIA E, HAZARIKA D, et al. Context-dependent sentiment analysis in user-generated videos[C]// Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, Vancouver, Jul 30-Aug 4, 2017. Stroudsburg: ACL, 2017: 873-883.
[14] MAJUMDER N, PORIA S, HAZARIKA D, et al. DialogueRNN: an attentive RNN for emotion detection in conversations[C]// Proceedings of the 2019 AAAI Conference on Artificial Intelligence, Honolulu, Jan 27-Feb 1, 2019. Palo Alto: AAAI, 2019: 6818-6825.
[15] SHENOY A, SARDANA A. Multilogue-Net: a context aware RNN for multi-modal emotion detection and sentiment analysis in conversation[C]// Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Seattle, Jul 5-10, 2020. Stroudsburg: ACL, 2020: 19-28.
[16] KIM T, LEE B. Multi-attention multimodal sentiment analysis[C]// Proceedings of the 2020 International Conference on Multimedia Retrieval, Dublin, Jun 8-11, 2020. New York: ACM, 2020: 436-441.
[17] ZADEH A, LIANG P P, PORIA S, et al. Multi-attention recurrent network for human communication comprehension[C]// Proceedings of the 2018 AAAI Conference on Artificial Intelligence, New Orleans, Feb 2-7, 2018. Palo Alto: AAAI, 2018: 5642-5649.
[18] XI C, LU G M, YAN J J. Multimodal sentiment analysis based on multi-head attention mechanism[C]// Proceedings of the 4th International Conference on Machine Learning and Soft Computing, Haiphong City, Jan 17-19, 2020. New York: ACM, 2020: 34-39.
[19] VERMA S, WANG J W, GE Z F, et al. Deep-HOSeq: deep higher order sequence fusion for multimodal sentiment analysis[J]. arXiv:2010.08218, 2020.
[20] TACHIBANA H, UENOYAMA K, AIHARA S. Efficiently trainable text-to-speech system based on deep convolutional networks with guided attention[C]// Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing, Calgary, Apr 15-20, 2018. Piscataway: IEEE, 2018: 4784-4788.
[21] EYBEN F, WÖLLMER M, SCHULLER B W. openSMILE: the Munich versatile and fast open-source audio feature extractor[C]// Proceedings of the 18th ACM International Conference on Multimedia, Firenze, Oct 25-29, 2010. New York: ACM, 2010: 1459-1462.