[1] GOU L, ZHOU M X, YANG H H. KnowMe and ShareMe: understanding automatically discovered personality traits from social media and user sharing preferences[C]//Proceedings of the CHI Conference on Human Factors in Computing Systems, Toronto, Apr 26-May 1, 2014. New York: ACM, 2014: 955-964.
[2] TAUSCZIK Y R, PENNEBAKER J W. The psychological meaning of words: LIWC and computerized text analysis methods[J]. Journal of Language and Social Psychology, 2010, 29(1): 24-54.
[3] BARRICK M R, MOUNT M K. The big five personality dimensions and job performance: a meta-analysis[J]. Personnel Psychology, 1991, 44(1): 1-26.
[4] FORD J K. Brands laid bare: using market research for evidence-based brand management[M]. Hoboken: John Wiley & Sons, 2005.
[5] HINTON G E, OSINDERO S, TEH Y W. A fast learning algorithm for deep belief nets[J]. Neural Computation, 2006, 18(7): 1527-1554.
[6] HE K M, ZHANG X Y, REN S Q, et al. Deep residual learning for image recognition[C]//Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, Jun 27-30, 2016. Washington: IEEE Computer Society, 2016: 770-778.
[7] XING J, HEEGER D J. Measurement and modeling of center-surround suppression and enhancement[J]. Vision Research, 2001, 41(5): 571-583.
[8] COLLOBERT R, WESTON J. A unified architecture for natural language processing: deep neural networks with multitask learning[C]//Proceedings of the 25th International Conference on Machine Learning, Helsinki, Jun 5-9, 2008. New York: ACM, 2008: 160-167.
[9] KIM Y. Convolutional neural networks for sentence classification[J]. arXiv:1408.5882, 2014.
[10] LAI S W, XU L H, LIU K, et al. Recurrent convolutional neural networks for text classification[C]//Proceedings of the 29th AAAI Conference on Artificial Intelligence, Austin, Jan 25-30, 2015. Menlo Park: AAAI, 2015: 2267-2273.
[11] DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding[J]. arXiv:1810.04805, 2018.
[12] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]//Proceedings of the 30th Annual Conference on Neural Information Processing Systems, Long Beach, Dec 4-9, 2017. Red Hook: Curran Associates, 2017: 5998-6008.
[13] YANG Z C, YANG D Y, DYER C, et al. Hierarchical attention networks for document classification[C]//Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego, Jun 12-17, 2016. Stroudsburg: ACL, 2016: 1480-1489.
[14] WANG B, ZHANG X, ZHOU X, et al. A gated dilated convolution with attention model for clinical cloze-style reading comprehension[J]. International Journal of Environmental Research and Public Health, 2020, 17: 1323.
[15] ADHIKARI A, RAM A, TANG R, et al. Docbert: BERT for document classification[J]. arXiv:1904.08398, 2019.
[16] KWANTES P J, DERBENTSEVA N, LAM Q, et al. Assessing the big five personality traits with latent semantic analysis[J]. Personality and Individual Differences, 2016, 102: 229-233.
[17] WEI H H, ZHANG F Z, YUAN N J, et al. Beyond the words: predicting user personality from heterogeneous information[C]//Proceedings of the 10th ACM International Conference on Web Search and Data Mining, Cambridge, Feb 6-10, 2017. New York: ACM, 2017: 305-314.
[18] MAJUMDER N, PORIA S, GELBUKH A, et al. Deep learning-based document modeling for personality detection from text[J]. IEEE Intelligent Systems, 2017, 32(2): 74-79.
[19] SUN C, QIU X, XU Y, et al. How to fine-tune BERT for text classification?[C]//LNCS 11856: Proceedings of the 18th China National Conference on Chinese Computational Linguistics, Kunming, Oct 18-20, 2019. Berlin, Heidelberg: Springer, 2019: 194-206.
[20] FERNáNDEZ-DELGADO M, CERNADAS E, BARRO S, et al. Do we need hundreds of classifiers to solve real world classification problems?[J]. The Journal of Machine Learning Research, 2014, 15(1): 3133-3181. |