[1] ZHU Y, ZHAO J K, WANG Y N, et al. A review of human action recognition based on deep learning[J]. Acta Automatica Sinica, 2016, 42(6): 848-857.
朱煜, 赵江坤, 王逸宁, 等. 基于深度学习的人体行为识别算法综述[J]. 自动化学报, 2016, 42(6): 848-857.
[2] GKIOXARI G, GIRSHICK R B, DOLLAR P, et al. Detecting and recognizing human-object interactions[C]//Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, Jun 18-22, 2018. Washington: IEEE Computer Society, 2018: 8359-8367.
[3] QI S Y, WANG W G, JIA B X, et al. Learning human-object interactions by graph parsing neural networks[C]//LNCS 11213: Proceedings of the 15th European Conference on Computer Vision, Munich, Sep 8-14, 2018. Cham: Springer, 2018: 407-423.
[4] VINYALS O, BLUNDELL C, LILLICRAP T P, et al. Matching networks for one shot learning[C]//Proceedings of the 30th Annual Conference on Neural Information Processing Systems, Barcelona, Dec 5-10, 2016. Red Hook: Curran Associates, 2016: 3630-3638.
[5] KOCH G, ZEMEL R, SALAKHUTDINOV R. Siamese neural networks for one-shot image recognition[C]//Proceedings of the 32nd International Conference on Machine Learning, Lille, Jul 6-11, 2015. Stroudsburg: ACL, 2015: 1-8.
[6] SUNG F, YANG Y X, ZHANG L, et al. Learning to compare: relation network for few-shot learning[C]//Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, Jun 18-22, 2018. Washington: IEEE Computer Society, 2018: 1199-1208.
[7] SNELL J, SWERSKY K, ZEMEL R S. Prototypical networks for few-shot learning[C]//Proceedings of the 2017 Annual Conference on Neural Information Processing Systems, Long Beach, Dec 4-9, 2017. Red Hook: Curran Associates, 2017: 4077-4087.
[8] FINN C, ABBEEL P, LEVINE S. Model-agnostic meta-learning for fast adaptation of deep networks[C]//Proceedings of the 34th International Conference on Machine Learning, Sydney, Aug 6-11, 2017: 1126-1135.
[9] NICHOL A, ACHIAM J, SCHULMAN J. On first-order meta-learning algorithms[J]. arXiv:1803.02999, 2018.
[10] WANG F, JIANG M Q, QIAN C, et al. Residual attention network for image classification[C]//Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, Jul 21-26, 2017. Washington: IEEE Computer Society, 2017: 6450-6458.
[11] WU L, WANG Y, LI X, et al. Deep attention-based spatially recursive networks for fine-grained visual recognition[J]. IEEE Transactions on Cybernetics, 2019, 49(5): 1791-1802.
[12] WU S S, YANG J F, SHAN Y, et al. Research on generative adversarial networks using twins attention mechanism[J]. Journal of Frontiers of Computer Science and Technology, 2020, 14(5): 833-840.
武随烁, 杨金福, 单义, 等. 使用孪生注意力机制的生成对抗网络的研究[J]. 计算机科学与探索, 2020, 14(5): 833-840.
[13] FU J, LIU J, TIAN H J, et al. Dual attention network for scene segmentation[C]//Proceedings of the 2019 IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, Jun 16-20, 2019. Piscataway: IEEE, 2019: 3146-3154.
[14] WEI Y C, FENG J S, LIANG X D, et al. Object region mining with adversarial erasing: a simple classification to semantic segmentation approach[C]//Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, Jul 21-26, 2017. Washington: IEEE Computer Society, 2017: 6488-6496.
[15] JI Z, XIONG K L, PANG Y W, et al. Video summarization with attention-based encoder-decoder networks[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2020, 30(6): 1709-1717.
[16] YU Y L, JI Z, FU Y W, et al. Stacked semantic-guided attention model for fine-grained zero-shot learning[C]//Proceedings of the Annual Conference on Neural Information Processing Systems 2018, Montréal, Dec 3-8, 2018. Red Hook: Curran Associates, 2018: 5998-6007.
[17] XING C, ROSTAMZADEH N, ORESHKIN B N, et al. Adaptive cross-modal few-shot learning[J]. arXiv:1902.07104, 2019.
[18] JI Z, LI H H, HE Y Q. Zero-shot multi-label image classification based on deep instance differentiation[J]. Journal of Frontiers of Computer Science and Technology, 2019, 13(1): 97-105.
冀中, 李慧慧, 何宇清. 基于深度示例差异化的零样本多标签图像分类[J]. 计算机科学与探索, 2019, 13(1): 97-105.
[19] JI Z, WANG H R, YU Y L, et al. A decadal survey of zero-shot image classification[J]. Scientia Sinica Informationis, 2019, 49(10): 1299-1320.
冀中, 汪浩然, 于云龙, 等. 零样本图像分类综述: 十年进展[J]. 中国科学: 信息科学, 2019, 49(10): 1299-1320.
[20] CHAO Y W, WANG Z, HE Y G, et al. HICO: a benchmark for recognizing human-object interactions in images[C]//Proceedings of the 2015 IEEE International Conference on Computer Vision, Santiago, Dec 7-13, 2015. Washington: IEEE Computer Society, 2015: 1017-1025.
[21] LE D T, UIJLINGS J R R, BERNARDI R. TUHOI: Trento universal human object interaction dataset[C]//Proceedings of the 3rd Workshop on Vision and Language, Dublin, Aug 23, 2014. Stroudsburg: ACL, 2014: 17-24.
[22] HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition[C]//Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, Jun 27-30, 2016. Washington: IEEE Computer Society, 2016: 770-778.
[23] MIKOLOV T, SUTSKEVER I, CHEN K, et al. Distributed representations of words and phrases and their compositionality[C]//Proceedings of the 27th Annual Conference on Neural Information Processing Systems 2013, Lake Tahoe, Dec 5-8, 2013. Red Hook: Curran Associates, 2013: 3111-3119.
[24] LI W B, WANG L, XU J L, et al. Revisiting local descriptor based image-to-class measure for few-shot learning[C]// Proceedings of the 2019 IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, Jun 16-20, 2019. Piscataway: IEEE, 2019: 7260-7268.
[25] LIU L, ZHOU T Y, LONG G D, et al. Learning to propagate for graph meta-learning[C]//Proceedings of the 33rd Annual Conference on Neural Information Processing Systems, Vancouver, Dec 8-14, 2019. Red Hook: Curran Associates, 2019: 1037-1048.
[26] LI H Y, DONG W M, MEI X, et al. LGM-Net: learning to generate matching networks for few shot learning[C]//Proceedings of the 36th International Conference on Machine Learning, Long Beach, Jun 10-15, 2019. New York: ACM, 2019: 3825-3834.