[1] 汤一明, 刘玉菲, 黄鸿. 视觉单目标跟踪算法综述[J]. 测控技术, 2020, 39(8): 21-34.
TANG Y M, LIU Y F, HUANG H. Survey of single-target visual tracking algorithms[J]. Measurement & Control Technology, 2020, 39(8): 21-34.
[2] HENRIQUES J F, CASEIRO R, MARTINS P, et al. High-speed tracking with kernelized correlation filters[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37(3): 583-596.
[3] BERTINETTO L, VALMADRE J, HENRIQUES J F, et al. Fully-convolutional siamese networks for object tracking[C]//LNCS 9914: Proceedings of the 14th ECCV Workshops on Computer Vision, Amsterdam, Oct 8-10, 15-16, 2016. Cham: Springer, 2016: 850-865.
[4] HE A F, LUO C, TIAN X M, et al. A twofold siamese network for real-time object tracking[C]//Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, Jun 18-23, 2018. Washington: IEEE Computer Society, 2018: 4834-4843.
[5] ZHANG Z P, PENG H W. Deeper and wider siamese networks for real-time visual tracking[C]//Proceedings of the 2019 IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, Jun 15-21, 2019. Piscataway: IEEE, 2019: 4591-4600.
[6] LI D D, WEN G J, KUAI Y L, et al. End-to-end feature integration for correlation filter tracking with channel attention[J]. IEEE Signal Processing Letters, 2018, 25(12): 1815-1819.
[7] WANG Q, TENG Z, XING J, et al. Learning attentions: residual attentional siamese network for high performance online visual tracking[C]//Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, Jun 18-23, 2018. Washington: IEEE Computer Society, 2018: 4854-4863.
[8] YU Y C, XIONG Y L, HUANG W L, et al. Deformable siamese attention networks for visual object tracking[C]//Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, Jun 13-19, 2020. Piscataway: IEEE, 2020: 6727-6736.
[9] MA N, ZHANG X, ZHENG H T, et al. ShuffleNetV2: practical guidelines for efficient CNN architecture design[C]//LNCS 11218: Proceedings of the 15th European Conference on Computer Vision, Munich, Sep 8-14, 2018. Cham: Springer, 2018: 122-138.
[10] SANDLER M, HOWARD A, ZHU M, et al. MobileNetV2: inverted residuals and linear bottlenecks[C]//Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, Jun 18-23, 2018. Washington: IEEE Computer Society, 2018: 4510-4520.
[11] KRIZHEVSKY A, SUTSKEVER I, HINTON G E. ImageNet classification with deep convolutional neural networks[J]. Communications of the ACM, 2017, 60(6): 84-90.
[12] HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition[C]//Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, Jun 27-30, 2016. Washington: IEEE Computer Society, 2016: 770-778.
[13] SIMONYAN K, ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[J]. arXiv:1409.1556, 2014.
[14] HU J, SHEN L, SUN G. Squeeze-and-excitation networks[C]//Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, Jun 18-23, 2018. Washington: IEEE Computer Society, 2018: 7132-7141.
[15] WOO S, PARK J, LEE J, et al. CBAM: convolutional block attention module[C]//LNCS 11211: Proceedings of the 15th European Conference on Computer Vision, Munich, Sep 8-14, 2018. Cham: Springer, 2018: 3-19.
[16] PARK J, WOO S, LEE J, et al. BAM: bottleneck attention module[J]. arXiv:1807.06514, 2018.
[17] AGARAP A F. Deep learning using rectified linear units (ReLU)[J]. arXiv:1803.08375, 2018.
[18] HE K, ZHANG X, REN S, et al. Delving deep into rectifiers: surpassing human-level performance on ImageNet classification[J]. arXiv:1502.01852, 2015.
[19] DUMOULIN V, VISIN F. A guide to convolution arithmetic for deep learning[J]. arXiv:1603.07285, 2016.
[20] MA C, HUANG J, YANG X, et al. Hierarchical convolutional features for visual tracking[C]//Proceedings of the 2015 IEEE International Conference on Computer Vision, Santiago, Dec 7-13, 2015. Washington: IEEE Computer Society, 2015: 3074-3082.
[21] LI B, YAN J, WU W, et al. High performance visual tracking with siamese region proposal network[C]//Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, Jun 18-23, 2018. Washington: IEEE Computer Society, 2018: 8971-8980.
[22] YANG T Y, CHAN A B. Learning dynamic memory networks for object tracking[C]//LNCS 11213: Proceedings of the 15th European Conference on Computer Vision, Munich, Sep 8-14, 2018. Cham: Springer, 2018: 153-169.
[23] WANG N, SONG Y B, MA C, et al. Unsupervised deep tracking[C]//Proceedings of the 2019 IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, Jun 15-21, 2019. Piscataway: IEEE, 2019: 1308-1317.
[24] WANG Q, GAO J, XING J, et al. DCFNet: discriminant correlation filters network for visual tracking[J]. arXiv:1704.04057, 2017.
[25] DONG X P, SHEN J B. Triplet loss in siamese network for object tracking[C]//LNCS 11217: Proceedings of the 15th European Conference on Computer Vision, Munich, Sep 8-14, 2018. Cham: Springer, 2018: 472-488.
[26] VALMADRE J, BERTINETTO L, HENRIQUES J F, et al. End-to-end representation learning for correlation filter based tracking[C]//Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, Jul 21-26, 2017. Washington: IEEE Computer Society, 2017: 5000-5008.
[27] GUO Q, FENG W, ZHOU C, et al. Learning dynamic siamese network for visual object tracking[C]//Proceedings of the 2017 IEEE International Conference on Computer Vision, Venice, Oct 22-29, 2017. Washington: IEEE Computer Society, 2017: 1781-1789.
[28] ABDELPAKEY M, SHEHATA M, MOHAMED M. DensSiam: end-to-end densely-siamese network with self-attention model for object tracking[J]. arXiv:1809.02714, 2018.
[29] BERTINETTO L, VALMADRE J, GOLODETZ S, et al. Staple: complementary learners for real-time tracking[C]//Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, Jun 27-30, 2016. Washington: IEEE Computer Society, 2016: 1401-1409.