[1] 刘艺, 李蒙蒙, 郑奇斌, 等. 视频目标跟踪算法综述[J]. 计算机科学与探索, 2022, 16(7): 1504-1515.
LIU Y, LI M M, ZHENG Q B, et al. Survey on video object tracking algorithms[J]. Journal of Frontiers of Computer Science and Technology, 2022, 16(7): 1504-1515.
[2] 赵运基, 范存良, 张新良. 融合多特征和通道感知的目标跟踪算法[J]. 计算机科学与探索, 2022, 16(6): 1417-1428.
ZHAO Y J, FAN C L, ZHANG X L. Object tracking algorithm with fusion of multi-feature and channel awareness[J]. Journal of Frontiers of Computer Science and Technology, 2022, 16(6): 1417-1428.
[3] 张晶, 黄浩淼. 结合重检测机制的多卷积层特征响应跟踪算法[J]. 计算机科学与探索, 2021, 15(3): 533-544.
ZHANG J, HUANG H M. Multi-convolutional layer feature response tracking algorithm combined with re-detection mechanism[J]. Journal of Frontiers of Computer Science and Technology, 2021, 15(3): 533-544.
[4] 程世龙, 谢林柏, 彭力. 梯度导向的通道选择目标跟踪算法[J]. 计算机科学与探索, 2022, 16(3): 649-660.
CHENG S L, XIE L B, PENG L. Gradient-guided object tracking algorithm with channel selection[J]. Journal of Frontiers of Computer Science and Technology, 2022, 16(3): 649-660.
[5] 李彪, 孙瑾, 李星达, 等. 自适应特征融合的相关滤波跟踪算法[J]. 计算机工程与应用, 2022, 58(9): 208-218.
LI B, SUN J, LI X D, et al. Correlation filter target tracking based on adaptive multi-feature fusion[J]. Computer Engineering and Applications, 2022, 58(9): 208-218.
[6] 茅正冲, 陈海东. 自适应尺度的上下文感知相关滤波跟踪算法[J]. 计算机工程与应用, 2021, 57(3): 168-174.
MAO Z C, CHEN H D. Adaptive scale context-aware correlation filter tracking algorithm[J]. Computer Engineering and Applications, 2021, 57(3): 168-174.
[7] 张艳琳, 钱小燕, 张淼, 等. 自适应多特征融合相关滤波目标跟踪[J]. 中国图象图形学报, 2020, 25(6): 1160-1170.
ZHANG Y L, QIAN X Y, ZHANG M, et al. Correlation filter target tracking algorithm based on adaptive multi-feature fusion[J]. Journal of Image and Graphics, 2020, 25(6): 1160-1170.
[8] BERTINETTO L, VALMADRE J, HENRIQUES J F, et al. Fully-convolutional siamese networks for object tracking[C]//LNCS 9914: Proceedings of the 14th European Conference on Computer Vision, Amsterdam, Oct 8-16, 2016. Cham: Springer, 2016: 850-865.
[9] KRIZHEVSKY A, SUTSKEVER I, HINTON G E. ImageNet classification with deep convolutional neural networks[J]. Communications of the ACM, 2017, 60(6): 84-90.
[10] LI B, YAN J J, WU W, et al. High performance visual tracking with siamese region proposal network[C]//Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, Jun 18-22, 2018. Washington: IEEE Computer Society, 2018: 8971-8980.
[11] LI B, WU W, WANG Q, et al. SiamRPN++: evolution of siamese visual tracking with very deep networks[C]//Proceedings of the 2019 IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, Jun 16-20, 2019. Piscataway: IEEE, 2019: 4282-4291.
[12] HE K M, ZHANG X Y, REN S Q, et al. Deep residual learning for image recognition[C]//Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, Jun 27-30, 2016. Washington: IEEE Computer Society, 2016: 770-778.
[13] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]//Advances in Neural Information Processing Systems, Long Beach, Dec 4-9, 2017: 5998-6008.
[14] CHEN X, YAN B, ZHU J W, et al. Transformer tracking[C]//Proceedings of the 2021 IEEE Conference on Computer Vision and Pattern Recognition, Jun 19-25, 2021. Piscataway: IEEE, 2021: 8126-8135.
[15] WANG N, ZHOU W G, WANG J, et al. Transformer meets tracker: exploiting temporal context for robust visual tracking[C]//Proceedings of the 2021 IEEE Conference on Computer Vision and Pattern Recognition, Jun 19-25, 2021. Piscataway: IEEE, 2021: 1571-1580.
[16] BHAT G, DANELLJAN M, GOOL L V, et al. Learning discriminative model prediction for tracking[C]//Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision, Seoul, Oct 27-Nov 2, 2019. Piscataway: IEEE, 2019: 6182-6191.
[17] YAN B, PENG H W, FU J L, et al. Learning spatio-temporal transformer for visual tracking[C]//Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision, Montreal, Oct 10-17, 2021. Piscataway: IEEE, 2021: 10428-10437.
[18] LIU Z, LIN Y T, CAO Y, et al. Swin transformer: hierarchical vision transformer using shifted windows[C]//Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision, Montreal, Oct 10-17, 2021. Piscataway: IEEE, 2021: 9992-10002.
[19] LIN L T, FAN H, XU Y, et al. SwinTrack: a simple and strong baseline for transformer tracking[J]. arXiv:2112.00995, 2021.
[20] GAO S Y, ZHOU C L, MA C, et al. AiATrack: attention in attention for transformer visual tracking[J]. arXiv:2207.09603, 2022.
[21] CUI Y T, JIANG C, WANG L M, et al. MixFormer: end-to-end tracking with iterative mixed attention[J]. arXiv:2203.11082, 2022.
[22] GLOROT X, BORDES A, BENGIO Y, et al. Deep sparse rectifier neural networks[C]//Proceedings of the 14th International Conference on Artificial Intelligence and Statistics, Fort Lauderdale, Apr 11-13, 2011: 315-323.
[23] YAN B, ZHANG X Y, WANG D, et al. Alpha-refine: boosting tracking performance by precise bounding box estimation[C]//Proceedings of the 2021 IEEE Conference on Computer Vision and Pattern Recognition, Jun 19-25, 2021. Piscataway: IEEE, 2021: 5289-5298.
[24] REZATOFIGHI H, TSOI N, GWAK J, et al. Generalized intersection over union: a metric and a loss for bounding box regression[C]//Proceedings of the 2019 IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, Jun 16-20, 2019. Piscataway: IEEE, 2019: 658-666.
[25] WU Y, LIM J, YANG M H. Online object tracking: a benchmark[C]//Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition, Portland, Jun 23-28, 2013. Washington: IEEE Computer Society, 2013: 2411-2418.
[26] HUANG L, ZHAO X, HUANG K. GOT-10k: a large high diversity benchmark for generic object tracking in the wild[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021, 43(5): 1562-1577.
[27] FAN H, LIN L, YANG F, et al. LaSOT: a high-quality benchmark for large-scale single object tracking[C]//Proceedings of the 2019 IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, Jun 16-20, 2019. Piscataway: IEEE, 2019: 5374-5383.
[28] LOSHCHILOV I, HUTTER F. Decoupled weight decay regularization[J]. arXiv:1711.05101, 2017.
[29] JIANG B R, LUO R X, MAO J Y, et al. Acquisition of localization confidence for accurate object detection[C]//LNCS 11218: Proceedings of the 15th European Conference on Computer Vision, Munich, Sep 8-14, 2018. Cham: Springer, 2018: 816-832.
[30] WANG Q, ZHANG L, BERTINETTO L, et al. Fast online object tracking and segmentation: a unifying approach[C]// Proceedings of the 2019 IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, Jun 16-20, 2019. Piscataway: IEEE, 2019: 1328-1338.
[31] DANELLJAN M, BHAT G, KHAN F S, et al. ATOM: accurate tracking by overlap maximization[C]//Proceedings of the 2019 IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, Jun 16-20, 2019. Piscataway: IEEE, 2019: 4655-4664.
[32] XU Y D, WANG Z Y, LI Z X, et al. SiamFC++: towards robust and accurate visual tracking with target estimation guidelines[C]//Proceedings of the 34th AAAI Conference on Artificial Intelligence, the 32nd Innovative Applications of Artificial Intelligence Conference, the 10th AAAI Symposium on Educational Advances in Artificial Intelligence, New York, Feb 7-12, 2020. Menlo Park: AAAI, 2020: 12549-12556.
[33] ZHANG Z P, PENG H W, FU J L, et al. Ocean: object-aware anchor-free tracking[C]//LNCS 12366: Proceedings of the 16th European Conference on Computer Vision, Glasgow, Aug 23-28, 2020. Cham: Springer, 2020: 771-787.