Journal of Frontiers of Computer Science and Technology, 2022, Vol. 16, Issue (6): 1417-1428. DOI: 10.3778/j.issn.1673-9418.2011057
Object Tracking Algorithm with Fusion of Multi-feature and Channel Awareness

ZHAO Yunji, FAN Cunliang, ZHANG Xinliang

Received: 2020-11-20
Revised: 2021-02-05
Online: 2022-06-01
Published: 2021-03-08
Corresponding author: E-mail: 532338283@qq.com
About author: ZHAO Yunji, born in 1980 in Nanyang, Henan, Ph.D., lecturer. His research interests include pattern recognition and intelligent control.
Abstract: To address the drift and overfitting that arise when deep features are used to describe the target during tracking, an object tracking algorithm fusing multiple features with channel awareness is proposed. A pre-trained model is applied to extract deep features of the tracked target; correlation filters are constructed from these features, the weight coefficient of the filter corresponding to each channel is computed, and the feature channels are screened according to these weights. For the retained channels, a statistical feature is generated by computing the standard deviation and fused with the original features; correlation filters are then built on the fused features and correlation is performed to obtain the response map, from which the target's position and scale are determined. Finally, the filter built on the fused features is sparsely updated online using the deep features of the tracking result region. The proposed algorithm and several current mainstream trackers are tested on the public OTB100, VOT2015 and VOT2016 datasets. Compared with UDT, the algorithm achieves stronger robustness and higher tracking precision without sacrificing tracking speed. Experimental results show that the proposed algorithm remains robust under challenges such as scale variation, fast motion and background clutter.
Citation: ZHAO Yunji, FAN Cunliang, ZHANG Xinliang. Object Tracking Algorithm with Fusion of Multi-feature and Channel Awareness[J]. Journal of Frontiers of Computer Science and Technology, 2022, 16(6): 1417-1428.
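The abstract describes building correlation filters from deep features and locating the target at the peak of a response map. The sketch below is a minimal single-frame illustration of that idea in Python/NumPy, using the standard ridge-regression (MOSSE/KCF-style) formulation in the Fourier domain; it is not the paper's exact filter, and the label width, regularization weight and feature sizes are placeholder values.

```python
import numpy as np

def gaussian_label(shape, sigma=2.0):
    """2-D Gaussian regression target with its peak shifted to (0, 0)."""
    h, w = shape
    yy, xx = np.meshgrid(np.arange(h) - h // 2, np.arange(w) - w // 2, indexing="ij")
    g = np.exp(-0.5 * (yy ** 2 + xx ** 2) / sigma ** 2)
    return np.roll(g, (-(h // 2), -(w // 2)), axis=(0, 1))

def train_filter(feat, y, lam=1e-3):
    """Ridge-regression correlation filter in the Fourier domain.
    feat: (H, W, C) feature map of the target region; y: (H, W) Gaussian label."""
    Y = np.fft.fft2(y)
    X = np.fft.fft2(feat, axes=(0, 1))
    denom = np.sum(X * np.conj(X), axis=2) + lam          # denominator shared over channels
    return np.conj(X) * Y[..., None] / denom[..., None]   # (H, W, C) per-channel filter

def response_map(filt, feat):
    """Correlate the filter with a search-region feature map, summing over channels."""
    Z = np.fft.fft2(feat, axes=(0, 1))
    return np.real(np.fft.ifft2(np.sum(filt * Z, axis=2)))

# Toy usage: the location of the response peak gives the estimated translation.
feat = np.random.rand(64, 64, 32)
filt = train_filter(feat, gaussian_label((64, 64)))
resp = response_map(filt, feat)
dy, dx = np.unravel_index(np.argmax(resp), resp.shape)
```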
Table 1 Tracking performance of different update intervals on VOT2015

| Update intervals | Accuracy | Robustness | EAO | FPS |
| --- | --- | --- | --- | --- |
| 1 | 0.521 4 | 35.318 3 | 0.192 8 | 8.268 0 |
| 3 | 0.524 5 | 34.216 5 | 0.203 6 | 9.145 6 |
| 5 | 0.533 4 | 34.037 0 | 0.202 6 | 9.691 8 |
| 7 | 0.531 0 | 34.611 9 | 0.201 5 | 11.410 4 |
| 9 | 0.526 1 | 34.724 1 | 0.199 7 | 13.788 5 |
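Table 1 (and Table 4 below, for VOT2016) varies the interval of the "sparse online update" mentioned in the abstract. A minimal sketch of what an interval-based update could look like, assuming the common linear-interpolation update rule used by many DCF trackers; the paper's exact update scheme is not given in this excerpt, and `interval` and `lr` are hypothetical parameters:

```python
import numpy as np

def update_filter(filt_old, filt_new, frame_idx, interval=5, lr=0.02):
    """Refresh the model only every `interval` frames; otherwise keep the old filter."""
    if frame_idx % interval != 0:
        return filt_old
    # Linear-interpolation update commonly used by correlation-filter trackers.
    return (1.0 - lr) * filt_old + lr * filt_new

# Toy usage over a short sequence of per-frame filters.
filt = np.zeros((64, 64, 32), dtype=complex)
for t in range(1, 21):
    filt_t = np.fft.fft2(np.random.rand(64, 64, 32), axes=(0, 1))  # stand-in for a newly trained filter
    filt = update_filter(filt, filt_t, t, interval=5, lr=0.02)
```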
Table 2 Performance analysis of different channel numbers on VOT2015

| Reserved channels | Accuracy | Robustness | EAO | FPS |
| --- | --- | --- | --- | --- |
| 8 | 0.522 5 | 39.217 0 | 0.181 3 | 12.719 0 |
| 16 | 0.539 5 | 33.137 6 | 0.206 4 | 11.481 0 |
| 20 | 0.528 4 | 31.452 6 | 0.207 4 | 10.901 6 |
| 24 | 0.531 3 | 35.177 8 | 0.200 3 | 10.532 0 |
| 28 | 0.536 3 | 33.665 0 | 0.209 2 | 9.830 0 |
| 32 | 0.533 4 | 34.037 0 | 0.202 6 | 9.691 8 |
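Table 2 varies how many feature channels are kept after channel-aware screening. The abstract states that per-channel filter weights are computed and channels are selected by those weights, but the weight definition is not given in this excerpt; the sketch below assumes one plausible criterion (the peak response of each channel's filter) purely for illustration.

```python
import numpy as np

def channel_weights(filters, feat):
    """Per-channel weight = peak of that channel's response (one plausible reliability measure)."""
    Z = np.fft.fft2(feat, axes=(0, 1))
    resp = np.real(np.fft.ifft2(filters * Z, axes=(0, 1)))  # (H, W, C) per-channel responses
    return resp.max(axis=(0, 1))

def select_channels(feat, weights, keep=16):
    """Keep the `keep` channels with the largest weights."""
    idx = np.argsort(weights)[::-1][:keep]
    return feat[..., idx], idx

# Toy usage: screen a 32-channel deep feature map down to 16 channels.
feat = np.random.rand(64, 64, 32)
filters = np.fft.fft2(np.random.rand(64, 64, 32), axes=(0, 1))  # stand-in per-channel filters
weights = channel_weights(filters, feat)
feat_kept, kept_idx = select_channels(feat, weights, keep=16)   # Tables 2 and 5 report 8-32 retained channels
```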
Table 3 Performance analysis of different fusion modes on VOT2015

| Fusion mode | Accuracy | Robustness | EAO | FPS |
| --- | --- | --- | --- | --- |
| No fusion | 0.539 5 | 33.137 6 | 0.206 4 | 11.481 0 |
| Range | 0.520 7 | 30.358 6 | 0.207 1 | 11.339 6 |
| Mean | 0.541 7 | 32.807 5 | 0.201 8 | 10.423 0 |
| Standard deviation | 0.545 4 | 32.599 6 | 0.207 8 | 10.220 0 |
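Table 3 (and Table 6 for VOT2016) compares fusing a statistical feature computed by range, mean, or standard deviation with the original features; the standard deviation gives the best accuracy and is the statistic named in the abstract. A minimal sketch of such statistic-based fusion, assuming the statistic is taken across channels at each spatial location and appended as one extra channel (the exact fusion operation is not specified in this excerpt):

```python
import numpy as np

def fuse_statistic(feat, mode="std"):
    """Append a per-pixel statistic computed across channels as one extra feature channel."""
    if mode == "std":
        stat = feat.std(axis=2)                     # standard deviation (variant named in the abstract)
    elif mode == "mean":
        stat = feat.mean(axis=2)
    elif mode == "range":
        stat = feat.max(axis=2) - feat.min(axis=2)  # peak-to-peak range
    else:
        raise ValueError(f"unknown fusion mode: {mode}")
    return np.concatenate([feat, stat[..., None]], axis=2)

# Toy usage: 16 retained channels plus one statistical channel.
feat = np.random.rand(64, 64, 16)
fused = fuse_statistic(feat, mode="std")            # shape (64, 64, 17)
```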
Table 4 Tracking performance of different update intervals on VOT2016

| Update intervals | Accuracy | Robustness | EAO | FPS |
| --- | --- | --- | --- | --- |
| 1 | 0.502 5 | 36.218 2 | 0.192 6 | 7.268 0 |
| 3 | 0.521 3 | 35.681 2 | 0.193 6 | 9.145 6 |
| 5 | 0.518 0 | 34.957 8 | 0.194 7 | 8.506 0 |
| 7 | 0.515 9 | 35.235 9 | 0.188 2 | 10.410 4 |
| 9 | 0.516 1 | 35.896 8 | 0.185 3 | 11.378 5 |
Table 5 Performance analysis of different channel numbers on VOT2016

| Reserved channels | Accuracy | Robustness | EAO | FPS |
| --- | --- | --- | --- | --- |
| 8 | 0.484 4 | 39.522 9 | 0.175 5 | 12.827 8 |
| 16 | 0.519 5 | 34.097 2 | 0.203 0 | 11.348 8 |
| 20 | 0.506 4 | 35.913 9 | 0.198 3 | 11.278 8 |
| 24 | 0.507 1 | 31.595 0 | 0.205 9 | 10.868 0 |
| 28 | 0.519 1 | 34.370 3 | 0.200 9 | 10.605 2 |
| 32 | 0.518 0 | 34.957 8 | 0.194 7 | 8.506 0 |
Table 6 Performance analysis of different fusion modes on VOT2016

| Fusion mode | Accuracy | Robustness | EAO | FPS |
| --- | --- | --- | --- | --- |
| No fusion | 0.519 5 | 34.097 2 | 0.203 0 | 11.348 8 |
| Range | 0.518 1 | 30.468 1 | 0.213 0 | 10.437 8 |
| Mean | 0.526 4 | 31.591 5 | 0.207 5 | 9.479 8 |
| Standard deviation | 0.526 8 | 30.788 5 | 0.208 6 | 9.742 8 |
Table 7 Distance precision comparison of algorithms under different attributes on OTB100

| Algorithm | IV | SV | OCC | DEF | OPR | FM | MB | IPR | OV | BC | LR |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| KCF | 0.655 | 0.566 | 0.664 | 0.726 | 0.673 | 0.464 | 0.527 | 0.681 | 0.341 | 0.662 | 0.370 |
| DSST | 0.730 | 0.738 | 0.743 | 0.711 | 0.764 | 0.513 | 0.544 | 0.768 | 0.511 | 0.694 | 0.497 |
| SAMF | 0.682 | 0.723 | 0.828 | 0.790 | 0.755 | 0.608 | 0.564 | 0.714 | 0.636 | 0.676 | 0.525 |
| SRDCF | 0.761 | 0.778 | 0.833 | 0.840 | 0.809 | 0.741 | 0.789 | 0.766 | 0.680 | 0.803 | 0.518 |
| Staple | 0.741 | 0.733 | 0.829 | 0.883 | 0.804 | 0.643 | 0.688 | 0.773 | 0.679 | 0.753 | 0.550 |
| ECOHC | 0.722 | 0.811 | 0.829 | 0.732 | 0.804 | 0.771 | 0.689 | 0.775 | 0.864 | 0.754 | 0.663 |
| ECO | 0.864 | 0.907 | 0.876 | 0.813 | 0.875 | 0.821 | 0.814 | 0.828 | 0.859 | 0.880 | 0.755 |
| LMCF | 0.783 | 0.775 | 0.833 | 0.855 | 0.817 | 0.730 | 0.714 | 0.779 | 0.695 | 0.848 | 0.555 |
| UDT | 0.701 | 0.748 | 0.770 | 0.699 | 0.745 | 0.632 | 0.612 | 0.725 | 0.652 | 0.724 | 0.576 |
| Ours | 0.739 | 0.776 | 0.852 | 0.823 | 0.828 | 0.728 | 0.725 | 0.788 | 0.766 | 0.789 | 0.655 |
Table 8 Success rate comparison of algorithms under different attributes on OTB100

| Algorithm | IV | SV | OCC | DEF | OPR | FM | MB | IPR | OV | BC | LR |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| KCF | 0.528 | 0.373 | 0.530 | 0.631 | 0.552 | 0.399 | 0.452 | 0.559 | 0.324 | 0.576 | 0.338 |
| DSST | 0.681 | 0.640 | 0.679 | 0.681 | 0.666 | 0.503 | 0.528 | 0.679 | 0.512 | 0.627 | 0.497 |
| SAMF | 0.641 | 0.634 | 0.767 | 0.783 | 0.684 | 0.593 | 0.561 | 0.653 | 0.646 | 0.655 | 0.496 |
| SRDCF | 0.701 | 0.712 | 0.776 | 0.784 | 0.726 | 0.711 | 0.762 | 0.709 | 0.702 | 0.715 | 0.526 |
| Staple | 0.706 | 0.676 | 0.794 | 0.865 | 0.751 | 0.622 | 0.658 | 0.724 | 0.639 | 0.730 | 0.541 |
| ECOHC | 0.678 | 0.738 | 0.759 | 0.723 | 0.718 | 0.724 | 0.680 | 0.686 | 0.858 | 0.701 | 0.655 |
| ECO | 0.780 | 0.832 | 0.832 | 0.739 | 0.782 | 0.786 | 0.802 | 0.741 | 0.873 | 0.788 | 0.740 |
| LMCF | 0.737 | 0.713 | 0.798 | 0.833 | 0.760 | 0.691 | 0.660 | 0.720 | 0.702 | 0.806 | 0.545 |
| UDT | 0.675 | 0.752 | 0.748 | 0.679 | 0.726 | 0.646 | 0.628 | 0.713 | 0.653 | 0.697 | 0.561 |
| Ours | 0.695 | 0.751 | 0.821 | 0.804 | 0.779 | 0.712 | 0.723 | 0.750 | 0.747 | 0.719 | 0.645 |
Table 9 Accuracy under different attributes on VOT2015

| Algorithm | Camera motion | Empty | Illumination change | Motion change | Occlusion | Size change |
| --- | --- | --- | --- | --- | --- | --- |
| KCF | 0.483 7 | 0.541 4 | 0.475 9 | 0.453 1 | 0.476 7 | 0.369 0 |
| DSST | 0.549 3 | 0.604 9 | 0.655 8 | 0.485 5 | 0.376 9 | 0.512 5 |
| SAMF | 0.523 0 | 0.575 5 | 0.598 8 | 0.479 1 | 0.466 4 | 0.434 1 |
| SRDCF | 0.557 3 | 0.623 8 | 0.663 0 | 0.490 4 | 0.444 5 | 0.524 9 |
| ECO | 0.546 1 | 0.536 5 | 0.634 2 | 0.495 1 | 0.418 7 | 0.512 3 |
| UDT | 0.531 0 | 0.588 6 | 0.650 2 | 0.482 7 | 0.439 7 | 0.479 8 |
| Ours | 0.556 2 | 0.604 9 | 0.707 3 | 0.478 3 | 0.460 2 | 0.516 9 |
Table 10 Robustness under different attributes on VOT2015

| Algorithm | Camera motion | Empty | Illumination change | Motion change | Occlusion | Size change |
| --- | --- | --- | --- | --- | --- | --- |
| KCF | 70 | 40 | 9 | 59 | 25 | 31 |
| DSST | 69 | 37 | 7 | 61 | 28 | 33 |
| SAMF | 51 | 32 | 7 | 48 | 24 | 27 |
| SRDCF | 25 | 9 | 0 | 23 | 26 | 8 |
| ECO | 35 | 14 | 2 | 29 | 26 | 14 |
| UDT | 45 | 27 | 9 | 44 | 16 | 32 |
| Ours | 42 | 25 | 8 | 45 | 19 | 27 |
Table 11 Overall performance on VOT2015

| Algorithm | Accuracy | Robustness | EAO | FPS |
| --- | --- | --- | --- | --- |
| KCF | 0.472 1 | 48.040 1 | 0.170 7 | 25.451 8 |
| DSST | 0.535 1 | 47.876 2 | 0.170 0 | 4.468 4 |
| SAMF | 0.511 0 | 37.833 1 | 0.199 3 | 3.047 4 |
| SRDCF | 0.551 0 | 16.952 5 | 0.315 9 | 0.374 6 |
| ECO | 0.521 6 | 23.220 2 | 0.247 3 | 0.848 6 |
| UDT | 0.521 4 | 35.318 3 | 0.192 8 | 8.268 0 |
| Ours | 0.545 4 | 32.599 6 | 0.207 8 | 10.220 0 |
Table 12 Accuracy under different attributes on VOT2016

| Algorithm | Camera motion | Empty | Illumination change | Motion change | Occlusion | Size change |
| --- | --- | --- | --- | --- | --- | --- |
| KCF | 0.483 7 | 0.541 4 | 0.475 9 | 0.453 1 | 0.476 7 | 0.369 0 |
| DSST | 0.549 3 | 0.604 9 | 0.655 8 | 0.485 5 | 0.376 9 | 0.512 5 |
| SAMF | 0.523 0 | 0.575 5 | 0.598 8 | 0.479 1 | 0.466 4 | 0.434 1 |
| SRDCF | 0.532 8 | 0.604 0 | 0.646 9 | 0.472 5 | 0.425 1 | 0.505 1 |
| ECO | 0.519 5 | 0.560 2 | 0.623 9 | 0.502 1 | 0.389 4 | 0.501 8 |
| UDT | 0.533 0 | 0.528 9 | 0.677 3 | 0.482 9 | 0.406 2 | 0.488 6 |
| Ours | 0.540 7 | 0.557 4 | 0.689 0 | 0.485 1 | 0.448 9 | 0.505 8 |
Table 13 Robustness under different attributes on VOT2016

| Algorithm | Camera motion | Empty | Illumination change | Motion change | Occlusion | Size change |
| --- | --- | --- | --- | --- | --- | --- |
| KCF | 70 | 40 | 9 | 59 | 25 | 31 |
| DSST | 69 | 37 | 7 | 61 | 28 | 33 |
| SAMF | 51 | 32 | 7 | 48 | 24 | 27 |
| SRDCF | 27 | 11 | 2 | 23 | 26 | 14 |
| ECO | 28 | 12 | 1 | 25 | 28 | 13 |
| UDT | 48 | 27 | 12 | 45 | 20 | 32 |
| Ours | 38 | 22 | 8 | 46 | 18 | 27 |
Table 14 Overall performance on VOT2016

| Algorithm | Accuracy | Robustness | EAO | FPS |
| --- | --- | --- | --- | --- |
| KCF | 0.491 6 | 38.082 0 | 0.193 5 | 22.253 5 |
| DSST | 0.524 5 | 44.813 8 | 0.180 5 | 9.714 1 |
| SAMF | 0.511 0 | 37.833 1 | 0.199 3 | 3.649 2 |
| SRDCF | 0.529 6 | 18.926 6 | 0.295 5 | 0.405 4 |
| ECO | 0.516 4 | 19.974 1 | 0.260 4 | 0.871 8 |
| UDT | 0.502 5 | 36.218 2 | 0.192 6 | 7.268 0 |
| Ours | 0.526 8 | 30.788 5 | 0.207 5 | 9.742 8 |
[1] ZHANG T Z, XU C S, YANG M H. Multi-task correlation particle filter for robust object tracking[C]//Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, Jul 21-26, 2017. Washington: IEEE Computer Society, 2017: 4819-4827.
[2] ZHANG J, WANG X, FAN H B. TLD object tracking algorithm based on spatio-temporal context similarity[J]. Journal of Frontiers of Computer Science and Technology, 2018, 12(7): 1169-1181.
[3] CHEN C, DENG Z H, GAO Y L, et al. Single target tracking algorithm based on multi-fuzzy kernel fusion[J]. Journal of Frontiers of Computer Science and Technology, 2020, 14(5): 848-860.
[4] CHOI J W, CHANG H J, YUN S, et al. Attentional correlation filter network for adaptive visual tracking[C]//Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, Jul 21-26, 2017. Washington: IEEE Computer Society, 2017: 4828-4837.
[5] LIU F, HUANG G W, LU L X, et al. Robust target tracking algorithm for adaptive template updating[J]. Journal of Frontiers of Computer Science and Technology, 2019, 13(1): 83-96.
[6] JIA X, LU H C, YANG M H. Visual tracking via adaptive structural local sparse appearance model[C]//Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, Jun 16-21, 2012. Washington: IEEE Computer Society, 2012: 1822-1829.
[7] HARE S, GOLODETZ S, SAFFARI A, et al. Struck: structured output tracking with kernels[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016, 38(10): 2096-2109.
[8] DANELLJAN M, KHAN F S, FELSBERG M, et al. Adaptive color attributes for real-time visual tracking[C]//Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, Jun 23-28, 2014. Washington: IEEE Computer Society, 2014: 1090-1097.
[9] DALAL N, TRIGGS B. Histograms of oriented gradients for human detection[C]//Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, Jun 20-25, 2005. Washington: IEEE Computer Society, 2005: 886-893.
[10] MENG L, YANG X. A survey of object tracking algorithms[J]. Acta Automatica Sinica, 2019, 45(7): 1244-1260.
[11] LI F, TIAN C, ZUO W M, et al. Learning spatial-temporal regularized correlation filters for visual tracking[C]//Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, Jun 18-23, 2018. Piscataway: IEEE, 2018: 4904-4913.
[12] KRISTAN M, LEONARDIS A, MATAS J, et al. The sixth visual object tracking VOT2018 challenge results[C]//LNCS 11129: Proceedings of the 15th European Conference on Computer Vision, Munich, Sep 8-14, 2018. Cham: Springer, 2018: 3-53.
[13] KRISTAN M, BERG A, ZHENG L, et al. The seventh visual object tracking VOT2019 challenge results[C]//Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision Workshop, Seoul, Oct 27-28, 2019. Piscataway: IEEE, 2019: 2206-2241.
[14] LI B, YAN J J, WU W, et al. High performance visual tracking with Siamese region proposal network[C]//Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, Jun 18-23, 2018. Piscataway: IEEE, 2018: 8971-8980.
[15] CHOI J, CHANG H J, FISCHER T, et al. Context-aware deep feature compression for high-speed visual tracking[C]//Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, Jun 18-23, 2018. Piscataway: IEEE, 2018: 479-488.
[16] VALMADRE J, BERTINETTO L, HENRIQUES J F, et al. End-to-end representation learning for correlation filter based tracking[C]//Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, Jul 21-26, 2017. Washington: IEEE Computer Society, 2017: 5000-5008.
[17] SUN C, WANG D, LU H C, et al. Correlation tracking via joint discrimination and reliability learning[C]//Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, Jun 18-23, 2018. Piscataway: IEEE, 2018: 489-497.
[18] WANG N, ZHOU W G, TIAN Q, et al. Multi-cue correlation filters for robust visual tracking[C]//Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, Jun 18-23, 2018. Piscataway: IEEE, 2018: 4844-4853.
[19] LUKEZIC A, VOJIR T, ZAJC L C, et al. Discriminative correlation filter with channel and spatial reliability[C]//Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, Jul 21-26, 2017. Washington: IEEE Computer Society, 2017: 4847-4856.
[20] LI X, MA C, WU B Y, et al. Target-aware deep tracking[C]//Proceedings of the 2019 IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, Jun 16-20, 2019. Piscataway: IEEE, 2019: 1369-1378.
[21] BOLME D S, BEVERIDGE J R, DRAPER B A, et al. Visual object tracking using adaptive correlation filters[C]//Proceedings of the 23rd IEEE Conference on Computer Vision and Pattern Recognition, San Francisco, Jun 13-18, 2010. Washington: IEEE Computer Society, 2010: 2544-2550.
[22] HENRIQUES J F, CASEIRO R, MARTINS P, et al. High-speed tracking with kernelized correlation filters[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37(3): 583-596.
[23] DANELLJAN M, HÄGER G, KHAN F S, et al. Discriminative scale space tracking[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(8): 1561-1575.
[24] WU Y, LIM J, YANG M H. Object tracking benchmark[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37(9): 1834-1848.
[25] KRISTAN M, MATAS J, LEONARDIS A, et al. The visual object tracking VOT2015 challenge results[C]//Proceedings of the 2015 IEEE International Conference on Computer Vision Workshop, Santiago, Dec 7-13, 2015. Washington: IEEE Computer Society, 2015: 564-586.
[26] KRISTAN M, LEONARDIS A, MATAS J, et al. The visual object tracking VOT2016 challenge results[C]//LNCS 9914: Proceedings of the 14th European Conference on Computer Vision, Amsterdam, Oct 8-10, 15-16, 2016. Cham: Springer, 2016: 777-823.
[27] WANG N, SONG Y B, MA C, et al. Unsupervised deep tracking[C]//Proceedings of the 2019 IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, Jun 15-20, 2019. Piscataway: IEEE, 2019: 1308-1317.
[28] RUSSAKOVSKY O, DENG J, SU H, et al. ImageNet large scale visual recognition challenge[J]. International Journal of Computer Vision, 2015, 115(3): 211-252.
[29] SIMONYAN K, ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[J]. arXiv:1409.1556, 2014.
[30] VEDALDI A, LENC K. MatConvNet: convolutional neural networks for MATLAB[C]//Proceedings of the 23rd Annual ACM Conference on Multimedia, Brisbane, Oct 26-30, 2015. New York: ACM, 2015: 689-692.
[31] WU Y, LIM J, YANG M H. Online object tracking: a benchmark[C]//Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition, Portland, Jun 23-28, 2013. Washington: IEEE Computer Society, 2013: 2411-2418.
[32] DANELLJAN M, BHAT G, KHAN F S, et al. ECO: efficient convolution operators for tracking[C]//Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, Jul 21-26, 2017. Washington: IEEE Computer Society, 2017: 6931-6939.
[33] WANG M M, LIU Y, HUANG Z Y. Large margin object tracking with circulant feature maps[C]//Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, Jul 21-26, 2017. Washington: IEEE Computer Society, 2017: 4800-4808.
[34] DANELLJAN M, HÄGER G, KHAN F S, et al. Learning spatially regularized correlation filters for visual tracking[C]//Proceedings of the 2015 IEEE International Conference on Computer Vision, Dec 7-13, 2015. Washington: IEEE Computer Society, 2015: 4310-4318.
[35] BERTINETTO L, VALMADRE J, GOLODETZ S, et al. Staple: complementary learners for real-time tracking[C]//Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, Jun 27-30, 2016. Washington: IEEE Computer Society, 2016: 1401-1409.
[36] LI Y, ZHU J K. A scale adaptive kernel correlation filter tracker with feature integration[C]//LNCS 8926: Proceedings of the 13th European Conference on Computer Vision, Zurich, Sep 6-7, 12, 2014. Cham: Springer, 2014: 254-265.