Journal of Frontiers of Computer Science and Technology ›› 2022, Vol. 16 ›› Issue (6): 1417-1428. DOI: 10.3778/j.issn.1673-9418.2011057
• Graphics and Image •
Object Tracking Algorithm with Fusion of Multi-feature and Channel Awareness

ZHAO Yunji, FAN Cunliang, ZHANG Xinliang
Received: 2020-11-20
Revised: 2021-02-05
Online: 2022-06-01
Published: 2021-03-08
About author: ZHAO Yunji, born in 1980 in Nanyang, Henan, China, Ph.D., lecturer. His research interests include pattern recognition and intelligent control.
Corresponding author: E-mail: 532338283@qq.com
ZHAO Yunji, FAN Cunliang, ZHANG Xinliang. Object Tracking Algorithm with Fusion of Multi-feature and Channel Awareness[J]. Journal of Frontiers of Computer Science and Technology, 2022, 16(6): 1417-1428.
URL: http://fcst.ceaj.org/EN/10.3778/j.issn.1673-9418.2011057
Table 1 Tracking performance of different update intervals on VOT2015

| Update interval | Accuracy | Robustness | EAO | FPS |
|---|---|---|---|---|
| 1 | 0.5214 | 35.3183 | 0.1928 | 8.2680 |
| 3 | 0.5245 | 34.2165 | 0.2036 | 9.1456 |
| 5 | 0.5334 | 34.0370 | 0.2026 | 9.6918 |
| 7 | 0.5310 | 34.6119 | 0.2015 | 11.4104 |
| 9 | 0.5261 | 34.7241 | 0.1997 | 13.7885 |
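Table 1 illustrates the trade-off behind interval-based model updating: a larger update interval raises FPS, while accuracy and EAO peak around an interval of 5. The sketch below shows this mechanism only in outline; the callables and the learning rate `lr` are illustrative placeholders, not the paper's actual implementation.

```python
import numpy as np

def track_sequence(frames, init_filter, extract_features, correlate,
                   update_filter, update_interval=5, lr=0.02):
    """Sketch of interval-based model updating. All callables and `lr`
    are illustrative placeholders, not the paper's actual interface."""
    filt = init_filter
    positions = []
    for t, frame in enumerate(frames):
        feat = extract_features(frame)            # per-frame feature map
        response = correlate(filt, feat)          # correlation response map
        pos = np.unravel_index(np.argmax(response), response.shape)
        positions.append(pos)
        if t % update_interval == 0:              # refresh the model only every k frames
            new_filt = update_filter(feat, pos)
            filt = (1 - lr) * filt + lr * new_filt    # linear interpolation update
    return positions
```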
Table 2 Performance analysis of different numbers of reserved channels on VOT2015

| Reserved channels | Accuracy | Robustness | EAO | FPS |
|---|---|---|---|---|
| 8 | 0.5225 | 39.2170 | 0.1813 | 12.7190 |
| 16 | 0.5395 | 33.1376 | 0.2064 | 11.4810 |
| 20 | 0.5284 | 31.4526 | 0.2074 | 10.9016 |
| 24 | 0.5313 | 35.1778 | 0.2003 | 10.5320 |
| 28 | 0.5363 | 33.6650 | 0.2092 | 9.8300 |
| 32 | 0.5334 | 34.0370 | 0.2026 | 9.6918 |
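Table 2 varies how many feature channels are reserved; 16 to 28 channels give the best balance of EAO and speed on VOT2015. As a rough illustration of this kind of channel-aware pruning, the sketch below keeps the top-k channels ranked by a per-channel importance score; the energy-based score used here is an assumed example, not necessarily the criterion of the proposed method.

```python
import numpy as np

def reserve_top_k_channels(feature_map, k=16, scores=None):
    """Keep the k channels with the largest importance score.
    `feature_map` has shape (C, H, W); the default energy-based score is an
    assumed example, not necessarily the paper's channel-awareness criterion."""
    if scores is None:
        scores = (feature_map ** 2).sum(axis=(1, 2))   # per-channel energy
    keep = np.argsort(scores)[::-1][:k]                # indices of the top-k channels
    return feature_map[keep], keep

# Example: prune a 32-channel map to the 16 channels Table 2 suggests suffice.
fmap = np.random.rand(32, 25, 25).astype(np.float32)
pruned, kept_idx = reserve_top_k_channels(fmap, k=16)
print(pruned.shape)   # (16, 25, 25)
```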
Table 3 Performance analysis of different fusion modes on VOT2015

| Fusion mode | Accuracy | Robustness | EAO | FPS |
|---|---|---|---|---|
| No fusion | 0.5395 | 33.1376 | 0.2064 | 11.4810 |
| Range | 0.5207 | 30.3586 | 0.2071 | 11.3396 |
| Mean | 0.5417 | 32.8075 | 0.2018 | 10.4230 |
| Standard deviation | 0.5454 | 32.5996 | 0.2078 | 10.2200 |
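Table 3 compares collapsing the reserved channels with different statistics (range, mean, standard deviation), with standard-deviation fusion giving the highest accuracy and EAO on VOT2015. The sketch below shows one plausible per-pixel implementation of these fusion modes; how the fused map is combined with the original features downstream is not specified here and may differ in the paper.

```python
import numpy as np

def fuse_channels(feature_map, mode="std"):
    """Collapse reserved channels into one map with a per-pixel statistic.
    The modes mirror Table 3; downstream use of the fused map is assumed.
    `feature_map` has shape (C, H, W)."""
    if mode == "range":
        return feature_map.max(axis=0) - feature_map.min(axis=0)
    if mode == "mean":
        return feature_map.mean(axis=0)
    if mode == "std":
        return feature_map.std(axis=0)
    raise ValueError(f"unknown fusion mode: {mode}")

fmap = np.random.rand(16, 25, 25).astype(np.float32)
print(fuse_channels(fmap, "std").shape)   # (25, 25)
```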
Table 4 Tracking performance of different update intervals on VOT2016

| Update interval | Accuracy | Robustness | EAO | FPS |
|---|---|---|---|---|
| 1 | 0.5025 | 36.2182 | 0.1926 | 7.2680 |
| 3 | 0.5213 | 35.6812 | 0.1936 | 9.1456 |
| 5 | 0.5180 | 34.9578 | 0.1947 | 8.5060 |
| 7 | 0.5159 | 35.2359 | 0.1882 | 10.4104 |
| 9 | 0.5161 | 35.8968 | 0.1853 | 11.3785 |
Table 5 Performance analysis of different numbers of reserved channels on VOT2016

| Reserved channels | Accuracy | Robustness | EAO | FPS |
|---|---|---|---|---|
| 8 | 0.4844 | 39.5229 | 0.1755 | 12.8278 |
| 16 | 0.5195 | 34.0972 | 0.2030 | 11.3488 |
| 20 | 0.5064 | 35.9139 | 0.1983 | 11.2788 |
| 24 | 0.5071 | 31.5950 | 0.2059 | 10.8680 |
| 28 | 0.5191 | 34.3703 | 0.2009 | 10.6052 |
| 32 | 0.5180 | 34.9578 | 0.1947 | 8.5060 |
Table 6 Performance analysis of different fusion modes on VOT2016

| Fusion mode | Accuracy | Robustness | EAO | FPS |
|---|---|---|---|---|
| No fusion | 0.5195 | 34.0972 | 0.2030 | 11.3488 |
| Range | 0.5181 | 30.4681 | 0.2130 | 10.4378 |
| Mean | 0.5264 | 31.5915 | 0.2075 | 9.4798 |
| Standard deviation | 0.5268 | 30.7885 | 0.2086 | 9.7428 |
Table 7 Precision comparison of algorithms under different attributes

| Algorithm | IV | SV | OCC | DEF | OPR | FM | MB | IPR | OV | BC | LR |
|---|---|---|---|---|---|---|---|---|---|---|---|
| KCF | 0.655 | 0.566 | 0.664 | 0.726 | 0.673 | 0.464 | 0.527 | 0.681 | 0.341 | 0.662 | 0.370 |
| DSST | 0.730 | 0.738 | 0.743 | 0.711 | 0.764 | 0.513 | 0.544 | 0.768 | 0.511 | 0.694 | 0.497 |
| SAMF | 0.682 | 0.723 | 0.828 | 0.790 | 0.755 | 0.608 | 0.564 | 0.714 | 0.636 | 0.676 | 0.525 |
| SRDCF | 0.761 | 0.778 | 0.833 | 0.840 | 0.809 | 0.741 | 0.789 | 0.766 | 0.680 | 0.803 | 0.518 |
| Staple | 0.741 | 0.733 | 0.829 | 0.883 | 0.804 | 0.643 | 0.688 | 0.773 | 0.679 | 0.753 | 0.550 |
| ECOHC | 0.722 | 0.811 | 0.829 | 0.732 | 0.804 | 0.771 | 0.689 | 0.775 | 0.864 | 0.754 | 0.663 |
| ECO | 0.864 | 0.907 | 0.876 | 0.813 | 0.875 | 0.821 | 0.814 | 0.828 | 0.859 | 0.880 | 0.755 |
| LMCF | 0.783 | 0.775 | 0.833 | 0.855 | 0.817 | 0.730 | 0.714 | 0.779 | 0.695 | 0.848 | 0.555 |
| UDT | 0.701 | 0.748 | 0.770 | 0.699 | 0.745 | 0.632 | 0.612 | 0.725 | 0.652 | 0.724 | 0.576 |
| Ours | 0.739 | 0.776 | 0.852 | 0.823 | 0.828 | 0.728 | 0.725 | 0.788 | 0.766 | 0.789 | 0.655 |
Table 8 Success rate comparison of algorithms under different attributes

| Algorithm | IV | SV | OCC | DEF | OPR | FM | MB | IPR | OV | BC | LR |
|---|---|---|---|---|---|---|---|---|---|---|---|
| KCF | 0.528 | 0.373 | 0.530 | 0.631 | 0.552 | 0.399 | 0.452 | 0.559 | 0.324 | 0.576 | 0.338 |
| DSST | 0.681 | 0.640 | 0.679 | 0.681 | 0.666 | 0.503 | 0.528 | 0.679 | 0.512 | 0.627 | 0.497 |
| SAMF | 0.641 | 0.634 | 0.767 | 0.783 | 0.684 | 0.593 | 0.561 | 0.653 | 0.646 | 0.655 | 0.496 |
| SRDCF | 0.701 | 0.712 | 0.776 | 0.784 | 0.726 | 0.711 | 0.762 | 0.709 | 0.702 | 0.715 | 0.526 |
| Staple | 0.706 | 0.676 | 0.794 | 0.865 | 0.751 | 0.622 | 0.658 | 0.724 | 0.639 | 0.730 | 0.541 |
| ECOHC | 0.678 | 0.738 | 0.759 | 0.723 | 0.718 | 0.724 | 0.680 | 0.686 | 0.858 | 0.701 | 0.655 |
| ECO | 0.780 | 0.832 | 0.832 | 0.739 | 0.782 | 0.786 | 0.802 | 0.741 | 0.873 | 0.788 | 0.740 |
| LMCF | 0.737 | 0.713 | 0.798 | 0.833 | 0.760 | 0.691 | 0.660 | 0.720 | 0.702 | 0.806 | 0.545 |
| UDT | 0.675 | 0.752 | 0.748 | 0.679 | 0.726 | 0.646 | 0.628 | 0.713 | 0.653 | 0.697 | 0.561 |
| Ours | 0.695 | 0.751 | 0.821 | 0.804 | 0.779 | 0.712 | 0.723 | 0.750 | 0.747 | 0.719 | 0.645 |
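Tables 7 and 8 follow the OTB-style attribute evaluation: precision counts frames whose predicted center stays within a pixel threshold (commonly 20 px) of the ground truth, and success counts frames whose bounding-box overlap exceeds an IoU threshold. The sketch below computes these two single-threshold scores from per-frame boxes; the benchmark itself reports full precision/success curves (and AUC), which this simplified version omits.

```python
import numpy as np

def center_error(pred, gt):
    """Distance between centers of two (x, y, w, h) boxes."""
    cp = np.array([pred[0] + pred[2] / 2, pred[1] + pred[3] / 2])
    cg = np.array([gt[0] + gt[2] / 2, gt[1] + gt[3] / 2])
    return float(np.linalg.norm(cp - cg))

def iou(pred, gt):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    x1, y1 = max(pred[0], gt[0]), max(pred[1], gt[1])
    x2 = min(pred[0] + pred[2], gt[0] + gt[2])
    y2 = min(pred[1] + pred[3], gt[1] + gt[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = pred[2] * pred[3] + gt[2] * gt[3] - inter
    return inter / union if union > 0 else 0.0

def precision(preds, gts, threshold=20.0):
    """Fraction of frames whose center location error is within `threshold` pixels."""
    errors = np.array([center_error(p, g) for p, g in zip(preds, gts)])
    return float(np.mean(errors <= threshold))

def success(preds, gts, threshold=0.5):
    """Fraction of frames whose overlap with the ground truth exceeds `threshold`."""
    overlaps = np.array([iou(p, g) for p, g in zip(preds, gts)])
    return float(np.mean(overlaps > threshold))
```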
Table 9 Accuracy comparison under different attributes on VOT2015

| Algorithm | Camera motion | Empty | Illumination change | Motion change | Occlusion | Size change |
|---|---|---|---|---|---|---|
| KCF | 0.4837 | 0.5414 | 0.4759 | 0.4531 | 0.4767 | 0.3690 |
| DSST | 0.5493 | 0.6049 | 0.6558 | 0.4855 | 0.3769 | 0.5125 |
| SAMF | 0.5230 | 0.5755 | 0.5988 | 0.4791 | 0.4664 | 0.4341 |
| SRDCF | 0.5573 | 0.6238 | 0.6630 | 0.4904 | 0.4445 | 0.5249 |
| ECO | 0.5461 | 0.5365 | 0.6342 | 0.4951 | 0.4187 | 0.5123 |
| UDT | 0.5310 | 0.5886 | 0.6502 | 0.4827 | 0.4397 | 0.4798 |
| Ours | 0.5562 | 0.6049 | 0.7073 | 0.4783 | 0.4602 | 0.5169 |
Table 10 Robustness comparison under different attributes on VOT2015

| Algorithm | Camera motion | Empty | Illumination change | Motion change | Occlusion | Size change |
|---|---|---|---|---|---|---|
| KCF | 70.0000 | 40.0000 | 9.0000 | 59.0000 | 25.0000 | 31.0000 |
| DSST | 69.0000 | 37.0000 | 7.0000 | 61.0000 | 28.0000 | 33.0000 |
| SAMF | 51.0000 | 32.0000 | 7.0000 | 48.0000 | 24.0000 | 27.0000 |
| SRDCF | 25.0000 | 9.0000 | 0.0000 | 23.0000 | 26.0000 | 8.0000 |
| ECO | 35.0000 | 14.0000 | 2.0000 | 29.0000 | 26.0000 | 14.0000 |
| UDT | 45.0000 | 27.0000 | 9.0000 | 44.0000 | 16.0000 | 32.0000 |
| Ours | 42.0000 | 25.0000 | 8.0000 | 45.0000 | 19.0000 | 27.0000 |
Table 11 Overall performance comparison on VOT2015

| Algorithm | Accuracy | Robustness | EAO | FPS |
|---|---|---|---|---|
| KCF | 0.4721 | 48.0401 | 0.1707 | 25.4518 |
| DSST | 0.5351 | 47.8762 | 0.1700 | 4.4684 |
| SAMF | 0.5110 | 37.8331 | 0.1993 | 3.0474 |
| SRDCF | 0.5510 | 16.9525 | 0.3159 | 0.3746 |
| ECO | 0.5216 | 23.2202 | 0.2473 | 0.8486 |
| UDT | 0.5214 | 35.3183 | 0.1928 | 8.2680 |
| Ours | 0.5454 | 32.5996 | 0.2078 | 10.2200 |
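Tables 9 through 14 use the VOT protocol, where accuracy is the average overlap during successful tracking and robustness reflects how often the tracker fails and must be re-initialized. The sketch below is a deliberately simplified version of these two measures, assuming a zero overlap marks a failure; the official VOT toolkit additionally handles re-initialization offsets, burn-in frames, and EAO, which are not reproduced here.

```python
import numpy as np

def vot_accuracy_robustness(overlaps):
    """Simplified VOT-style accuracy and robustness from per-frame overlaps,
    assuming an overlap of 0 marks a tracking failure. The official toolkit
    also handles re-initialization, burn-in frames, and EAO, omitted here."""
    overlaps = np.asarray(overlaps, dtype=float)
    failures = int(np.sum(overlaps == 0.0))          # robustness: failure count
    tracked = overlaps[overlaps > 0.0]
    accuracy = float(tracked.mean()) if tracked.size else 0.0
    return accuracy, failures
```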
Table 12 Accuracy comparison under different attributes on VOT2016

| Algorithm | Camera motion | Empty | Illumination change | Motion change | Occlusion | Size change |
|---|---|---|---|---|---|---|
| KCF | 0.4837 | 0.5414 | 0.4759 | 0.4531 | 0.4767 | 0.3690 |
| DSST | 0.5493 | 0.6049 | 0.6558 | 0.4855 | 0.3769 | 0.5125 |
| SAMF | 0.5230 | 0.5755 | 0.5988 | 0.4791 | 0.4664 | 0.4341 |
| SRDCF | 0.5328 | 0.6040 | 0.6469 | 0.4725 | 0.4251 | 0.5051 |
| ECO | 0.5195 | 0.5602 | 0.6239 | 0.5021 | 0.3894 | 0.5018 |
| UDT | 0.5330 | 0.5289 | 0.6773 | 0.4829 | 0.4062 | 0.4886 |
| Ours | 0.5407 | 0.5574 | 0.6890 | 0.4851 | 0.4489 | 0.5058 |
Table 13 Robustness comparison under different attributes on VOT2016

| Algorithm | Camera motion | Empty | Illumination change | Motion change | Occlusion | Size change |
|---|---|---|---|---|---|---|
| KCF | 70.0000 | 40.0000 | 9.0000 | 59.0000 | 25.0000 | 31.0000 |
| DSST | 69.0000 | 37.0000 | 7.0000 | 61.0000 | 28.0000 | 33.0000 |
| SAMF | 51.0000 | 32.0000 | 7.0000 | 48.0000 | 24.0000 | 27.0000 |
| SRDCF | 27.0000 | 11.0000 | 2.0000 | 23.0000 | 26.0000 | 14.0000 |
| ECO | 28.0000 | 12.0000 | 1.0000 | 25.0000 | 28.0000 | 13.0000 |
| UDT | 48.0000 | 27.0000 | 12.0000 | 45.0000 | 20.0000 | 32.0000 |
| Ours | 38.0000 | 22.0000 | 8.0000 | 46.0000 | 18.0000 | 27.0000 |
Table 14 Overall performance comparison on VOT2016

| Algorithm | Accuracy | Robustness | EAO | FPS |
|---|---|---|---|---|
| KCF | 0.4916 | 38.0820 | 0.1935 | 22.2535 |
| DSST | 0.5245 | 44.8138 | 0.1805 | 9.7141 |
| SAMF | 0.5110 | 37.8331 | 0.1993 | 3.6492 |
| SRDCF | 0.5296 | 18.9266 | 0.2955 | 0.4054 |
| ECO | 0.5164 | 19.9741 | 0.2604 | 0.8718 |
| UDT | 0.5025 | 36.2182 | 0.1926 | 7.2680 |
| Ours | 0.5268 | 30.7885 | 0.2075 | 9.7428 |
[1] ZHANG T Z, XU C S, YANG M H. Multi-task correlation particle filter for robust object tracking[C]//Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, Jul 21-26, 2017. Washington: IEEE Computer Society, 2017: 4819-4827.
[2] ZHANG J, WANG X, FAN H B. TLD object tracking algorithm based on spatio-temporal context similarity[J]. Journal of Frontiers of Computer Science and Technology, 2018, 12(7): 1169-1181.
[3] CHEN C, DENG Z H, GAO Y L, et al. Single target tracking algorithm based on multi-fuzzy kernel fusion[J]. Journal of Frontiers of Computer Science and Technology, 2020, 14(5): 848-860.
[4] CHOI J W, CHANG H J, YUN S, et al. Attentional correlation filter network for adaptive visual tracking[C]//Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, Jul 21-26, 2017. Washington: IEEE Computer Society, 2017: 4828-4837.
[5] LIU F, HUANG G W, LU L X, et al. Robust target tracking algorithm for adaptive template updating[J]. Journal of Frontiers of Computer Science and Technology, 2019, 13(1): 83-96.
[6] JIA X, LU H C, YANG M H. Visual tracking via adaptive structural local sparse appearance model[C]//Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, Jun 16-21, 2012. Washington: IEEE Computer Society, 2012: 1822-1829.
[7] HARE S, GOLODETZ S, SAFFARI A, et al. Struck: structured output tracking with kernels[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016, 38(10): 2096-2109.
[8] DANELLJAN M, KHAN F S, FELSBERG M, et al. Adaptive color attributes for real-time visual tracking[C]//Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, Jun 23-28, 2014. Washington: IEEE Computer Society, 2014: 1090-1097.
[9] DALAL N, TRIGGS B. Histograms of oriented gradients for human detection[C]//Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, Jun 20-25, 2005. Washington: IEEE Computer Society, 2005: 886-893.
[10] MENG L, YANG X. A survey of object tracking algorithms[J]. Acta Automatica Sinica, 2019, 45(7): 1244-1260.
[11] LI F, TIAN C, ZUO W M, et al. Learning spatial-temporal regularized correlation filters for visual tracking[C]//Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, Jun 18-23, 2018. Piscataway: IEEE, 2018: 4904-4913.
[12] KRISTAN M, LEONARDIS A, MATAS J, et al. The sixth visual object tracking VOT2018 challenge results[C]//LNCS 11129: Proceedings of the 15th European Conference on Computer Vision, Munich, Sep 8-14, 2018. Cham: Springer, 2018: 3-53.
[13] KRISTAN M, BERG A, ZHENG L, et al. The seventh visual object tracking VOT2019 challenge results[C]//Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision Workshop, Seoul, Oct 27-28, 2019. Piscataway: IEEE, 2019: 2206-2241.
[14] LI B, YAN J J, WU W, et al. High performance visual tracking with Siamese region proposal network[C]//Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, Jun 18-23, 2018. Piscataway: IEEE, 2018: 8971-8980.
[15] CHOI J, CHANG H J, FISCHER T, et al. Context-aware deep feature compression for high-speed visual tracking[C]//Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, Jun 18-23, 2018. Piscataway: IEEE, 2018: 479-488.
[16] VALMADRE J, BERTINETTO L, HENRIQUES J F, et al. End-to-end representation learning for correlation filter based tracking[C]//Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, Jul 21-26, 2017. Washington: IEEE Computer Society, 2017: 5000-5008.
[17] SUN C, WANG D, LU H C, et al. Correlation tracking via joint discrimination and reliability learning[C]//Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, Jun 18-23, 2018. Piscataway: IEEE, 2018: 489-497.
[18] WANG N, ZHOU W G, TIAN Q, et al. Multi-cue correlation filters for robust visual tracking[C]//Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, Jun 18-23, 2018. Piscataway: IEEE, 2018: 4844-4853.
[19] LUKEZIC A, VOJIR T, ZAJC L C, et al. Discriminative correlation filter with channel and spatial reliability[C]//Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, Jul 21-26, 2017. Washington: IEEE Computer Society, 2017: 4847-4856.
[20] LI X, MA C, WU B Y, et al. Target-aware deep tracking[C]//Proceedings of the 2019 IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, Jun 16-20, 2019. Piscataway: IEEE, 2019: 1369-1378.
[21] BOLME D S, BEVERIDGE J R, DRAPER B A, et al. Visual object tracking using adaptive correlation filters[C]//Proceedings of the 23rd IEEE Conference on Computer Vision and Pattern Recognition, San Francisco, Jun 13-18, 2010. Washington: IEEE Computer Society, 2010: 2544-2550.
[22] HENRIQUES J F, CASEIRO R, MARTINS P, et al. High-speed tracking with kernelized correlation filters[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37(3): 583-596.
[23] DANELLJAN M, HÄGER G, KHAN F S, et al. Discriminative scale space tracking[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(8): 1561-1575.
[24] WU Y, LIM J, YANG M H. Object tracking benchmark[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37(9): 1834-1848.
[25] KRISTAN M, MATAS J, LEONARDIS A, et al. The visual object tracking VOT2015 challenge results[C]//Proceedings of the 2015 IEEE International Conference on Computer Vision Workshop, Santiago, Dec 7-13, 2015. Washington: IEEE Computer Society, 2015: 564-586.
[26] KRISTAN M, LEONARDIS A, MATAS J, et al. The visual object tracking VOT2016 challenge results[C]//LNCS 9914: Proceedings of the 14th European Conference on Computer Vision, Amsterdam, Oct 8-10, 15-16, 2016. Cham: Springer, 2016: 777-823.
[27] WANG N, SONG Y B, MA C, et al. Unsupervised deep tracking[C]//Proceedings of the 2019 IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, Jun 15-20, 2019. Piscataway: IEEE, 2019: 1308-1317.
[28] RUSSAKOVSKY O, DENG J, SU H, et al. ImageNet large scale visual recognition challenge[J]. International Journal of Computer Vision, 2015, 115(3): 211-252.
[29] SIMONYAN K, ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[J]. arXiv:1409.1556, 2014.
[30] VEDALDI A, LENC K. MatConvNet: convolutional neural networks for MATLAB[C]//Proceedings of the 23rd Annual ACM Conference on Multimedia Conference, Brisbane, Oct 26-30, 2015. New York: ACM, 2015: 689-692.
[31] WU Y, LIM J, YANG M H. Online object tracking: a benchmark[C]//Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition, Portland, Jun 23-28, 2013. Washington: IEEE Computer Society, 2013: 2411-2418.
[32] DANELLJAN M, BHAT G, KHAN F S, et al. ECO: efficient convolution operators for tracking[C]//Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, Jul 21-26, 2017. Washington: IEEE Computer Society, 2017: 6931-6939.
[33] WANG M M, LIU Y, HUANG Z Y. Large margin object tracking with circulant feature maps[C]//Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, Jul 21-26, 2017. Washington: IEEE Computer Society, 2017: 4800-4808.
[34] DANELLJAN M, HÄGER G, KHAN F S, et al. Learning spatially regularized correlation filters for visual tracking[C]//Proceedings of the 2015 IEEE International Conference on Computer Vision, Dec 7-13, 2015. Washington: IEEE Computer Society, 2015: 4310-4318.
[35] BERTINETTO L, VALMADRE J, GOLODETZ S, et al. Staple: complementary learners for real-time tracking[C]//Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, Jun 27-30, 2016. Washington: IEEE Computer Society, 2016: 1401-1409.
[36] LI Y, ZHU J K. A scale adaptive kernel correlation filter tracker with feature integration[C]//LNCS 8926: Proceedings of the 13th European Conference on Computer Vision, Zurich, Sep 6-7, 12, 2014. Cham: Springer, 2014: 254-265.