[1] FAN D P, JI G P, SUN G L, et al. Camouflaged object detection[C]//Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2020: 2774-2784.
[2] NAFUS M G, GERMANO J M, PERRY J A, et al. Hiding in plain sight: a study on camouflage and habitat selection in a slow-moving desert herbivore[J]. Behavioral Ecology, 2015, 26(5): 1389-1394.
[3] CHEN G, LIU S J, SUN Y J, et al. Camouflaged object detection via context-aware cross-level fusion[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2022, 32(10): 6981-6993.
[4] CARION N, MASSA F, SYNNAEVE G, et al. End-to-end object detection with transformers[EB/OL]. [2023-09-16]. https://arxiv.org/abs/2005.12872.
[5] TANKUS A, YESHURUN Y. Convexity-based visual camouflage breaking[J]. Computer Vision and Image Understanding, 2001, 82(3): 208-237.
[6] BHAJANTRI N U, NAGABHUSHAN P. Camouflage defect identification: a novel approach[C]//Proceedings of the 9th International Conference on Information Technology. Piscataway: IEEE, 2006: 145-148.
[7] XUE F, YONG C X, XU S, et al. Camouflage performance analysis and evaluation framework based on features fusion[J]. Multimedia Tools and Applications, 2016, 75(7): 4065-4082.
[8] PIKE T W. Quantifying camouflage and conspicuousness using visual salience[J]. Methods in Ecology and Evolution, 2018, 9(8): 1883-1895.
[9] SUN Y J, CHEN G, ZHOU T, et al. Context-aware cross-level fusion network for camouflaged object detection[EB/OL]. [2023-09-16]. https://arxiv.org/abs/2105.12555.
[10] LE T N, NGUYEN T V, NIE Z L, et al. Anabranch network for camouflaged object segmentation[J]. Computer Vision and Image Understanding, 2019, 184: 45-56.
[11] YAN J N, LE T N, NGUYEN K D, et al. MirrorNet: bio-inspired camouflaged object segmentation[J]. IEEE Access, 2021, 9: 43290-43300.
[12] ZHAI Q, LI X, YANG F, et al. Mutual graph learning for camouflaged object detection[C]//Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2021: 12992-13002.
[13] REN J J, HU X W, ZHU L, et al. Deep texture-aware features for camouflaged object detection[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2023, 33(3): 1157-1167.
[14] LI A X, ZHANG J, LV Y Q, et al. Uncertainty-aware joint salient object and camouflaged object detection[C]//Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2021: 10066-10076.
[15] CHEN G, CHEN X R, DONG B, et al. Towards accurate camouflaged object detection with mixture convolution and interactive fusion[EB/OL]. [2023-09-16]. https://arxiv.org/abs/2101.05687.
[16] FAN D P, JI G P, SUN G L, et al. Camouflaged object detection[C]//Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2020: 2774-2784.
[17] MEI H Y, JI G P, WEI Z Q, et al. Camouflaged object segmentation with distraction mining[EB/OL]. [2023-09-16]. https://arxiv.org/abs/2104.10475.
[18] KIM J, PAVLOVIC V. A shape-based approach for salient object detection using deep learning[C]//Proceedings of the 14th European Conference on Computer Vision. Cham: Springer, 2016: 455-470.
[19] LONG J, SHELHAMER E, DARRELL T. Fully convolutional networks for semantic segmentation[C]//Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2015: 3431-3440.
[20] ZHUGE M C, FAN D P, LIU N, et al. Salient object detection via integrity learning[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(3): 3738-3752.
[21] LIN T Y, DOLLÁR P, GIRSHICK R, et al. Feature pyramid networks for object detection[C]//Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2017: 936-944.
[22] FAN Q, FAN D P, FU H Z, et al. Group collaborative learning for co-salient object detection[C]//Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2021: 12283-12293.
[23] CHEN S H, TAN X L, WANG B, et al. Reverse attention for salient object detection[C]//Proceedings of the 15th European Conference on Computer Vision. Cham: Springer, 2018: 236-252.
[24] CHANDRA S, USUNIER N, KOKKINOS I. Dense and low-rank Gaussian CRFs using deep embeddings[C]//Proceedings of the 2017 IEEE International Conference on Computer Vision. Piscataway: IEEE, 2017: 5113-5122.
[25] LI Y, GUPTA A. Beyond grids: learning graph representations for visual recognition[C]//Advances in Neural Information Processing Systems 31, Montréal, Dec 3-8, 2018: 9245-9255.
[26] LU Y, CHEN Y R, ZHAO D B, et al. Graph-FCN for image semantic segmentation[C]//Proceedings of the 2019 International Symposium on Neural Networks. Cham: Springer, 2019: 97-105.
[27] POURIAN N, KARTHIKEYAN S, MANJUNATH B S. Weakly supervised graph based semantic segmentation by learning communities of image-parts[C]//Proceedings of the 2015 IEEE International Conference on Computer Vision. Piscataway: IEEE, 2015: 1359-1367.
[28] ZHANG L, LI X T, ARNAB A, et al. Dual graph convolutional network for semantic segmentation[EB/OL]. [2023-09-16]. https://arxiv.org/abs/1909.06121.
[29] HOU R, CHANG H, MA B, et al. Cross attention network for few-shot classification[C]//Advances in Neural Information Processing Systems 32, Vancouver, Dec 8-14, 2019: 4003-4014.
[30] LIU C. Learning a few-shot embedding model with contrastive learning[C]//Proceedings of the 35th AAAI Conference on Artificial Intelligence, the 33rd Conference on Innovative Applications of Artificial Intelligence, the 11th Symposium on Educational Advances in Artificial Intelligence. Menlo Park: AAAI, 2021: 8635-8643.
[31] SIMON C, KONIUSZ P, NOCK R, et al. Adaptive subspaces for few-shot learning[C]//Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2020: 4135-4144.
[32] LIFCHITZ Y, AVRITHIS Y, PICARD S, et al. Dense classification and implanting for few-shot learning[C]//Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2019: 9250-9259.
[33] HARIHARAN B, GIRSHICK R. Low-shot visual recognition by shrinking and hallucinating features[C]//Proceedings of the 2017 IEEE International Conference on Computer Vision. Piscataway: IEEE, 2017: 3037-3046.
[34] ZHANG R, CHE T, GHAHRAMANI Z, et al. MetaGAN: an adversarial approach to few-shot learning[C]//Advances in Neural Information Processing Systems 31, Montréal, Dec 3-8, 2018: 2371-2380.
[35] LI K, ZHANG Y L, LI K P, et al. Adversarial feature hallucination networks for few-shot learning[C]//Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2020: 13467-13476.
[36] SUNG F, YANG Y X, ZHANG L, et al. Learning to compare: relation network for few-shot learning[C]//Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2018: 1199-1208.
[37] ZHANG X T, QIANG Y T, SUNG F, et al. RelationNet2: deep comparison columns for few-shot learning[EB/OL]. [2023-09-16]. https://arxiv.org/abs/1811.07100.
[38] HAO F S, HE F X, CHENG J, et al. Collect and select: semantic alignment metric learning for few-shot learning[C]//Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision. Piscataway: IEEE, 2019: 8459-8468.
[39] ZHANG C, CAI Y J, LIN G S, et al. DeepEMD: few-shot image classification with differentiable earth mover’s distance and structured classifiers[C]//Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2020: 12200-12210.
[40] WANG X L, GIRSHICK R, GUPTA A, et al. Non-local neural networks[C]//Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2018: 7794-7803.
[41] TE G S, LIU Y L, HU W, et al. Edge-aware graph representation learning and reasoning for face parsing[C]//Proceedings of the 16th European Conference on Computer Vision. Cham: Springer, 2020: 258-274.
[42] HE K M, FAN H Q, WU Y X, et al. Momentum contrast for unsupervised visual representation learning[C]//Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2020: 9726-9735.
[43] LV Y Q, ZHANG J, DAI Y C, et al. Simultaneously localize, segment and rank the camouflaged objects[C]//Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2021: 11586-11596.
[44] PERAZZI F, KRÄHENBÜHL P, PRITCH Y, et al. Saliency filters: contrast based filtering for salient region detection[C]//Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2012: 733-740.
[45] FAN D P, JI G P, QIN X, et al. Cognitive vision inspired object segmentation metric and loss function[J]. Scientia Sinica Informationis, 2021, 51(9): 1475.
[46] FAN D P, CHENG M M, LIU Y, et al. Structure-measure: a new way to evaluate foreground maps[EB/OL]. [2023-09-16]. https://arxiv.org/abs/1708.00786.
[47] MARGOLIN R, ZELNIK-MANOR L, TAL A. How to evaluate foreground maps[C]//Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2014: 248-255.
[48] ZHANG D W, ZHENG Z L, LI M L, et al. CSART: channel and spatial attention-guided residual learning for real-time object tracking[J]. Neurocomputing, 2021, 436: 260-272.