Journal of Frontiers of Computer Science and Technology ›› 2024, Vol. 18 ›› Issue (12): 3080-3099. DOI: 10.3778/j.issn.1673-9418.2404001
• Frontiers·Surveys •
XU Yuhui, PAN Zhisong, XU Kun (徐宇晖, 潘志松, 徐堃)
Online: 2024-12-01
Published: 2024-11-29
XU Yuhui, PAN Zhisong, XU Kun. Review of Research on Adversarial Attack in Three Kinds of Images[J]. Journal of Frontiers of Computer Science and Technology, 2024, 18(12): 3080-3099.
徐宇晖, 潘志松, 徐堃. 面向三种形态图像的对抗攻击研究综述[J]. 计算机科学与探索, 2024, 18(12): 3080-3099.
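For reference managers, the same citation can be written as a BibTeX entry assembled from the metadata above; the citation key is an arbitrary placeholder:

@article{xu2024adversarial,
  author  = {Xu, Yuhui and Pan, Zhisong and Xu, Kun},
  title   = {Review of Research on Adversarial Attack in Three Kinds of Images},
  journal = {Journal of Frontiers of Computer Science and Technology},
  year    = {2024},
  volume  = {18},
  number  = {12},
  pages   = {3080--3099},
  doi     = {10.3778/j.issn.1673-9418.2404001}
}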
URL: http://fcst.ceaj.org/EN/10.3778/j.issn.1673-9418.2404001