Journal of Frontiers of Computer Science and Technology ›› 2024, Vol. 18 ›› Issue (12): 3080-3099.DOI: 10.3778/j.issn.1673-9418.2404001

• Frontiers·Surveys •

Review of Research on Adversarial Attack in Three Kinds of Images

XU Yuhui, PAN Zhisong, XU Kun   

  1. Command and Control Engineering College, Army Engineering University of PLA, Nanjing 210000, China
  • Online: 2024-12-01  Published: 2024-11-29


Abstract: Deep learning has achieved numerous breakthroughs in recent years, and applications built on it have spread to an ever-wider range of fields. However, deep neural networks are vulnerable: they are highly susceptible to adversarial examples, which pose serious security problems for deployed applications and have made adversarial attack a persistently active research area. Because deep neural networks are widely used in image tasks, research on adversarial attacks against images is key to improving security, and a large body of work has approached the problem from different angles. Existing studies on image attacks can be grouped by image modality into three categories: visible light images, infrared images, and synthetic aperture radar (SAR) images. This paper first introduces the basic concepts and terminology of image adversarial examples, then classifies and summarizes the adversarial attack methods for the three image modalities according to their underlying attack ideas, and compares these methods in terms of attack success rate (ASR), memory footprint, and applicable scenarios. It also briefly reviews defense research in the field of image adversarial examples, summarizing the three main classes of existing defense methods. Finally, it analyzes the current state of research on image adversarial examples, looks ahead to possible future research directions for adversarial attacks in the image domain, summarizes four problems that may arise, and offers corresponding solutions.
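To make the core idea concrete, the sketch below applies the classic fast gradient sign method (FGSM, one of the basic attack ideas surveys of this kind typically cover) to a toy linear classifier and computes the attack success rate (ASR) the abstract refers to. The NumPy model, data, and epsilon value are illustrative assumptions, not drawn from the surveyed papers.

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon):
    """One-step FGSM: shift each input feature by +/- epsilon along the
    sign of the loss gradient with respect to the input."""
    return x + epsilon * np.sign(grad)

def attack_success_rate(clean_preds, adv_preds, labels):
    """ASR: fraction of originally correct samples that the attack flips."""
    correct = clean_preds == labels
    flipped = correct & (adv_preds != labels)
    return flipped.sum() / correct.sum()

# Toy binary classifier: predict sign(w . x), labels in {-1, +1}.
w = np.array([0.8, -0.5, 0.3])
X = np.array([[0.2, 0.1, 0.4],
              [0.5, 0.9, 0.1],
              [0.1, 0.3, 0.2]])
labels = np.array([1, -1, 1])

# For a linear score, the input gradient is w itself; pushing against
# the label direction increases a margin-style loss for each sample.
grads = -labels[:, None] * w
X_adv = fgsm_perturb(X, grads, epsilon=0.6)

clean_preds = np.sign(X @ w)
adv_preds = np.sign(X_adv @ w)
print(attack_success_rate(clean_preds, adv_preds, labels))
# prints 1.0: both originally-correct samples are flipped
```

Note that ASR is computed only over samples the model classified correctly before the attack; a sample that was already misclassified is not counted as an attack success.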

Key words: deep learning, adversarial attack, visible light image, infrared image, synthetic aperture radar (SAR) image, adversarial examples
