Journal of Frontiers of Computer Science and Technology, 2023, Vol. 17, Issue 12: 2827-2839. DOI: 10.3778/j.issn.1673-9418.2303080

• Frontiers·Surveys •

Survey of Image Adversarial Example Defense Techniques

LIU Ruiqi, LI Hu, WANG Dongxia, ZHAO Chongyang, LI Boyu   

  1. National Key Laboratory of Science and Technology on Information System Security, Institute of Systems Engineering, Academy of Military Sciences, Beijing 100101, China
  • Online: 2023-12-01   Published: 2023-12-01

Abstract: The rapid development and broad application of artificial intelligence introduce new security problems, among which the generation of and defense against adversarial examples for deep neural networks is a prominent research topic. Deep neural networks are applied most widely in the image domain and are also most easily deceived by image adversarial examples, so research on defense techniques against image adversarial examples is an important means of improving the security of AI applications. There is still no unified explanation for why image adversarial examples exist, but the phenomenon can be observed and understood from different dimensions, which offers insights for designing targeted defense methods. This paper sorts out and analyzes the current mainstream hypotheses on the existence of adversarial examples, including the blind spot hypothesis, the linear hypothesis, the decision boundary hypothesis, and the feature hypothesis, as well as the relations between these hypotheses and typical adversarial example generation methods. On this basis, image adversarial example defense techniques are summarized along two dimensions, model-based and data-based, and the applicable scenarios, advantages, and disadvantages of the different methods are compared and analyzed. Most existing defense techniques target specific adversarial example generation methods, and no universal defense theory or method exists yet. In real applications, the specific application scenario, potential security risks, and other factors must be considered in order to optimize and combine existing defense methods. Future research can deepen the work on generalized defense theory, evaluation of defense effectiveness, and systematic protection strategies.
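To make the abstract's two threads concrete, the following is a minimal PyTorch sketch (not taken from the paper): FGSM, the one-step attack motivated by the linear hypothesis (x_adv = x + ε·sign(∇_x L)), and one adversarial training step as a representative data-based defense. The function names, the epsilon value, and the assumption of a standard classifier with inputs in [0, 1] are all illustrative choices, not the authors' method.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    # FGSM: under the linear hypothesis the loss is locally near-linear
    # in the input, so a single step along the gradient sign suffices.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    # Clamp back to the valid pixel range.
    return x_adv.clamp(0.0, 1.0).detach()

def adversarial_training_step(model, optimizer, x, y, epsilon=8 / 255):
    # One step of adversarial training (a data-based defense): augment
    # the clean batch with adversarial examples so the learned decision
    # boundary is pushed away from them.
    x_adv = fgsm_attack(model, x, y, epsilon)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

As the survey notes, a defense like this is tied to the attack it trains against; a model hardened with FGSM examples may remain vulnerable to stronger iterative attacks, which is one reason no universal defense method exists yet.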

Key words: adversarial examples, artificial intelligence security, adversarial defense
