Journal of Frontiers of Computer Science and Technology ›› 2020, Vol. 14 ›› Issue (11): 1930-1942. DOI: 10.3778/j.issn.1673-9418.1912062

• Graphics and Image •


Perceptually Similar Image Classification Adversarial Example Generation Model

LI Junjie, WANG Qian   

  1. College of Computer Science, Chongqing University, Chongqing 400044, China
  • Online:2020-11-01 Published:2020-11-09


Abstract:

Compared with algorithms that iteratively modify the original image, existing generator-based adversarial example generation models can effectively reduce the construction time of an adversarial example, but the differences between the generated adversarial example and the original image remain obvious to human perception. This model aims to increase the similarity between the adversarial example and the original image under human perception, while maintaining the fooling ratio. The model treats adversarial example generation as an image enhancement operation on the original image, introduces a generative adversarial network, and improves the perceptual loss function to increase the similarity between the adversarial example and the original image in both content and feature space. It also uses a multi-classifier loss function to train the generator, thereby improving attack efficiency. The experimental results show that, compared with other generator-based models, this model effectively improves the structural similarity index between the adversarial example and the original image, and the fooling ratio does not decrease. This indicates that, while maintaining the fooling ratio, the model can effectively improve the perceptual similarity between the adversarial example and the original image.
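The abstract evaluates perceptual similarity with the structural similarity index (SSIM). The authors' exact windowed implementation is not given here, so the following is only an illustrative sketch of a single-window SSIM over flattened grayscale pixel values in [0, 1]; the constants `c1` and `c2` follow the commonly used defaults (0.01² and 0.03² for unit data range), not necessarily the paper's settings.

```python
def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Global (single-window) SSIM between two equal-length pixel sequences.

    SSIM(x, y) = ((2*mu_x*mu_y + c1) * (2*cov_xy + c2))
               / ((mu_x^2 + mu_y^2 + c1) * (var_x + var_y + c2))
    """
    n = len(x)
    mu_x = sum(x) / n
    mu_y = sum(y) / n
    var_x = sum((v - mu_x) ** 2 for v in x) / n
    var_y = sum((v - mu_y) ** 2 for v in y) / n
    cov_xy = sum((a - mu_x) * (b - mu_y) for a, b in zip(x, y)) / n
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )


# An image compared with itself scores exactly 1.0; a perturbed
# (adversarial) version scores below 1.0 -- closer to 1.0 means
# the perturbation is less perceptible.
original = [0.2, 0.4, 0.6, 0.8]
perturbed = [0.25, 0.35, 0.6, 0.8]
identical_score = ssim(original, original)
perturbed_score = ssim(original, perturbed)
```

In practice SSIM is computed over local sliding windows and averaged; this global variant only illustrates the statistic that the paper's evaluation is built on.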

Key words: adversarial attack, generative adversarial networks (GAN), perceptual loss, adversarial example, deep neural networks (DNN)