[1] CORNEANU C A, MADADI M, ESCALERA S, et al. What does it mean to learn in deep networks? And, how does one detect adversarial attacks?[C]//Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, Jun 16-20, 2019. Piscataway: IEEE, 2019: 4757-4766.
[2] ZHANG Q L, DU J C, XU R F. Sarcasm detection based on adversarial learning[J]. Acta Scientiarum Naturalium Universitatis Pekinensis, 2019, 55(1): 29-36.
[3] PAPERNOT N, MCDANIEL P, GOODFELLOW I, et al. Practical black-box attacks against deep learning systems using adversarial examples[J]. arXiv:1602.02697, 2016.
[4] XIE C H, ZHANG Z S, ZHOU Y Y, et al. Improving transferability of adversarial examples with input diversity[C]//Proceedings of the 2019 IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, Jun 16-20, 2019. Piscataway: IEEE, 2019: 2730-2739.
[5] YI P, WANG K D, HUANG C, et al. Adversarial attacks in artificial intelligence: a survey[J]. Journal of Shanghai Jiao Tong University, 2018, 52(10): 1298-1306.
[6] GOODFELLOW I, SHLENS J, SZEGEDY C. Explaining and harnessing adversarial examples[J]. arXiv:1412.6572, 2014.
[7] XIE C H, WU Y X, VAN DER MAATEN L, et al. Feature denoising for improving adversarial robustness[C]//Proceedings of the 2019 IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, Jun 16-20, 2019. Piscataway: IEEE, 2019: 501-509.
[8] GOODFELLOW I J, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial nets[C]//Proceedings of the Annual Conference on Neural Information Processing Systems 2014, Montreal, Dec 8-13, 2014. Red Hook: Curran Associates, 2014: 2672-2680.
[9] XIAO C W, LI B, ZHU J Y, et al. Generating adversarial examples with adversarial networks[C]//Proceedings of the 27th International Joint Conference on Artificial Intelligence, Stockholm, Jul 13-19, 2018. Menlo Park: AAAI, 2018: 3905-3911.
[10] WANG S Y, JIN H, SUN J Z. Method for image adversarial samples generating based on GAN[J]. Journal of Frontiers of Computer Science and Technology, 2021, 15(4): 702-711.
[11] PAPERNOT N, MCDANIEL P, WU X, et al. Distillation as a defense to adversarial perturbations against deep neural networks[C]//Proceedings of the 2016 IEEE Symposium on Security and Privacy, San Jose, May 22-26, 2016. Washington: IEEE Computer Society, 2016: 582-597.
[12] CARLINI N, WAGNER D. Towards evaluating the robustness of neural networks[C]//Proceedings of the 2017 IEEE Symposium on Security and Privacy, San Jose, May 22-24, 2017. Washington: IEEE Computer Society, 2017: 39-57.
[13] AKHTAR N, LIU J, MIAN A. Defense against universal adversarial perturbations[C]//Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, Jun 18-23, 2018. Washington: IEEE Computer Society, 2018: 3389-3398.
[14] MOOSAVI-DEZFOOLI S M, FAWZI A, FAWZI O, et al. Universal adversarial perturbations[C]//Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, Jul 21-26, 2017. Washington: IEEE Computer Society, 2017: 86-94.
[15] KUMARI N, SINGH M, SINHA A, et al. Harnessing the vulnerability of latent layers in adversarially trained models[C]//Proceedings of the 28th International Joint Conference on Artificial Intelligence, Macao, China, Aug 10-16, 2019: 2779-2785.
[16] LIN T Y, DOLLÁR P, GIRSHICK R B, et al. Feature pyramid networks for object detection[C]//Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, Jul 21-26, 2017. Washington: IEEE Computer Society, 2017: 936-944.
[17] KURAKIN A, GOODFELLOW I, BENGIO S. Adversarial examples in the physical world[J]. arXiv:1607.02533, 2016.
[18] PAPERNOT N, MCDANIEL P, JHA S, et al. The limitations of deep learning in adversarial settings[C]//Proceedings of the 2016 IEEE European Symposium on Security and Privacy, Saarbrücken, Mar 21-24, 2016. Piscataway: IEEE, 2016: 372-387.