[1] 刘瑞祺, 李虎, 王东霞, 等. 图像对抗样本防御技术研究综述[J]. 计算机科学与探索, 2023, 17(12): 2827-2839.
LIU R Q, LI H, WANG D X, et al. Survey of image adversarial example defense techniques[J]. Journal of Frontiers of Computer Science and Technology, 2023, 17(12): 2827-2839.
[2] SZEGEDY C, ZAREMBA W, SUTSKEVER I, et al. Intriguing properties of neural networks[EB/OL]. [2024-12-04]. https://arxiv.org/abs/1312.6199.
[3] GOODFELLOW I J, SHLENS J, SZEGEDY C. Explaining and harnessing adversarial examples[EB/OL]. [2024-12-04]. https://arxiv.org/abs/1412.6572.
[4] CARLINI N, WAGNER D. Towards evaluating the robustness of neural networks[C]//Proceedings of the 2017 IEEE Symposium on Security and Privacy. Piscataway: IEEE, 2017: 39-57.
[5] QIU H N, XIAO C W, YANG L, et al. SemanticAdv: generating adversarial examples via attribute-conditional image editing[EB/OL]. [2024-12-05]. https://arxiv.org/abs/1906.07927.
[6] MUSTAFA A, KHAN S H, HAYAT M, et al. Image super-resolution as a defense against adversarial attacks[J]. IEEE Transactions on Image Processing, 2020, 29: 1711-1724.
[7] WANG H, DENG Y F, YOO S, et al. AGKD-BML: defense against adversarial attack by attention guided knowledge distillation and bi-directional metric learning[EB/OL]. [2024-12-05]. https://arxiv.org/abs/2108.06017.
[8] SONG C B, FAN Y B, ZHOU A Y, et al. Regional adversarial training for better robust generalization[J]. International Journal of Computer Vision, 2024, 132(10): 4510-4520.
[9] ZHANG S D, GAO H C, RAO Q X. Defense against adversarial attacks by reconstructing images[J]. IEEE Transactions on Image Processing, 2021, 30: 6117-6129.
[10] GUO S S, LI X Y, ZHU P C, et al. ADS-detector: an attention-based dual stream adversarial example detection method[J]. Knowledge-Based Systems, 2023, 265: 110388.
[11] ABUSNAINA A, WU Y H, ARORA S, et al. Adversarial example detection using latent neighborhood graph[C]//Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision. Piscataway: IEEE, 2021: 7667-7676.
[12] MU H, YANG X, PENG A J, et al. Detecting adversarial examples via an orthogonal knowledge-distillation-based approach[C]//Proceedings of the 7th Global Intelligent Industry Conference (SPIE Vol. 13278), 2024.
[13] MA C, ZHAO C X, SHI H L, et al. MetaAdvDet: towards robust detection of evolving adversarial attacks[C]//Proceedings of the 27th ACM International Conference on Multimedia. New York: ACM, 2019: 692-701.
[14] KURAKIN A, GOODFELLOW I, BENGIO S. Adversarial machine learning at scale[EB/OL]. [2024-12-06]. https://arxiv.org/abs/1611.01236.
[15] MADRY A, MAKELOV A, SCHMIDT L, et al. Towards deep learning models resistant to adversarial attacks[EB/OL]. [2024-12-06]. https://arxiv.org/abs/1706.06083.
[16] PAPERNOT N, MCDANIEL P, GOODFELLOW I. Transferability in machine learning: from phenomena to black-box attacks using adversarial samples[EB/OL]. [2024-12-06]. https://arxiv.org/abs/1605.07277.
[17] WU T, LUO T, WUNSCH D C. Black-box attack using adversarial examples: a new method of improving transferability[J]. World Scientific Annual Review of Artificial Intelligence, 2023, 1: 2250005.
[18] CHEN P Y, ZHANG H, SHARMA Y, et al. ZOO: zeroth order optimization based black-box attacks to deep neural networks without training substitute models[C]//Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security. New York: ACM, 2017: 15-26.
[19] BAI Y, WANG Y S, ZENG Y Y, et al. Query efficient black-box adversarial attack on deep neural networks[J]. Pattern Recognition, 2023, 133: 109037.
[20] TRAMÈR F, KURAKIN A, PAPERNOT N, et al. Ensemble adversarial training: attacks and defenses[EB/OL]. [2024-12-06]. https://arxiv.org/abs/1705.07204.
[21] JIA X J, ZHANG Y, WU B Y, et al. LAS-AT: adversarial training with learnable attack strategy[C]//Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2022: 13388-13398.
[22] SHAO J H, GENG S J, FU Z J, et al. CardioDefense: defending against adversarial attack in ECG classification with adversarial distillation training[J]. Biomedical Signal Processing and Control, 2024, 91: 105922.
[23] BUCKMAN J, ROY A, RAFFEL C, et al. Thermometer encoding: one hot way to resist adversarial examples[C]//Proceedings of the 6th International Conference on Learning Representations, 2018.
[24] COHEN G, SAPIRO G, GIRYES R. Detecting adversarial samples using influence functions and nearest neighbors[C]//Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2020: 14441-14450.
[25] LI Q, CHEN J, HE K, et al. Model-agnostic adversarial example detection via high-frequency amplification[J]. Computers & Security, 2024, 141: 103791.
[26] XU W L, EVANS D, QI Y J. Feature squeezing: detecting adversarial examples in deep neural networks[C]//Proceedings of the 2018 Network and Distributed System Security Symposium, 2018.
[27] SNELL J, SWERSKY K, ZEMEL R S. Prototypical networks for few-shot learning[EB/OL]. [2024-12-06]. https://arxiv.org/abs/1703.05175.
[28] HE J, HONG R C, LIU X L, et al. Revisiting local descriptor for improved few-shot classification[J]. ACM Transactions on Multimedia Computing, Communications, and Applications, 2022, 18: 1-23.
[29] FINN C, ABBEEL P, LEVINE S. Model-agnostic meta-learning for fast adaptation of deep networks[C]//Proceedings of the 34th International Conference on Machine Learning, 2017: 1126-1135.
[30] LI Z G, ZHOU F W, CHEN F, et al. Meta-SGD: learning to learn quickly for few-shot learning[EB/OL]. [2024-12-05]. https://arxiv.org/abs/1707.09835.
[31] JIANG W, KWOK J, ZHANG Y. Subspace learning for effective meta-learning[C]//Proceedings of the 39th International Conference on Machine Learning, 2022: 10177-10194.
[32] LEE J J, YOON S W. XB-MAML: learning expandable basis parameters for effective meta-learning with wide task coverage[EB/OL]. [2024-12-05]. https://arxiv.org/abs/2403.06768.
[33] LU C M, WANG X F, YANG A M, et al. A few-shot-based model-agnostic meta-learning for intrusion detection in security of Internet of things[J]. IEEE Internet of Things Journal, 2023, 10(24): 21309-21321.
[34] LIU W Z, ZHANG W L, YANG K W, et al. Enhancing generalization in few-shot learning for detecting unknown adversarial examples[J]. Neural Processing Letters, 2024, 56(2): 85.
[35] ZHOU Y, HU X F, HAN J Q, et al. High frequency patterns play a key role in the generation of adversarial examples[J]. Neurocomputing, 2021, 459: 131-141.
[36] JUNG S, CHUNG M, SHIN Y G. Adversarial example detection by predicting adversarial noise in the frequency domain[J]. Multimedia Tools and Applications, 2023, 82(16): 25235-25251.
[37] KIM M, YUN J. AEGuard: image feature-based independent adversarial example detection model[J]. Security and Communication Networks, 2022(1): 3440123.
[38] HU J, SHEN L, ALBANIE S, et al. Squeeze-and-excitation networks[EB/OL]. [2024-12-05]. https://arxiv.org/abs/1709.01507.
[39] DONG Y P, LIAO F Z, PANG T Y, et al. Boosting adversarial attacks with momentum[EB/OL]. [2024-12-05]. https://arxiv.org/abs/1710.06081.
[40] KURAKIN A, GOODFELLOW I, BENGIO S. Adversarial examples in the physical world[EB/OL]. [2024-12-04]. https://arxiv.org/abs/1607.02533.
[41] PAPERNOT N, MCDANIEL P, JHA S, et al. The limitations of deep learning in adversarial settings[EB/OL]. [2024-12-04]. https://arxiv.org/abs/1511.07528.
[42] CHEN P Y, SHARMA Y, ZHANG H, et al. EAD: elastic-net attacks to deep neural networks via adversarial examples[EB/OL]. [2024-12-04]. https://arxiv.org/abs/1709.04114.
[43] UESATO J, O’DONOGHUE B, VAN DEN OORD A, et al. Adversarial risk and the dangers of evaluating against weak attacks[EB/OL]. [2024-12-04]. https://arxiv.org/abs/1802.05666.
[44] XIAO C W, ZHU J Y, LI B, et al. Spatially transformed adversarial examples[EB/OL]. [2024-12-06]. https://arxiv.org/abs/1801.02612.
[45] MIYATO T, MAEDA S I, KOYAMA M, et al. Distributional smoothing with virtual adversarial training[EB/OL]. [2024-12-06]. https://arxiv.org/abs/1507.00677.
[46] HOSSEINI H, XIAO B C, JAISWAL M, et al. On the limitation of convolutional neural networks in recognizing negative images[C]//Proceedings of the 16th IEEE International Conference on Machine Learning and Applications. Piscataway: IEEE, 2017: 352-358.
[47] GOODFELLOW I, QIN Y, BERTHELOT D. Evaluation methodology for attacks against confidence thresholding models[EB/OL]. [2024-12-06]. https://openreview.net/forum?id=H1g0piA9tQ.
[48] MOOSAVI-DEZFOOLI S M, FAWZI A, FROSSARD P. DeepFool: a simple and accurate method to fool deep neural networks[EB/OL]. [2024-12-06]. https://arxiv.org/abs/1511.04599.
[49] JANG U, WU X, JHA S. Objective metrics and gradient descent algorithms for adversarial examples in machine learning[C]//Proceedings of the 33rd Annual Computer Security Applications Conference. New York: ACM, 2017: 262-277.
[50] VINYALS O, BLUNDELL C, LILLICRAP T, et al. Matching networks for one shot learning[C]//Proceedings of the 30th International Conference on Neural Information Processing Systems, 2016: 3637-3645.
[51] VACANTI G, VAN LOOVEREN A. Adversarial detection and correction by matching prediction distributions[EB/OL]. [2024-12-06]. https://arxiv.org/abs/2002.09364.