[1] MI Y X, ZHONG Z Z, HUANG Y G, et al. Privacy-preserving face recognition using trainable feature subtraction[C]//Proceedings of the 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2024: 297-307.
[2] WENG X S, IVANOVIC B, WANG Y, et al. PARA-Drive: parallelized architecture for real-time autonomous driving[C]//Proceedings of the 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2024: 15449-15458.
[3] RANATHUNGA S, LEE E A, PRIFTI SKENDULI M, et al. Neural machine translation for low-resource languages: a survey[J]. ACM Computing Surveys, 2023, 55(11): 1-37.
[4] CHEN C, HU Y C, YANG C H, et al. HyPoradise: an open baseline for generative speech recognition with large language models[C]//Advances in Neural Information Processing Systems 36, 2023.
[5] WU J W, SUN Y C. Recommendation system for medical consultation integrating knowledge graph and deep learning methods[J]. Journal of Frontiers of Computer Science and Technology, 2021, 15(8): 1432-1440.
[6] GU T Y, LIU K, DOLAN-GAVITT B, et al. BadNets: evaluating backdooring attacks on deep neural networks[J]. IEEE Access, 2019, 7: 47230-47244.
[7] CHEN X, LIU C, LI B, et al. Targeted backdoor attacks on deep learning systems using data poisoning[EB/OL]. [2024-10-20]. https://arxiv.org/abs/1712.05526.
[8] LI Y Z, LI Y M, WU B Y, et al. Invisible backdoor attack with sample-specific triggers[C]//Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision. Piscataway: IEEE, 2021: 16443-16452.
[9] ZHONG N, QIAN Z X, ZHANG X P. Imperceptible backdoor attack: from input space to feature representation[EB/OL]. [2024-10-20]. https://arxiv.org/abs/2205.03190.
[10] YANG S, LU H Y. Image-text retrieval backdoor attack with diffusion-based image editing[J]. Journal of Frontiers of Computer Science and Technology, 2024, 18(4): 1068-1082.
[11] ZENG Y, PAN M Z, JUST H A, et al. NARCISSUS: a practical clean-label backdoor attack with limited information[C]//Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security. New York: ACM, 2023: 771-785.
[12] WANG Y L, HUANG G, SONG S J, et al. Regularizing deep networks with semantic data augmentation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(7): 3733-3748.
[13] SELVARAJU R R, COGSWELL M, DAS A, et al. Grad-CAM: visual explanations from deep networks via gradient-based localization[C]//Proceedings of the 2017 IEEE International Conference on Computer Vision. Piscataway: IEEE, 2017: 618-626.
[14] BARNI M, KALLAS K, TONDI B. A new backdoor attack in CNNs by training set corruption without label poisoning[C]//Proceedings of the 2019 IEEE International Conference on Image Processing. Piscataway: IEEE, 2019: 101-105.
[15] TURNER A, TSIPRAS D, MADRY A. Label-consistent backdoor attacks[EB/OL]. [2024-10-20]. https://arxiv.org/abs/1912.02771.
[16] SAHA A, SUBRAMANYA A, PIRSIAVASH H. Hidden trigger backdoor attacks[C]//Proceedings of the 34th AAAI Conference on Artificial Intelligence. Menlo Park: AAAI, 2020: 11957-11965.
[17] SOURI H, GOLDBLUM M, FOWL L H, et al. Sleeper agent: scalable hidden trigger backdoors for neural networks trained from scratch[C]//Advances in Neural Information Processing Systems 35, 2022: 19165-19178.
[18] CHENG S Y, DONG Y P, PANG T Y, et al. Improving black-box adversarial attacks with a transfer-based prior[C]//Advances in Neural Information Processing Systems 32, 2019.
[19] QIN C L, MARTENS J, GOWAL S, et al. Adversarial robustness through local linearization[C]//Advances in Neural Information Processing Systems 32, 2019.
[20] KRIZHEVSKY A, HINTON G. Learning multiple layers of features from tiny images: technical report[R]. Toronto: University of Toronto, 2009.
[21] KUMAR N, BERG A C, BELHUMEUR P N, et al. Attribute and simile classifiers for face verification[C]//Proceedings of the 2009 IEEE 12th International Conference on Computer Vision. Piscataway: IEEE, 2009: 365-372.
[22] LE Y, YANG X S. Tiny ImageNet visual recognition challenge[EB/OL]. [2024-10-20]. http://vision.stanford.edu/teaching/cs231n/reports/2015/pdfs/yle_project.pdf.
[23] LIU Z W, LUO P, WANG X G, et al. Deep learning face attributes in the wild[C]//Proceedings of the 2015 IEEE International Conference on Computer Vision. Piscataway: IEEE, 2015: 3730-3738.
[24] GRIFFIN G, HOLUB A, PERONA P. Caltech-256 object category dataset: technical report 7694[R]. Pasadena: California Institute of Technology, 2007.
[25] WANG G H, MA H, GAO Y S, et al. One-to-multiple clean-label image camouflage (OmClic) based backdoor attack on deep learning[J]. Knowledge-Based Systems, 2024, 288: 111456.
[26] MCINNES L, HEALY J. UMAP: uniform manifold approximation and projection for dimension reduction[EB/OL]. [2024-10-20]. https://arxiv.org/abs/1802.03426.
[27] GAO Y S, XU C G, WANG D R, et al. STRIP: a defence against trojan attacks on deep neural networks[C]//Proceedings of the 35th Annual Computer Security Applications Conference. New York: ACM, 2019: 113-125.
[28] WANG B L, YAO Y S, SHAN S, et al. Neural cleanse: identifying and mitigating backdoor attacks in neural networks[C]//Proceedings of the 2019 IEEE Symposium on Security and Privacy. Piscataway: IEEE, 2019: 707-723.
[29] LIU K, DOLAN-GAVITT B, GARG S. Fine-pruning: defending against backdooring attacks on deep neural networks[C]//Proceedings of the 21st International Symposium on Research in Attacks, Intrusions, and Defenses. Cham: Springer, 2018: 273-294.