[1] JIAO L C, YANG S Y, HAN J W. Thoughts and prospects of brain-inspired intelligence[J]. Bulletin of National Natural Science Foundation of China, 2019, 33(6): 646-650.
焦李成, 杨淑媛, 韩军伟. 类脑智能与深度学习的几个问题与思考[J]. 中国科学基金, 2019, 33(6): 646-650.
[2] PAN W W, WANG X Y, SONG M L, et al. Survey on generating adversarial examples[J]. Journal of Software, 2020, 31(1): 67-81.
潘文雯, 王新宇, 宋明黎, 等. 对抗样本生成技术综述[J]. 软件学报, 2020, 31(1): 67-81.
[3] KAHN G, VILLAFLOR A, DING B, et al. Self-supervised deep reinforcement learning with generalized computation graphs for robot navigation[C]//Proceedings of the 2018 IEEE International Conference on Robotics and Automation, Brisbane, May 21-25, 2018. Piscataway: IEEE, 2018: 5129-5136.
[4] LIU C, CAO Y, LUO Y, et al. DeepFood: deep learning-based food image recognition for computer-aided dietary assessment[C]//LNCS 9677: Proceedings of the 14th International Conference on Smart Homes and Health Telematics, Wuhan, May 25-27, 2016. Berlin, Heidelberg: Springer, 2016: 37-48.
[5] SZEGEDY C, ZAREMBA W, SUTSKEVER I, et al. Intriguing properties of neural networks[J]. arXiv:1312.6199, 2013.
[6] GAVRILESCU M, VIZIREANU N. Predicting the sixteen personality factors (16PF) of an individual by analyzing facial features[J]. EURASIP Journal on Image and Video Processing, 2017(1): 59.
[7] ZHANG L. Face gender recognition research based on local features and support vector machine[J]. Applied Mechanics & Materials, 2014, 687-691: 3714-3717.
[8] GOODFELLOW I J, SHLENS J, SZEGEDY C. Explaining and harnessing adversarial examples[J]. arXiv:1412.6572, 2014.
[9] PAPERNOT N, MCDANIEL P, JHA S, et al. The limitations of deep learning in adversarial settings[C]//Proceedings of the 2016 IEEE European Symposium on Security and Privacy, Saarbrücken, Mar 21-24, 2016. Piscataway: IEEE, 2016: 372-387.
[10] CARLINI N, WAGNER D. Towards evaluating the robustness of neural networks[C]//Proceedings of the 2017 IEEE Symposium on Security and Privacy, San Jose, May 22-26, 2017. Washington: IEEE Computer Society, 2017: 39-57.
[11] NIDHRA S, DONDETI J. Black box and white box testing techniques—a literature review[J]. International Journal of Embedded Systems and Applications, 2012, 2(2): 29-50.
[12] GOODFELLOW I J, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial nets[C]//Proceedings of the 27th International Conference on Neural Information Processing Systems, Montreal, Dec 8-13, 2014. Red Hook: Curran Associates, 2014: 2672-2680.
[13] MIRZA M, OSINDERO S. Conditional generative adversarial nets[J]. arXiv:1411.1784, 2014.
[14] DENTON E L, CHINTALA S, FERGUS R. Deep generative image models using a Laplacian pyramid of adversarial networks[C]//Proceedings of the Annual Conference on Neural Information Processing Systems, Montreal, Dec 7-12, 2015. Red Hook: Curran Associates, 2015: 1486-1494.
[15] OLKKONEN H, PESOLA P. Gaussian pyramid wavelet transform for multiresolution analysis of images[J]. Graphical Models and Image Processing, 1996, 58(4): 394-398.
[16] BURT P J, ADELSON E H. The Laplacian pyramid as a compact image code[J]. IEEE Transactions on Communications, 1983, 31(4): 532-540.
[17] RADFORD A, METZ L, CHINTALA S. Unsupervised representation learning with deep convolutional generative adversarial networks[J]. arXiv:1511.06434, 2015.
[18] CHEN X, DUAN Y, HOUTHOOFT R, et al. InfoGAN: interpretable representation learning by information maximizing generative adversarial nets[C]//Proceedings of the Annual Conference on Neural Information Processing Systems, Barcelona, Dec 5-10, 2016. Red Hook: Curran Associates, 2016: 2172-2180.
[19] LIU E H, HUANG S, GU X, et al. An extension method of deep learning applications test case set based on GAN[C]//Proceedings of the 18th China Fault Tolerant Computing Conference, Beijing, Aug 14-17, 2019: 613-620.
刘二虎, 黄松, 顾雄, 等. 一种基于 GAN的深度学习应用系统测试用例集扩充方法[C]//CFTC2019: 第18届全国容错计算学术会议论文集, 北京, 2019: 613-620.
[20] XIAO C, LI B, ZHU J Y, et al. Generating adversarial examples with adversarial networks[J]. arXiv:1801.02610, 2018.
[21] BUCILUĂ C, CARUANA R, NICULESCU-MIZIL A. Model compression[C]//Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Philadelphia, Aug 20-23, 2006. New York: ACM, 2006: 535-541.
[22] BA J, CARUANA R. Do deep nets really need to be deep?[C]//Proceedings of the 27th International Conference on Neural Information Processing Systems, Montreal, Dec 8-13, 2014. Red Hook: Curran Associates, 2014: 2654-2662.
[23] HINTON G, VINYALS O, DEAN J. Distilling the knowledge in a neural network[J]. arXiv:1503.02531, 2015.
[24] ROMERO A, BALLAS N, KAHOU S E, et al. FitNets: hints for thin deep nets[J]. arXiv:1412.6550, 2014.
[25] DONG G, GAO J, DU R, et al. Robustness of network of networks under targeted attack[J]. Physical Review E, 2013, 87(5): 052804.
[26] WANG Z, BOVIK A C, SHEIKH H R, et al. Image quality assessment: from error visibility to structural similarity[J]. IEEE Transactions on Image Processing, 2004, 13(4): 600-612.
[27] ROSEBROCK A. Fingerprinting images for near-duplicate detection[EB/OL]. [2020-02-21]. https://realpython.com/fingerprinting-images-for-near-duplicate-detection.