[1] Liu G X, Liu S Y, Wu J F, et al. Machine vision object detection algorithm based on deep learning and application in banknote detection[J]. China Measurement & Test, 2019, 45(5): 1-9. 刘桂雄, 刘思洋, 吴俊芳, 等. 基于深度学习的机器视觉目标检测算法及在票据检测中应用[J]. 中国测试, 2019, 45(5): 1-9.
[2] Nanjiagelie, Li R, Wang H X, et al. Ultrasound image classification of hepatic echinococcosis using deep learning[J]. Journal of Shenzhen University (Science and Engineering), 2019, 36(6): 702-708. 南嘉格列, 李锐, 王海霞, 等. 基于深度学习的肝包虫病超声图像分型研究[J]. 深圳大学学报(理工版), 2019, 36(6): 702-708.
[3] Harikrishnan J, Sudarsan A, Sadashiv A, et al. Vision-face recognition attendance monitoring system for surveillance using deep learning technology and computer vision[C]//Proceedings of the 2019 International Conference on Vision Towards Emerging Trends in Communication and Networking, Vellore, Mar 30-31, 2019. Piscataway: IEEE, 2019: 1-5.
[4] Sampedro C, Rodriguez-Vazquez J, Rodriguez-Ramos A, et al. Deep learning-based system for automatic recognition and diagnosis of electrical insulator strings[J]. IEEE Access, 2019, 7: 101283-101308.
[5] Zhang L, Yuan F N, Zhang W R, et al. Review of fully convolutional neural network[J]. Computer Engineering and Applications, 2020, 56(1): 25-37. 章琳, 袁非牛, 张文睿, 等. 全卷积神经网络研究综述[J]. 计算机工程与应用, 2020, 56(1): 25-37.
[6] Li H, Wan X X. Image style transfer algorithm under deep convolutional neural network[J]. Computer Engineering and Applications, 2020, 56(2): 176-183. 李慧, 万晓霞. 深度卷积神经网络下的图像风格迁移算法[J]. 计算机工程与应用, 2020, 56(2): 176-183.
[7] Mao B C, Chen S K, Xie Y, et al. Exploration of classical deep learning algorithm in intelligent classification of Chinese randomized controlled trials[J]. Chinese Journal of Evidence-Based Medicine, 2019, 19(11): 1262-1267. 毛渤淳, 陈圣恺, 谢雨, 等. 经典深度学习算法对中文随机对照试验智能判别应用[J]. 中国循证医学杂志, 2019, 19(11): 1262-1267.
[8] Wang X, Li C, Chen J. Dilated convolution neural networks for Chinese word segmentation[J]. Journal of Chinese Information Processing, 2019, 33(9): 24-30. 王星, 李超, 陈吉. 基于膨胀卷积神经网络模型的中文分词方法[J]. 中文信息学报, 2019, 33(9): 24-30.
[9] Jaf S, Calder C. Deep learning for natural language parsing[J]. IEEE Access, 2019, 7: 131363-131373.
[10] Mahmud M, Shamim Kaiser M, Hussain A, et al. Applications of deep learning and reinforcement learning to biological data[J]. IEEE Transactions on Neural Networks and Learning Systems, 2018, 29(6): 2063-2079.
[11] Feng M C, Zheng J B, Ren J C, et al. Big data analytics and mining for effective visualization and trends forecasting of crime data[J]. IEEE Access, 2019, 7: 106111-106123.
[12] Ye Z Y, Feng A M, Gao H. Customer purchasing power prediction of Google store based on deep LightGBM ensemble learning model[J]. Journal of Computer Applications, 2019, 39(12): 3434-3439. 叶志宇, 冯爱民, 高航. 基于深度LightGBM集成学习模型的谷歌商店顾客购买力预测[J]. 计算机应用, 2019, 39(12): 3434-3439.
[13] Morgado P, Vasconcelos N. Semantically consistent regularization for zero-shot recognition[C]//Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, Jul 21-26, 2017. Washington: IEEE Computer Society, 2017: 2037-2046.
[14] Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition[J]. arXiv:1409.1556, 2014.
[15] Szegedy C, Liu W, Jia Y Q, et al. Going deeper with convolutions[C]//Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition, Boston, Jun 7-12, 2015. Piscataway: IEEE, 2015: 1-9.
[16] Wang K J, Zhao Y D, Xing X L. Deep learning in driverless vehicles[J]. CAAI Transactions on Intelligent Systems, 2018, 13(1): 55-69. 王科俊, 赵彦东, 邢向磊. 深度学习在无人驾驶汽车领域应用的研究进展[J]. 智能系统学报, 2018, 13(1): 55-69.
[17] Lai J, Rao R. Application of deep reinforcement learning in indoor UAV target search[J]. Computer Engineering and Applications, 2020, 56(17): 156-160. 赖俊, 饶瑞. 深度强化学习在室内无人机目标搜索中的应用[J]. 计算机工程与应用, 2020, 56(17): 156-160.
[18] Kwon M, Ju M, Choi S. Classification of various daily behaviors using deep learning and smart watch[C]//Proceedings of the 2017 International Conference on Ubiquitous and Future Networks, Milan, Jul 4-7, 2017. Piscataway: IEEE, 2017: 735-740.
[19] Khan S, Javed M H, Ahmed E, et al. Facial recognition using convolutional neural networks and implementation on smart glasses[C]//Proceedings of the 2019 International Conference on Information Science and Communication Technology, Karachi, Mar 9-10, 2019. Piscataway: IEEE, 2019: 1-6.
[20] Chen C J, Chen K C, Martin-Kuo M. Acceleration of neural network model execution on embedded systems[C]//Proceedings of the 2018 International Symposium on VLSI Design, Automation and Test, Hsinchu, China, Apr 16-19, 2018. Piscataway: IEEE, 2018: 1-3.
[21] Wu J, Qian X Z. Compact deep convolutional neural network in image recognition[J]. Journal of Frontiers of Computer Science and Technology, 2019, 13(2): 275-284. 吴进, 钱雪忠. 紧凑型深度卷积神经网络在图像识别中的应用[J]. 计算机科学与探索, 2019, 13(2): 275-284.
[22] Li Q H, Li C P, Zhang J, et al. Survey of compressed deep neural network[J]. Computer Science, 2019, 46(9): 1-14. 李青华, 李翠平, 张静, 等. 深度神经网络压缩综述[J]. 计算机科学, 2019, 46(9): 1-14.
[23] Ji R R, Lin S H, Chao F, et al. Deep neural network compression and acceleration: a review[J]. Journal of Computer Research and Development, 2018, 55(9): 1871-1888. 纪荣嵘, 林绍辉, 晁飞, 等. 深度神经网络压缩与加速综述[J]. 计算机研究与发展, 2018, 55(9): 1871-1888.
[24] Han S, Mao H, Dally W J. Deep compression: compressing deep neural networks with pruning, trained quantization and Huffman coding[J]. arXiv:1510.00149, 2015.
[25] Mao H, Han S, Pool J, et al. Exploring the regularity of sparse structure in convolutional neural networks[J]. arXiv:1705.08922, 2017.
[26] Wen W, Wu C, Wang Y, et al. Learning structured sparsity in deep neural networks[C]//Proceedings of the 30th International Conference on Neural Information Processing Systems, Barcelona, Dec 5-10, 2016. Red Hook: Curran Associates, 2016: 2074-2082.
[27] Liu Z, Li J G, Shen Z Q, et al. Learning efficient convolutional networks through network slimming[C]//Proceedings of the 2017 IEEE International Conference on Computer Vision, Venice, Oct 22-29, 2017. Washington: IEEE Computer Society, 2017: 2755-2763.
[28] Luo J H, Wu J X, Lin W Y. ThiNet: a filter level pruning method for deep neural network compression[C]//Proceedings of the 2017 IEEE International Conference on Computer Vision, Venice, Oct 22-29, 2017. Washington: IEEE Computer Society, 2017: 5068-5076.
[29] Zhang X, Zou J, He K, et al. Accelerating very deep convolutional networks for classification and detection[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016, 38(10): 1943-1955.
[30] Lebedev V, Ganin Y, Rakhuba M, et al. Speeding-up convolutional neural networks using fine-tuned CP-decomposition[J]. arXiv:1412.6553, 2014.
[31] Wu J X, Leng C, Wang Y H, et al. Quantized convolutional neural networks for mobile devices[C]//Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, Jun 27-30, 2016. Washington: IEEE Computer Society, 2016: 4820-4828.
[32] Hu Q H, Wang P S, Cheng J. From Hashing to CNNs: training binary weight networks via Hashing[J]. arXiv:1802.02733, 2018.
[33] Zagoruyko S, Komodakis N. Paying more attention to attention: improving the performance of convolutional neural networks via attention transfer[J]. arXiv:1612.03928, 2016.
[34] Lin M, Chen Q, Yan S C. Network in network[J]. arXiv:1312.4400, 2013.
[35] Iandola F N, Han S, Moskewicz M W, et al. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size[J]. arXiv:1602.07360, 2016.
[36] Howard A G, Zhu M L, Chen B, et al. MobileNets: efficient convolutional neural networks for mobile vision applications[J]. arXiv:1704.04861, 2017.
[37] Sandler M, Howard A, Zhu M L, et al. MobileNetV2: inverted residuals and linear bottlenecks[C]//Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, Jun 18-23, 2018. Washington: IEEE Computer Society, 2018: 4510-4520.
[38] Zhang X Y, Zhou X Y, Lin M X, et al. ShuffleNet: an extremely efficient convolutional neural network for mobile devices[C]//Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, Jun 18-23, 2018. Washington: IEEE Computer Society, 2018: 6848-6856.
[39] Ma N N, Zhang X Y, Zheng H T, et al. ShuffleNet V2: practical guidelines for efficient CNN architecture design[C]//LNCS 11218: Proceedings of the 15th European Conference on Computer Vision, Munich, Sep 8-14, 2018. Cham: Springer International Publishing, 2018: 122-138.
[40] Mehta S, Rastegari M, Shapiro L, et al. ESPNetv2: a light-weight, power efficient, and general purpose convolutional neural network[J]. arXiv:1811.11431, 2018.
[41] Mehta S, Rastegari M, Caspi A, et al. ESPNet: efficient spatial pyramid of dilated convolutions for semantic segmentation[C]//LNCS 11214: Proceedings of the 15th European Conference on Computer Vision, Munich, Sep 8-14, 2018. Cham: Springer International Publishing, 2018: 561-580.
[42] Wu B C, Wan A, Yue X Y, et al. Shift: a zero FLOP, zero parameter alternative to spatial convolutions[J]. arXiv:1711.08141, 2017.
[43] Jeon Y H, Kim J M. Constructing fast network through deconstruction of convolution[J]. arXiv:1806.07370, 2018.
[44] Chen W J, Xie D, Zhang Y, et al. All you need is a few shifts: designing efficient convolutional neural networks for image classification[J]. arXiv:1903.05285, 2019.
[45] Yu F, Koltun V, Funkhouser T. Dilated residual networks[C]//Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, Jul 21-26, 2017. Washington: IEEE Computer Society, 2017: 636-644.
[46] He K M, Zhang X Y, Ren S Q, et al. Deep residual learning for image recognition[C]//Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, Jun 27-30, 2016. Washington: IEEE Computer Society, 2016: 770-778.
[47] Ioffe S, Szegedy C. Batch normalization: accelerating deep network training by reducing internal covariate shift[J]. arXiv:1502.03167, 2015.
[48] Srivastava N, Hinton G, Krizhevsky A, et al. Dropout: a simple way to prevent neural networks from overfitting[J]. The Journal of Machine Learning Research, 2014, 15(1): 1929-1958.
[49] Wang P Q, Chen P F, Yuan Y, et al. Understanding convolution for semantic segmentation[C]//Proceedings of the 2018 IEEE Winter Conference on Applications of Computer Vision, Lake Tahoe, Mar 12-15, 2018. Washington: IEEE Computer Society, 2018: 1451-1460.
[50] Hamaguchi R, Fujita A, Nemoto K, et al. Effective use of dilated convolutions for segmenting small object instances in remote sensing imagery[J]. arXiv:1709.00179, 2017.
[51] He K M, Zhang X Y, Ren S Q, et al. Delving deep into rectifiers: surpassing human-level performance on ImageNet classification[C]//Proceedings of the 2015 IEEE International Conference on Computer Vision, Santiago, Dec 7-13, 2015. Washington: IEEE Computer Society, 2015: 1026-1034.
[52] Hu J, Shen L, Albanie S, et al. Squeeze-and-excitation networks[J]. arXiv:1709.01507, 2017.
[53] Cai H, Zhu L G, Han S. ProxylessNAS: direct neural architecture search on target task and hardware[J]. arXiv:1812.00332, 2018.
[54] Tan M X, Chen B, Pang R M, et al. MnasNet: platform-aware neural architecture search for mobile[C]//Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, Jun 15-20, 2019. Washington: IEEE Computer Society, 2019: 2815-2823.
[55] Wu B C, Dai X L, Zhang P Z, et al. FBNet: hardware-aware efficient ConvNet design via differentiable neural architecture search[J]. arXiv:1812.03443, 2018.
[56] Elsken T, Metzen J H, Hutter F. Neural architecture search: a survey[J]. arXiv:1808.05377, 2018.