[1] ZHANG S, GONG Y H, WANG J J. The development of deep convolution neural network and its applications on computer vision[J]. Journal of Computers, 2019, 42(3): 453-482.
张顺, 龚怡宏, 王进军. 深度卷积神经网络的发展及其在计算机视觉领域的应用[J]. 计算机学报, 2019, 42(3): 453-482.
[2] ZHOU F Y, JIN L P, DONG J. Review of convolutional neural network[J]. Journal of Computers, 2017, 40(6): 1229-1251.
周飞燕, 金林鹏, 董军. 卷积神经网络研究综述[J]. 计算机学报, 2017, 40(6): 1229-1251.
[3] CHAPELLE O, SCHOLKOPF B, ZIEN A. Semi-supervised learning[M]. Cambridge: MIT Press, 2006: 34-36.
[4] MEY A, LOOG M. Improvability through semi-supervised learning: a survey of theoretical results[J]. arXiv:1908.09574, 2019.
[5] XU J, HENRIQUES J F, VEDALDI A. Invariant information clustering for unsupervised image classification and segmentation[J]. arXiv:1807.06653, 2018.
[6] ZHOU Z H. Machine learning[M]. Beijing: Tsinghua University Press, 2018: 311-312.
周志华. 机器学习[M]. 北京: 清华大学出版社, 2018: 311-312.
[7] LIU G H, ZHANG X B. A method for personal identification of communication radiation source based on deep belief network[J]. Chinese Journal of Radio Science, 2020, 35(3): 395-403.
刘高辉, 张晓博. 一种基于深度置信网络的通信辐射源个体识别方法[J]. 电波科学学报, 2020, 35(3): 395-403.
[8] HAN S, HAN Q H. Review of semi-supervised learning research[J]. Computer Engineering and Applications, 2020, 56(6): 19-27.
韩嵩, 韩秋弘. 半监督学习研究的述评[J]. 计算机工程与应用, 2020, 56(6): 19-27.
[9] LI H. Statistical learning methods[M]. Beijing: Tsinghua University Press, 2019: 84-88.
李航. 统计学习方法[M]. 北京: 清华大学出版社, 2019: 84-88.
[10] QIU X P. Neural network and deep learning[M]. Beijing: China Machine Press, 2020: 124-129.
邱锡鹏. 神经网络与深度学习[M]. 北京: 机械工业出版社, 2020: 124-129.
[11] CASCANTE-BONILLA P, TAN F W, QI Y J, et al. Curriculum labeling: self-paced pseudo-labeling for semi-supervised learning[J]. arXiv:2001.06001, 2020.
[12] ZHOU Z H, WANG W, GAO W, et al. Introduction to machine learning theory[M]. Beijing: China Machine Press, 2020.
周志华, 王魏, 高尉, 等. 机器学习理论导引[M]. 北京: 机械工业出版社, 2020.
[13] GRANDVALET Y, BENGIO Y. Semi-supervised learning by entropy minimization[C]//Proceedings of the Advances in Neural Information Processing Systems, Vancouver, Dec 13-18, 2004. Red Hook: Curran Associates, 2004: 529-536.
[14] LEE D. Pseudo-label: the simple and efficient semi-supervised learning method for deep neural networks[C]//Proceedings of the Workshop on Challenges in Representation Learning, Jun 21, 2013: 8-17.
[15] CAI L M, WANG L J. Digital image processing[M]. Beijing: Tsinghua University Press, 2019.
蔡利梅, 王利娟. 数字图像处理[M]. 北京: 清华大学出版社, 2019.
[16] ZHANG X F, WU G. Data augmentation method based on generative adversarial network[J]. Computer Systems & Applications, 2019, 28(10): 201-206.
张晓峰, 吴刚. 基于生成对抗网络的数据增强方法[J]. 计算机系统应用, 2019, 28(10): 201-206.
[17] CUBUK E D, ZOPH B, MANE D, et al. AutoAugment: learning augmentation strategies from data[C]//Proceedings of the 2019 IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, Jun 16-20, 2019. Piscataway: IEEE, 2019: 113-123.
[18] CUBUK E D, ZOPH B, SHLENS J, et al. RandAugment: practical automated data augmentation with a reduced search space[J]. arXiv:1909.13719, 2019.
[19] ZHANG H, CISSE M, DAUPHIN Y, et al. mixup: beyond empirical risk minimization[J]. arXiv:1710.09412, 2018.
[20] BLUM A, MITCHELL T M. Combining labeled and unlabeled data with co-training[C]//Proceedings of the 11th Annual Conference on Computational Learning Theory, Madison, Jul 24-26, 1998. New York: ACM, 1998: 92-100.
[21] QIAO S, SHEN W, ZHANG Z S, et al. Deep co-training for semi-supervised image recognition[J]. arXiv:1803.05984, 2018.
[22] CHEN D D, WANG W, GAO W, et al. Tri-net for semi-supervised deep learning[C]//Proceedings of the 27th International Joint Conference on Artificial Intelligence, Stockholm, Jul 13-19, 2018: 2014-2020.
[23] CIRESAN D, MEIER U, GAMBARDELLA L, et al. Deep, big, simple neural nets for handwritten digit recognition[J]. Neural Computation, 2010, 22(12): 3207-3220.
[24] LAINE S, AILA T. Temporal ensembling for semi-supervised learning[J]. arXiv:1610.02242, 2016.
[25] OUALI Y, HUDELOT C, TAMI M. An overview of deep semi-supervised learning[J]. arXiv:2006.05278, 2020.
[26] TARVAINEN A, VALPOLA H. Mean teachers are better role models: weight-averaged consistency targets improve semi-supervised deep learning results[J]. arXiv:1703.01780, 2018.
[27] XIE Q Z, DAI Z H, HOVY E H, et al. Unsupervised data augmentation for consistency training[J]. arXiv:1904.12848, 2019.
[28] BERTHELOT D, CARLINI N, GOODFELLOW I, et al. MixMatch: a holistic approach to semi-supervised learning[J]. arXiv:1905.02249, 2019.
[29] SOHN K, BERTHELOT D, LI C L, et al. FixMatch: simplifying semi-supervised learning with consistency and confidence[J]. arXiv:2001.07685, 2020.
[30] GOODFELLOW I J, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial networks[J]. arXiv:1406.2661, 2014.
[31] ZOU X F, ZHU D J. Review on generative adversarial network[J]. Computer Systems & Applications, 2019, 28(11): 1-9.
邹秀芳, 朱定局. 生成对抗网络研究综述[J]. 计算机系统应用, 2019, 28(11): 1-9.
[32] SALIMANS T, GOODFELLOW I J, ZAREMBA W, et al. Improved techniques for training GANs[J]. arXiv:1606.03498, 2016.
[33] ODENA A. Semi-supervised learning with generative adversarial networks[J]. arXiv:1606.01583, 2016.
[34] KHAN S. Convolutional neural networks and computer vision[M]. Beijing: China Machine Press, 2019.
KHAN S. 卷积神经网络与计算机视觉[M]. 黄智濒, 戴志涛, 译. 北京: 机械工业出版社, 2019.
[35] KRIZHEVSKY A, HINTON G. Learning multiple layers of features from tiny images[D]. Toronto: University of Toronto, 2009.
[36] COATES A, NG A, LEE H. An analysis of single-layer net-works in unsupervised feature learning[C]//Proceedings of the 14th International Conference on Artificial Intelligence and Statistics, Fort Lauderdale, Apr 16-18, 2011: 215-223.
[37] LU J, GONG P H, YE J P. Learning from very few samples: a survey[J]. arXiv:2009.02653, 2020.
[38] BACHMAN P, HJELM R D, BUCHWALTER W. Learning representations by maximizing mutual information across views[C]//Proceedings of the Annual Conference on Neural Information Processing Systems, Vancouver, Dec 8-14, 2019: 15509-15519.
[39] HJELM R, FEDOROV A, LAVOIE-MARCHILDON S, et al. Learning deep representations by mutual information estimation and maximization[J]. arXiv:1808.06670, 2019.
[40] ZHAI X, OLIVER A, KOLESNIKOV A, et al. S4L: self-supervised semi-supervised learning[J]. arXiv:1905.03670, 2019.
[41] SCHMARJE L, SANTAROSSA M, KOCH K. A survey on semi, self and unsupervised learning for image classification[J]. arXiv:2002.08721, 2020.
[42] CHEN J A, YANG Z C, YANG D Y. MixText: linguistically-informed interpolation of hidden space for semi-supervised text classification[J]. arXiv:2004.12239, 2020.
[43] WANG P S, SONG Y, DAI L R. Fine-grained image classification with multi-channel visual attention[J]. Journal of Data Acquisition & Processing, 2019, 34(1): 157-166.
王培森, 宋彦, 戴礼荣. 基于多通道视觉注意力的细粒度图像分类[J]. 数据采集与处理, 2019, 34(1): 157-166.
[44] CHENG W J, CHEN W Q. Hyperspectral image classification based on MCFFN-attention[J]. Computer Engineering and Applications, 2020, 56(24): 201-206.
程文娟, 陈文强. 基于MCFFN-Attention的高光谱图像分类[J]. 计算机工程与应用, 2020, 56(24): 201-206.