[1] SOHN K, BERTHELOT D, LI C L, et al. FixMatch: simplifying semi-supervised learning with consistency and confidence[C]// Proceedings of the 34th International Conference on Neural Information Processing Systems, Vancouver, Dec 6-12, 2020. Red Hook: Curran Associates, 2020: 596-608.
[2] LEE D H. Pseudo-label: the simple and efficient semi-supervised learning method for deep neural networks[C]// ICML 2013 Workshop on Challenges in Representation Learning, Atlanta, Jun 16-21, 2013.
[3] WANG S Y, JIN H, SUN J Z. GAN image adversarial sample generation method[J]. Journal of Frontiers of Computer Science and Technology, 2021, 15(4): 702-711.
[4] RASMUS A, VALPOLA H, HONKALA M, et al. Semi-supervised learning with ladder networks[C]// Proceedings of the 29th International Conference on Neural Information Processing Systems, Montreal, Dec 7-12, 2015. Red Hook: Curran Associates, 2015: 3546-3554.
[5] WANG X, KIHARA D, LUO J B, et al. EnAET: a self-trained framework for semi-supervised and supervised learning with ensemble transformations[J]. IEEE Transactions on Image Processing, 2021, 30: 1639-1647.
[6] GRANDVALET Y, BENGIO Y. Semi-supervised learning by entropy minimization[C]// Proceedings of the 18th International Conference on Neural Information Processing Systems, Vancouver, Dec 13-18, 2004. Red Hook: Curran Associates, 2004: 529-536.
[7] REN Z Z, YEH R, SCHWING A. Not all unlabeled data are equal: learning to weight data in semi-supervised learning[C]// Proceedings of the 34th International Conference on Neural Information Processing Systems, Vancouver, Dec 6-12, 2020. Red Hook: Curran Associates, 2020: 21786-21797.
[8] MIYATO T, MAEDA S, KOYAMA M, et al. Virtual adversarial training: a regularization method for supervised and semi-supervised learning[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019, 41(8): 1979-1993.
[9] LAINE S, AILA T. Temporal ensembling for semi-supervised learning[C]// Proceedings of the 5th International Conference on Learning Representations, Toulon, Apr 24-26, 2017.
[10] TARVAINEN A, VALPOLA H. Mean teachers are better role models: weight-averaged consistency targets improve semi-supervised deep learning results[C]// Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, Dec 4-9, 2017. Red Hook: Curran Associates, 2017: 1195-1204.
[11] ZHANG H Y, CISSE M, DAUPHIN Y N, et al. Mixup: beyond empirical risk minimization[C]// Proceedings of the 6th International Conference on Learning Representations, Vancouver, Apr 30 - May 3, 2018.
[12] DEVRIES T, TAYLOR G W. Improved regularization of convolutional neural networks with Cutout[J]. arXiv:1708.04552, 2017.
[13] CUBUK E D, ZOPH B, SHLENS J, et al. RandAugment: practical automated data augmentation with a reduced search space[C]// Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, Jun 14-19, 2020. Washington: IEEE Computer Society, 2020: 702-703.
[14] BERTHELOT D, CARLINI N, GOODFELLOW I, et al. MixMatch: a holistic approach to semi-supervised learning[C]// Proceedings of the 33rd International Conference on Neural Information Processing Systems, Vancouver, Dec 8-14, 2019. Red Hook: Curran Associates, 2019: 5050-5060.
[15] ZAGORUYKO S, KOMODAKIS N. Wide residual networks[C]// Proceedings of the 27th British Machine Vision Conference, York, Sep 19-22, 2016. Durham: BMVA, 2016.
[16] SUTSKEVER I, MARTENS J, DAHL G, et al. On the importance of initialization and momentum in deep learning[C]// Proceedings of the 30th International Conference on Machine Learning, Atlanta, Jun 16-21, 2013: 1139-1147.