[1] YAROWSKY D. Unsupervised word sense disambiguation rivaling supervised methods[C]//Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics, Cambridge, Jun 26-30, 1995. Stroudsburg: ACL, 1995: 189-196.
[2] TANHA J, VAN SOMEREN M, AFSARMANESH H. Semi-supervised self-training for decision tree classifiers[J]. International Journal of Machine Learning and Cybernetics, 2017, 8(1): 355-370.
[3] HALDER A, GHOSH S, GHOSH A. Aggregation pheromone metaphor for semi-supervised classification[J]. Pattern Recognition, 2013, 46(8): 2239-2248.
[4] WANG W, ZHOU Z H. A new analysis of co-training[C]//Proceedings of the 27th International Conference on Machine Learning, Haifa, Jun 21-24, 2010: 1135-1142.
[5] BLUM A, MITCHELL T. Combining labeled and unlabeled data with co-training[C]//Proceedings of the 11th Annual Conference on Computational Learning Theory, Madison, Jul 24-26, 1998. New York: ACM, 1998: 92-100.
[6] ZHOU Z H, LI M. Tri-training: exploiting unlabeled data using three classifiers[J]. IEEE Transactions on Knowledge and Data Engineering, 2005, 17(11): 1529-1541.
[7] YODER J, PRIEBE C E. Semi-supervised k-means++[J]. Journal of Statistical Computation and Simulation, 2017, 87(13): 2597-2608.
[8] ZHU X J. Semi-supervised learning literature survey[R]. Madison: University of Wisconsin-Madison, 2008.
[9] DU H L, TENG S H, ZHANG Y. Transductive support vector machine based on cooperative labeling[J]. Journal of Chinese Computer Systems, 2016, 37(11): 2443-2447.
[10] CEVIKALP H, FRANC V. Large-scale robust transductive support vector machines[J]. Neurocomputing, 2017, 235(1): 199-209.
[11] LUO Y, ZHU J, LI M, et al. Smooth neighbors on teacher graphs for semi-supervised learning[C]//Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, Jun 18-21, 2018. Washington: IEEE Computer Society, 2018: 8896-8905.
[12] VERMA V, KAWAGUCHI K, LAMB A, et al. Interpolation consistency training for semi-supervised learning[J]. Neural Networks, 2022, 145: 90-106.
[13] BERTHELOT D, CARLINI N, GOODFELLOW I, et al. MixMatch: a holistic approach to semi-supervised learning[C]//Advances in Neural Information Processing Systems 32, Vancouver, Dec 8-14, 2019: 5050-5060.
[14] SOHN K, BERTHELOT D, CARLINI N, et al. FixMatch: simplifying semi-supervised learning with consistency and confidence[C]//Advances in Neural Information Processing Systems 33, Dec 6-12, 2020: 596-608.
[15] YANG Y, XU Z. Rethinking the value of labels for improving class-imbalanced learning[C]//Advances in Neural Information Processing Systems 33, Dec 6-12, 2020: 19290-19301.
[16] KIM J, HUR Y, PARK S, et al. Distribution aligning refinery of pseudo-label for imbalanced semi-supervised learning[C]//Advances in Neural Information Processing Systems 33, Dec 6-12, 2020: 14567-14579.
[17] CHEN Y, ZHU X, LI W, et al. Semi-supervised learning under class distribution mismatch[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2020, 34(4): 3569-3576.
[18] HAN H, MA W, ZHOU M, et al. A novel semi-supervised learning approach to pedestrian reidentification[J]. IEEE Internet of Things Journal, 2020, 8(4): 3042-3052.
[19] XU Y, SHANG L, YE J, et al. Dash: semi-supervised learning with dynamic thresholding[C]//Proceedings of the 38th International Conference on Machine Learning, Jul 18-24, 2021: 11525-11536.
[20] ZHANG B, WANG Y, HOU W, et al. FlexMatch: boosting semi-supervised learning with curriculum pseudo labeling[C]//Advances in Neural Information Processing Systems 34, Dec 6-14, 2021: 18408-18419.
[21] FENG Z, ZHOU Q, GU Q, et al. DMT: dynamic mutual training for semi-supervised learning[J]. Pattern Recognition, 2022, 130: 108777.
[22] VAN ENGELEN J E, HOOS H H. A survey on semi-supervised learning[J]. Machine Learning, 2020, 109(2): 373-440.
[23] CHONG Y, DING Y, YAN Q, et al. Graph-based semi-supervised learning: a review[J]. Neurocomputing, 2020, 408: 216-230.
[24] HUANG G B, ZHU Q Y, SIEW C K. Extreme learning machine: theory and applications[J]. Neurocomputing, 2006, 70(1): 489-501.
[25] HUANG G B, ZHOU H, DING X, et al. Extreme learning machine for regression and multiclass classification[J]. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 2012, 42(2): 513-529.
[26] SCHMIDT W F, KRAAIJVELD M A, DUIN R P W. Feedforward neural networks with random weights[C]//Proceedings of the 11th IAPR International Conference on Pattern Recognition Methodology and Systems, The Hague, Aug 30-Sep 3, 1992. Piscataway: IEEE, 1992: 1-4.
[27] ZHAO J, WANG Z, CAO F, et al. A local learning algorithm for random weights networks[J]. Knowledge-Based Systems, 2015, 74(1): 159-166.
[28] CAO F, WANG D, ZHU H, et al. An iterative learning algorithm for feedforward neural networks with random weights[J]. Information Sciences, 2016, 328: 546-557.
[29] MOORE E H. On the reciprocal of the general algebraic matrix (abstract)[J]. Bulletin of the American Mathematical Society, 1920, 26: 394-395.
[30] JOE H. Estimation of entropy and other functionals of a multivariate density[J]. Annals of the Institute of Statistical Mathematics, 1989, 41(1): 683-697.
[31] GRETTON A, BORGWARDT K M, RASCH M J, et al. A kernel two-sample test[J]. Journal of Machine Learning Research, 2012, 13(1): 723-773.
[32] HUANG G B, CHEN L, SIEW C K. Universal approximation using incremental constructive feedforward networks with random hidden nodes[J]. IEEE Transactions on Neural Networks, 2006, 17(4): 879-892.