[1] ELREEDY D, ATIYA A F, KAMALOV F. A theoretical distribution analysis of synthetic minority oversampling technique (SMOTE) for imbalanced learning[J]. Machine Learning, 2024, 113(7): 4903-4923.
[2] LI Z C, HUANG M, LIU G J, et al. A hybrid method with dynamic weighted entropy for handling the problem of class imbalance with overlap in credit card fraud detection[J]. Expert Systems with Applications, 2021, 175: 114750.
[3] REZAEIPANAH A, AHMADI G. Breast cancer diagnosis using multi-stage weight adjustment in the MLP neural network[J]. The Computer Journal, 2022, 65(4): 788-804.
[4] ZHENG M, LI T, ZHENG X Y, et al. UFFDFR: undersampling framework with denoising, fuzzy c-means clustering, and representative sample selection for imbalanced data classification[J]. Information Sciences, 2021, 576: 658-680.
[5] TENG Z Y, CAO P, HUANG M, et al. Multi-label borderline oversampling technique[J]. Pattern Recognition, 2024, 145: 109953.
[6] DEVI D, BISWAS S K, PURKAYASTHA B. Correlation-based oversampling aided cost sensitive ensemble learning technique for treatment of class imbalance[J]. Journal of Experimental & Theoretical Artificial Intelligence, 2022, 34(1): 143-174.
[7] GUO H X, LI Y J, SHANG J, et al. Learning from class-imbalanced data: review of methods and applications[J]. Expert Systems with Applications, 2017, 73: 220-239.
[8] HE H B, GARCIA E A. Learning from imbalanced data[J]. IEEE Transactions on Knowledge and Data Engineering, 2009, 21(9): 1263-1284.
[9] DUBEY H, PUDI V. Class based weighted K-nearest neighbor over imbalance dataset[C]//Proceedings of the 2013 Pacific-Asia Conference on Knowledge Discovery and Data Mining. Berlin, Heidelberg: Springer, 2013: 305-316.
[10] FAN W, STOLFO S J, ZHANG J X, et al. AdaCost: misclassification cost-sensitive boosting[C]//Proceedings of the 16th International Conference on Machine Learning. San Francisco: Morgan Kaufmann, 1999: 97-105.
[11] ELREEDY D, ATIYA A F. A comprehensive analysis of synthetic minority oversampling technique (SMOTE) for handling class imbalance[J]. Information Sciences, 2019, 505: 32-64.
[12] ZHU Y W, YAN Y T, ZHANG Y W, et al. EHSO: evolutionary hybrid sampling in overlapping scenarios for imbalanced learning[J]. Neurocomputing, 2020, 417: 333-346.
[13] CHAWLA N V, BOWYER K W, HALL L O, et al. SMOTE: synthetic minority over-sampling technique[J]. Journal of Artificial Intelligence Research, 2002, 16: 321-357.
[14] HAN H, WANG W Y, MAO B H. Borderline-SMOTE: a new over-sampling method in imbalanced data sets learning[C]//Proceedings of the 2005 International Conference on Intelligent Computing. Berlin, Heidelberg: Springer, 2005: 878-887.
[15] HE H B, BAI Y, GARCIA E A, et al. ADASYN: adaptive synthetic sampling approach for imbalanced learning[C]//Proceedings of the 2008 IEEE International Joint Conference on Neural Networks. Piscataway: IEEE, 2008: 1322-1328.
[16] DOUZAS G, BACAO F, LAST F. Improving imbalanced learning through a heuristic oversampling method based on k-means and SMOTE[J]. Information Sciences, 2018, 465: 1-20.
[17] KUNAKORNTUM I, HINTHONG W, PHUNCHONGHARN P. A synthetic minority based on probabilistic distribution (SyMProD) oversampling for imbalanced datasets[J]. IEEE Access, 2020, 8: 114692-114704.
[18] SAĞLAM F, CENGIZ M A. A novel SMOTE-based re-sampling technique trough noise detection and the boosting procedure[J]. Expert Systems with Applications, 2022, 200: 117023.
[19] BARUA S, ISLAM M M, YAO X, et al. MWMOTE: majority weighted minority oversampling technique for imbalanced data set learning[J]. IEEE Transactions on Knowledge and Data Engineering, 2014, 26(2): 405-425.
[20] LENG Q K, GUO J M, JIAO E J, et al. NanBDOS: adaptive and parameter-free borderline oversampling via natural neighbor search for class-imbalance learning[J]. Knowledge-Based Systems, 2023, 274: 110665.
[21] KRISHNA K, NARASIMHA MURTY M. Genetic K-means algorithm[J]. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 1999, 29(3): 433-439.
[22] DASGUPTA S. Performance guarantees for hierarchical clustering[C]//Proceedings of the 2002 International Conference on Computational Learning Theory. Berlin, Heidelberg: Springer, 2002: 351-363.
[23] RODRIGUEZ A, LAIO A. Clustering by fast search and find of density peaks[J]. Science, 2014, 344(6191): 1492-1496.
[24] CIESLAK D A, CHAWLA N V, STRIEGEL A. Combating imbalance in network intrusion datasets[C]//Proceedings of the 2006 IEEE International Conference on Granular Computing. Piscataway: IEEE, 2006: 732-737.
[25] LIU Y X, LIU Y, YU B X B, et al. Noise-robust oversampling for imbalanced data classification[J]. Pattern Recognition, 2023, 133: 109008.
[26] TAO X M, GUO X Y, ZHENG Y J, et al. Self-adaptive oversampling method based on the complexity of minority data in imbalanced datasets classification[J]. Knowledge-Based Systems, 2023, 277: 110795.
[27] MORENO-TORRES J G, SAEZ J A, HERRERA F. Study on the impact of partition-induced dataset shift on k-fold cross-validation[J]. IEEE Transactions on Neural Networks and Learning Systems, 2012, 23(8): 1304-1312.
[28] ASUNCION A, NEWMAN D J. UCI machine learning repository[EB/OL]. [2024-09-19]. http://www.ics.uci.edu/~mlearn/MLRepository.html.
[29] FREUND Y, SCHAPIRE R E. A decision-theoretic generalization of on-line learning and an application to boosting[J]. Journal of Computer and System Sciences, 1997, 55(1): 119-139.
[30] HASTIE T, ROSSET S, ZHU J, et al. Multi-class AdaBoost[J]. Statistics and Its Interface, 2009, 2(3): 349-360.
[31] ZHANG Z Y, YANG Y. A comprehensive evaluation method of ADS-B data quality based on clustering and AdaBoost[J]. Acta Aeronautica et Astronautica Sinica, 2024, 45(13): 329584. (in Chinese)
[32] ROJAS R. Neural networks: a systematic introduction[M]. Berlin, Heidelberg: Springer Science & Business Media, 2013.
[33] FRIEDMAN J H. Greedy function approximation: a gradient boosting machine[J]. The Annals of Statistics, 2001, 29(5): 1189-1232.
[34] FRIEDMAN J H. Stochastic gradient boosting[J]. Computational Statistics & Data Analysis, 2002, 38(4): 367-378.
[35] HASTIE T, TIBSHIRANI R, FRIEDMAN J H, et al. The elements of statistical learning: data mining, inference, and prediction[M]. New York: Springer, 2009.
[36] DEMŠAR J. Statistical comparisons of classifiers over multiple data sets[J]. The Journal of Machine Learning Research, 2006, 7: 1-30.
[37] GARCÍA S, FERNÁNDEZ A, LUENGO J, et al. A study of statistical techniques and performance measures for genetics-based machine learning: accuracy and interpretability[J]. Soft Computing, 2009, 13(10): 959-977.
[38] GARCÍA S, FERNÁNDEZ A, LUENGO J, et al. Advanced nonparametric tests for multiple comparisons in the design of experiments in computational intelligence and data mining: experimental analysis of power[J]. Information Sciences, 2010, 180(10): 2044-2064.