Journal of Frontiers of Computer Science and Technology (计算机科学与探索), 2022, Vol. 16, Issue 11: 2619-2627. DOI: 10.3778/j.issn.1673-9418.2104117
Attribute Selection via Maximizing Independent-and-Effective Classification Information Ratio

LIU Ye1,2, DAI Jianhua1,2,+, CHEN Jiaolong1,2
+ Corresponding author, E-mail: jhdai@hunnu.edu.cn
Received: 2021-04-30
Revised: 2021-06-15
Online: 2022-11-01
Published: 2021-06-17
About the author: LIU Ye, born in 1996 in Jiangxi, M.S. candidate. Her research interests include knowledge discovery and artificial intelligence.
Abstract: Attribute selection in rough sets is of great practical value. Most existing attribute selection methods overlook the relationship among three quantities: the classification information provided by a candidate attribute, the redundant information it carries, and the classification information retained by the already selected attributes once the candidate is added. Therefore, using classical mutual information, an attribute significance function called the effective classification information ratio is first defined, and an attribute selection method based on this ratio is proposed. The method effectively selects candidate attributes that provide a large amount of effective classification information while carrying little redundant information. In addition, taking into account the effect of a newly added candidate attribute on the classification information retained by the selected attributes, the concept of the independent-and-effective classification information ratio is further proposed, and an improved attribute selection method based on it is constructed. The improved method helps balance the effective classification information and the redundant information of attributes while improving the overall discriminative ability of the attribute subset. Finally, comparative experiments against existing attribute selection methods are carried out in terms of classification performance and statistical tests, and the results demonstrate the effectiveness of the two proposed attribute selection methods.
LIU Ye, DAI Jianhua, CHEN Jiaolong. Attribute Selection via Maximizing Independent-and-Effective Classification Information Ratio[J]. Journal of Frontiers of Computer Science and Technology, 2022, 16(11): 2619-2627.
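The abstract presents both selectors (ASECIR and ASIECIR in the tables below) as attribute selection methods driven by mutual-information-based ratio criteria, presumably realized as a greedy forward search, as is typical for such criteria. The exact definitions of the two ratios are given in the full paper; the sketch below only illustrates the general shape of such a forward search, with a purely illustrative relevance-to-redundancy ratio standing in for the paper's criteria:

```python
# Minimal sketch of greedy forward attribute selection driven by a
# mutual-information ratio. The scoring function is a stand-in; the paper's
# effective / independent-and-effective classification information ratios
# are defined in the full text and differ from this illustration.
import numpy as np
from sklearn.metrics import mutual_info_score

def ratio_score(X, y, f, selected):
    """Illustrative criterion: relevance I(f; C) divided by
    (1 + average redundancy of f with the already selected attributes)."""
    relevance = mutual_info_score(y, X[:, f])  # discrete attribute values assumed
    if not selected:
        return relevance
    redundancy = np.mean([mutual_info_score(X[:, f], X[:, s]) for s in selected])
    return relevance / (1.0 + redundancy)

def greedy_select(X, y, k):
    """Forward search: repeatedly add the highest-scoring remaining attribute."""
    selected, remaining = [], list(range(X.shape[1]))
    while remaining and len(selected) < k:
        best = max(remaining, key=lambda f: ratio_score(X, y, f, selected))
        selected.append(best)
        remaining.remove(best)
    return selected
```

With discrete data, mutual_info_score computes the mutual information directly; continuous attributes would first need discretization (see the note after Table 1).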
Dataset | Samples | Attributes | Classes |
---|---|---|---|
Arrh | 452 | 206 | 13 |
Car | 1 728 | 6 | 4 |
Chess | 3 196 | 36 | 2 |
Clean1 | 476 | 166 | 2 |
Colon | 62 | 2 000 | 2 |
Glass | 214 | 9 | 7 |
Libras | 360 | 90 | 15 |
Lung | 73 | 326 | 7 |
Lymph | 148 | 18 | 4 |
Musk2 | 707 | 166 | 2 |
Vote | 435 | 16 | 2 |
Wpbc33 | 198 | 32 | 2 |
Zoo | 101 | 16 | 7 |
Table 1 Description of benchmark datasets
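The preprocessing of these datasets is not reproduced on this page; since mutual-information-based criteria require discrete attributes, a common (assumed) setup is to discretize continuous attributes before selection. A sketch under the assumption of a UCI-style CSV file with the class label in the last column:

```python
# Hypothetical preprocessing for the benchmark datasets in Table 1: equal-width
# discretization of continuous attributes so that discrete mutual information
# can be computed. The file layout (class label in the last column) is an
# assumption, not taken from the paper.
import pandas as pd
from sklearn.preprocessing import KBinsDiscretizer

def load_discrete(path, n_bins=5):
    data = pd.read_csv(path, header=None)
    X, y = data.iloc[:, :-1].copy(), data.iloc[:, -1].to_numpy()
    numeric_cols = X.select_dtypes(include="number").columns
    if len(numeric_cols) > 0:
        disc = KBinsDiscretizer(n_bins=n_bins, encode="ordinal", strategy="uniform")
        X[numeric_cols] = disc.fit_transform(X[numeric_cols])
    return X.to_numpy(), y
```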
Dataset | DISR | NJMIM | GainRatio | MIFS | mRMR | NMIFS | CIFE | MRI | ASECIR | ASIECIR
---|---|---|---|---|---|---|---|---|---|---
Arrh | 62.42 | 62.85 | 60.42 | 61.74 | 61.74 | 62.85 | 56.21 | 61.08 | 60.86 | 61.30 |
Car | 91.03 | 91.03 | 91.03 | 91.03 | 91.03 | 91.03 | 91.03 | 91.03 | 95.95 | 95.95 |
Chess | 95.78 | 95.90 | 95.62 | 94.49 | 95.96 | 95.59 | 95.84 | 96.06 | 96.28 | 96.28 |
Clean1 | 87.60 | 82.54 | 84.87 | 85.70 | 83.81 | 83.82 | 83.38 | 84.45 | 84.24 | 85.90 |
Colon | 90.24 | 95.24 | 86.9 | 91.67 | 88.33 | 91.90 | 88.81 | 95.00 | 93.33 | 93.33 |
Glass | 57.84 | 57.84 | 56.97 | 58.38 | 55.54 | 54.59 | 57.42 | 57.42 | 57.38 | 57.86 |
Libras | 61.94 | 70.00 | 64.17 | 73.33 | 66.67 | 61.39 | 65.56 | 66.11 | 68.89 | 70.28 |
Lung | 92.14 | 89.11 | 90.54 | 89.11 | 91.79 | 91.96 | 77.14 | 91.96 | 87.68 | 89.46 |
Lymph | 80.29 | 78.33 | 80.43 | 74.33 | 73.57 | 75.71 | 81.14 | 79.14 | 74.05 | 76.33 |
Musk2 | 90.36 | 91.51 | 91.80 | 90.38 | 92.07 | 90.80 | 93.07 | 91.23 | 91.09 | 91.66 |
Vote | 93.31 | 92.63 | 92.18 | 94.01 | 93.56 | 93.56 | 92.41 | 92.40 | 96.55 | 97.02 |
Wpbc33 | 67.24 | 70.26 | 74.32 | 69.74 | 72.71 | 71.82 | 67.79 | 67.79 | 74.79 | 74.79 |
Zoo | 92.88 | 93.57 | 93.10 | 92.65 | 93.79 | 93.79 | 91.72 | 94.71 | 93.55 | 93.57 |
Avg. Acc/% | 81.77 | 82.37 | 81.72 | 82.04 | 81.58 | 81.45 | 80.12 | 82.18 | 82.66 | 83.36 |
Avg. Rank | 5.77 | 5.04 | 6.42 | 5.81 | 5.62 | 6.04 | 6.96 | 5.08 | 5.08 | 3.19 |
Table 2 Classification accuracy (%) of different methods with the KNN classifier
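The cross-validation protocol and the K of the KNN classifier used for Table 2 are specified in the full paper; the snippet below only sketches how such an accuracy figure is typically obtained from a selected attribute subset, assuming 10-fold cross-validation and 3 neighbours:

```python
# Illustrative evaluation of a selected attribute subset with KNN; the fold
# count and neighbour count are assumptions, not taken from the paper.
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def knn_accuracy(X, y, selected, n_neighbors=3, folds=10):
    clf = KNeighborsClassifier(n_neighbors=n_neighbors)
    scores = cross_val_score(clf, X[:, selected], y, cv=folds)
    return 100.0 * scores.mean()  # percentage, matching the Acc/% unit in Table 2
```

The same pattern applies to Tables 3 and 4, with the classifier swapped for C4.5 and SVM respectively.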
Dataset | DISR | NJMIM | GainRatio | MIFS | mRMR | NMIFS | CIFE | MRI | ASECIR | ASIECIR
---|---|---|---|---|---|---|---|---|---|---
Arrh | 58.19 | 56.86 | 56.42 | 59.73 | 58.86 | 56.86 | 55.55 | 54.41 | 56.64 | 57.09 |
Car | 95.37 | 95.37 | 94.97 | 95.37 | 95.37 | 95.37 | 95.37 | 95.37 | 96.99 | 96.99 |
Chess | 99.25 | 98.69 | 99.25 | 97.50 | 99.25 | 99.12 | 99.12 | 99.25 | 99.09 | 99.19 |
Clean1 | 82.78 | 81.72 | 79.21 | 82.55 | 83.40 | 80.04 | 79.39 | 80.90 | 81.91 | 83.63 |
Colon | 90.24 | 90.24 | 93.14 | 90.48 | 85.71 | 91.90 | 93.50 | 90.00 | 96.67 | 96.67 |
Glass | 65.8 | 65.80 | 55.15 | 63.07 | 64.46 | 64.46 | 63.51 | 63.48 | 66.26 | 66.26 |
Libras | 56.94 | 57.22 | 62.78 | 68.33 | 61.94 | 58.33 | 59.72 | 67.78 | 70.00 | 66.39 |
Lung | 61.61 | 64.11 | 64.64 | 55.89 | 61.61 | 61.61 | 68.93 | 65.54 | 61.43 | 60.36 |
Lymph | 69.52 | 70.90 | 75.57 | 72.29 | 71.52 | 73.57 | 71.52 | 72.24 | 76.95 | 82.43 |
Musk2 | 89.39 | 89.82 | 90.38 | 88.53 | 88.55 | 88.26 | 90.10 | 89.40 | 90.94 | 90.80 |
Vote | 97.01 | 97.01 | 97.01 | 95.40 | 96.55 | 96.55 | 95.87 | 96.78 | 96.55 | 96.79 |
Wpbc33 | 72.76 | 73.79 | 70.82 | 69.76 | 71.79 | 71.74 | 69.26 | 70.26 | 74.29 | 74.29 |
Zoo | 96.55 | 97.01 | 97.01 | 95.87 | 96.55 | 96.55 | 96.10 | 96.78 | 95.40 | 96.78 |
Avg. Acc/% | 79.65 | 79.89 | 79.72 | 79.60 | 79.66 | 79.57 | 79.84 | 80.17 | 81.78 | 82.13 |
Avg. Rank | 5.42 | 5.46 | 5.31 | 6.92 | 5.65 | 6.42 | 6.69 | 5.85 | 4.23 | 3.04 |
Table 3 Classification accuracy (%) of different methods with the C4.5 classifier
Dataset | DISR | NJMIM | GainRatio | MIFS | mRMR | NMIFS | CIFE | MRI | ASECIR | ASIECIR
---|---|---|---|---|---|---|---|---|---|---
Arrh | 70.36 | 69.90 | 67.70 | 67.05 | 69.91 | 70.57 | 68.15 | 70.13 | 69.25 | 68.37 |
Car | 83.45 | 83.45 | 83.45 | 83.45 | 83.45 | 83.45 | 83.45 | 83.45 | 83.57 | 83.57 |
Chess | 95.71 | 95.03 | 95.46 | 93.77 | 95.34 | 95.12 | 95.65 | 95.62 | 95.53 | 95.53 |
Clean1 | 81.32 | 79.43 | 82.98 | 81.94 | 81.53 | 81.33 | 80.43 | 80.68 | 82.35 | 82.99 |
Colon | 91.67 | 90.00 | 88.57 | 85.48 | 90.00 | 93.33 | 88.57 | 88.57 | 90.24 | 90.24 |
Glass | 46.28 | 46.75 | 48.14 | 47.71 | 47.73 | 47.73 | 47.73 | 47.71 | 49.16 | 48.20 |
Libras | 63.61 | 60.28 | 61.94 | 69.44 | 61.67 | 57.22 | 63.89 | 63.06 | 64.72 | 66.94 |
Lung | 80.46 | 81.09 | 80.90 | 80.90 | 80.05 | 80.06 | 76.68 | 77.73 | 80.66 | 81.30 |
Lymph | 75.62 | 74.24 | 79.00 | 74.90 | 76.24 | 79.05 | 78.48 | 76.95 | 78.29 | 81.00 |
Musk2 | 88.68 | 90.81 | 82.18 | 86.14 | 91.94 | 91.80 | 88.97 | 90.10 | 90.81 | 91.37 |
Vote | 96.09 | 95.63 | 96.55 | 96.55 | 96.32 | 96.32 | 96.55 | 96.79 | 96.32 | 96.10 |
Wpbc33 | 77.27 | 65.65 | 62.62 | 59.59 | 72.72 | 70.70 | 74.24 | 76.26 | 76.26 | 77.27 |
Zoo | 90.09 | 89.09 | 94.00 | 94.09 | 90.09 | 93.09 | 96.00 | 96.00 | 96.00 | 95.00 |
Avg. Acc/% | 80.05 | 78.57 | 78.73 | 78.54 | 79.77 | 79.98 | 79.91 | 80.23 | 81.01 | 81.38 |
Avg. Rank | 5.73 | 7.58 | 5.85 | 6.81 | 5.96 | 5.27 | 5.65 | 5.35 | 3.73 | 3.08 |
Table 4 Classification accuracy (%) of different methods with the SVM classifier
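Each accuracy table also reports an average rank per method (rank 1 = best accuracy on a dataset, averaged over the 13 datasets), and the abstract mentions statistical tests; the exact statistical analysis applied is detailed in the full paper. A sketch of how such average ranks and a Friedman test over an accuracy matrix can be computed:

```python
# Sketch: average ranks across datasets plus a Friedman test over an accuracy
# matrix of shape (n_datasets, n_methods). The matrix layout is an assumption.
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

def average_ranks(acc):
    ranks = np.vstack([rankdata(-row) for row in acc])  # higher accuracy -> smaller rank
    return ranks.mean(axis=0)

def friedman_test(acc):
    # friedmanchisquare takes one measurement sequence (over datasets) per method
    return friedmanchisquare(*[acc[:, j] for j in range(acc.shape[1])])
```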