Journal of Frontiers of Computer Science and Technology ›› 2022, Vol. 16 ›› Issue (1): 242-252. DOI: 10.3778/j.issn.1673-9418.2009020
LIU Yu, MENG Min+ (corresponding author, E-mail: minmeng@gdut.edu.cn), WU Jigang
Received: 2020-09-08
Revised: 2020-11-06
Online: 2022-01-01
Published: 2020-11-19
About author: LIU Yu, born in 1996 in Hengyang, Hunan, M.S. candidate. His research interests include image processing and face recognition.
Abstract: Traditional multi-view algorithms in the semi-supervised setting rarely account for the differing information carried by each view and ignore the consistency of spatial structure across views, so they perform poorly on multi-view data containing noise and outliers. Although semi-supervised multi-view methods have been proposed, they do not fully exploit the discriminative information of samples or the subspace structure obtained under different metric learning, which leads to unsatisfactory classification results. To address these problems, this paper proposes a semi-supervised multi-view classification algorithm with consistency constraints (SMCC). First, the consistency constraint across views is strengthened based on the Hilbert-Schmidt independence criterion (HSIC). Then, the dimensionality of the data is reduced by a feature projection that preserves the local manifold structure of the original data, and an F-norm constraint is incorporated to improve the robustness of the algorithm. Furthermore, adaptive weights are assigned to the individual views to reduce the impact of the different feature information and noise contamination they contain. Finally, the model is solved with the linearized alternating direction method of multipliers and eigendecomposition. Experimental results on four benchmark datasets show that the proposed algorithm captures more effective discriminative information from multi-view data and achieves higher accuracy.
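As a concrete illustration of the cross-view consistency idea in the abstract, the following minimal sketch computes the empirical Hilbert-Schmidt independence criterion between the projected features of two views; the linear-kernel choice, the function name `empirical_hsic`, and the toy data are illustrative assumptions, not the authors' exact formulation.

```python
# Minimal sketch of the empirical HSIC between two views' projected features.
# Illustrative only: the linear kernel and variable names are assumptions,
# not the SMCC paper's exact objective.
import numpy as np

def empirical_hsic(proj_a: np.ndarray, proj_b: np.ndarray) -> float:
    """Empirical HSIC for projections of shape (n_samples, d_a) and (n_samples, d_b),
    e.g. X_v @ W_v for each view. Larger values indicate stronger dependence, so
    maximizing it encourages the two views' projections to agree on the same samples."""
    n = proj_a.shape[0]
    k = proj_a @ proj_a.T                 # linear kernel matrix of view a, (n, n)
    l = proj_b @ proj_b.T                 # linear kernel matrix of view b, (n, n)
    h = np.eye(n) - np.ones((n, n)) / n   # centering matrix H = I - 11^T / n
    return float(np.trace(k @ h @ l @ h)) / (n - 1) ** 2

# Toy usage: two synthetic views generated from a shared latent factor.
rng = np.random.default_rng(0)
latent = rng.standard_normal((100, 5))
view_a = latent @ rng.standard_normal((5, 20)) + 0.1 * rng.standard_normal((100, 20))
view_b = latent @ rng.standard_normal((5, 30)) + 0.1 * rng.standard_normal((100, 30))
print(empirical_hsic(view_a, view_b))
```

Maximizing such a dependence measure between every pair of views is a common way to enforce consistency between the subspaces learned from different views.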
LIU Yu, MENG Min, WU Jigang. Semi-supervised Multi-view Classification via Consistency Constraints[J]. Journal of Frontiers of Computer Science and Technology, 2022, 16(1): 242-252.
Table 1 Symbol interpretation

Symbol | Meaning
---|---
 | Projection matrix of the v-th view
 | Sample matrix of the v-th view
 | Label matrix of the original data
 | Predicted label matrix
 | Similarity matrix
 | Laplacian matrix
 | Identity matrix
 | All-ones column vector
 | Column vector
 | Frobenius norm and 2-norm
 | Trace function
 | Rank function
 | Hyperparameters
 | Weights
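Relating to the similarity matrix and Laplacian matrix listed in Table 1, the sketch below builds a k-nearest-neighbor similarity graph and its Laplacian L = D - S, the kind of graph typically used for the manifold-preserving projection term described in the abstract; the Gaussian weighting, the neighborhood size k=5, and the helper name `knn_graph_laplacian` are assumptions for illustration, not the paper's exact construction.

```python
# Minimal sketch of a k-NN similarity matrix S and graph Laplacian L = D - S.
# Illustrative only: kernel width, neighborhood size and names are assumptions.
import numpy as np

def knn_graph_laplacian(x: np.ndarray, k: int = 5, sigma: float = 1.0):
    """Return (S, L) for samples x of shape (n_samples, n_features)."""
    n = x.shape[0]
    # Pairwise squared Euclidean distances.
    sq = np.sum(x ** 2, axis=1)
    dist2 = sq[:, None] + sq[None, :] - 2.0 * x @ x.T
    np.fill_diagonal(dist2, np.inf)          # exclude self-loops
    s = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(dist2[i])[:k]      # indices of the k nearest neighbors
        s[i, nbrs] = np.exp(-dist2[i, nbrs] / (2.0 * sigma ** 2))
    s = (s + s.T) / 2.0                      # symmetrize the similarity matrix
    d = np.diag(s.sum(axis=1))               # degree matrix
    return s, d - s                          # Laplacian L = D - S

s, lap = knn_graph_laplacian(np.random.default_rng(1).standard_normal((50, 8)))
```

Minimizing a trace term defined on such a Laplacian keeps samples that are close in the original space close after projection, which is the usual role of this graph in locality-preserving projections.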
Table 2 Performance (mean±standard deviation) of different algorithms on ORL and Yale databases

Classification accuracy/% under different labeled-sample ratios:

Method | ORL 10% | ORL 20% | ORL 30% | ORL 40% | Yale 10% | Yale 20% | Yale 30% | Yale 40%
---|---|---|---|---|---|---|---|---
LP(1) | 54.03±3.92 | 67.25±3.67 | 73.58±2.95 | 77.27±2.73 | 38.60±5.86 | 53.01±5.04 | 58.63±3.92 | 61.47±3.63
LP(2) | 70.89±2.34 | 83.18±2.42 | 88.60±3.21 | 93.10±2.21 | 45.27±8.72 | 63.12±6.78 | 72.22±4.40 | 73.73±4.15
LP(3) | 59.79±3.77 | 73.67±2.15 | 80.06±2.12 | 84.59±2.20 | 44.27±7.23 | 63.10±7.45 | 68.96±4.51 | 74.59±3.92
AMGL | 85.67±2.00 | 91.22±2.03 | 94.66±1.05 | 96.26±1.42 | 64.72±19.2 | 81.64±4.82 | 83.20±5.28 | 85.20±5.76
MVAR | 76.76±1.45 | 90.10±2.72 | 96.35±1.53 | 98.01±1.29 | 60.60±7.43 | 81.71±3.50 | 83.74±2.07 | 87.10±2.84
MLAN | 71.00±3.28 | 80.42±2.94 | 85.07±2.84 | 88.63±2.92 | 58.89±15.3 | 70.08±9.89 | 77.92±3.25 | 81.97±4.72
FISH-MML | 54.81±3.89 | 67.94±3.30 | 79.46±2.74 | 84.33±2.31 | 40.70±8.35 | 55.56±8.12 | 61.00±7.82 | 65.71±4.26
SMCC | 84.17±2.32 | 92.19±1.83 | 95.00±1.52 | 97.08±1.36 | 74.07±9.71 | 82.67±7.82 | 85.33±4.92 | 89.04±3.57
Table 3 Performance (mean±standard deviation) of different algorithms on MSRCv1 and HW databases

Classification accuracy/% under different labeled-sample ratios:

Method | MSRCv1 10% | MSRCv1 20% | MSRCv1 30% | MSRCv1 40% | HW 10% | HW 20% | HW 30% | HW 40%
---|---|---|---|---|---|---|---|---
LP(1) | 62.58±5.30 | 69.27±5.14 | 75.25±3.20 | 78.32±3.44 | 97.08±1.95 | 97.21±1.82 | 97.39±1.21 | 97.50±0.95
LP(2) | 62.95±6.65 | 73.29±4.85 | 79.32±4.54 | 82.97±2.14 | 80.22±2.19 | 80.33±1.90 | 80.38±2.07 | 80.93±1.25
LP(3) | 59.78±5.37 | 66.42±4.34 | 70.36±2.07 | 71.87±4.66 | 74.67±2.60 | 76.25±2.24 | 77.21±1.31 | 77.67±1.56
LP(4) | 60.22±9.99 | 68.91±4.46 | 73.92±3.73 | 75.73±3.41 | 67.89±2.28 | 69.31±1.95 | 69.71±2.52 | 71.17±2.14
LP(5) | n/a | n/a | n/a | n/a | 67.83±3.12 | 69.13±2.51 | 69.64±1.89 | 71.08±1.53
LP(6) | n/a | n/a | n/a | n/a | 43.89±2.34 | 47.31±1.50 | 47.93±1.02 | 50.92±2.36
AMGL | 83.60±3.20 | 88.12±2.80 | 89.56±1.40 | 90.96±1.20 | 90.65±1.63 | 93.45±1.95 | 95.11±1.70 | 96.00±1.26
MVAR | 86.46±3.74 | 90.76±1.51 | 91.47±1.71 | 93.38±1.94 | 85.84±2.39 | 88.97±1.68 | 90.28±1.37 | 91.09±0.88
MLAN | 83.89±2.72 | 88.75±2.70 | 89.69±1.78 | 91.07±1.72 | 97.59±1.03 | 97.88±0.81 | 97.89±0.58 | 98.05±0.62
FISH-MML | 77.78±3.32 | 84.05±1.91 | 86.26±1.52 | 87.38±1.28 | 93.01±0.62 | 94.69±1.15 | 95.56±1.30 | 96.78±1.08
SMCC | 89.42±2.97 | 91.31±2.35 | 92.65±1.82 | 93.49±1.75 | 97.61±1.32 | 98.06±0.95 | 98.24±0.65 | 98.35±0.57
[1] DALAL N, TRIGGS B, SCHMID C. Human detection using oriented histograms of flow and appearance[C]// LNCS 3952: Proceedings of the 9th European Conference on Computer Vision, Graz, May 7-13, 2006. Berlin, Heidelberg: Springer, 2006: 428-441.
[2] LOWE D G. Distinctive image features from scale-invariant keypoints[J]. International Journal of Computer Vision, 2004, 60(2): 91-110.
[3] OJALA T, PIETIKÄINEN M, MÄENPÄÄ T. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2002, 24(7): 971-987.
[4] LI S, FU Y. Robust subspace discovery through supervised low-rank constraints[C]// Proceedings of the 2014 SIAM International Conference on Data Mining, Philadelphia, Apr 24-26, 2014. Philadelphia: SIAM, 2014: 163-171.
[5] DING Z M, FU Y. Low-rank common subspace for multi-view learning[C]// Proceedings of the 2014 IEEE International Conference on Data Mining, Shenzhen, Dec 14-17, 2014. Washington: IEEE Computer Society, 2014: 110-119.
[6] DING Z M, FU Y. Robust multi-view subspace learning through dual low-rank decompositions[C]// Proceedings of the 30th AAAI Conference on Artificial Intelligence, Phoenix, Feb 12-17, 2016. Menlo Park: AAAI, 2016: 1181-1187.
[7] KAN M, SHAN S, ZHANG H, et al. Multi-view discriminant analysis[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 38(1): 188-194.
[8] ZHAO H, DING Z, FU Y. Multi-view clustering via deep matrix factorization[C]// Proceedings of the 31st AAAI Conference on Artificial Intelligence, San Francisco, Feb 4-9, 2017. Menlo Park: AAAI, 2017: 2921-2927.
[9] LU Y, LIU J X, KONG X Z, et al. A convex multi-view low-rank sparse regression for feature selection and clustering[C]// Proceedings of the 2017 IEEE International Conference on Bioinformatics and Biomedicine, Kansas City, Nov 13-16, 2017. Washington: IEEE Computer Society, 2017: 2183-2186.
[10] ZHONG J, WANG N, LIN Q, et al. Weighted feature selection via discriminative sparse multi-view learning[J]. Knowledge-Based Systems, 2019, 178: 132-148.
[11] CAI X, NIE F, HUANG H. Heterogeneous image features integration via multi-modal semi-supervised learning model[C]// Proceedings of the 2013 IEEE International Conference on Computer Vision, Sydney, Dec 1-8, 2013. Washington: IEEE Computer Society, 2013: 1737-1744.
[12] HAN L, WU F, JING X Y. Semi-supervised multi-view manifold discriminant intact space learning[J]. KSII Transactions on Internet & Information Systems, 2018, 12(9): 4317-4335.
[13] BO X, KANG Z, ZHAO Z, et al. Latent multi-view semi-supervised classification[C]// Proceedings of the 11th Asian Conference on Machine Learning, Nagoya, Nov 17-19, 2019: 348-362.
[14] LIU S, DING C, JIANG F, et al. Auto-weighted multi-view learning for semi-supervised graph clustering[J]. Neurocomputing, 2019, 362: 19-32.
[15] ZHANG L, ZHANG D. Visual understanding via multi-feature shared learning with global consistency[J]. IEEE Transactions on Multimedia, 2015, 18(2): 247-259.
[16] WANG X B, GUO X J, LEI Z, et al. Exclusivity-consistency regularized multi-view subspace clustering[C]// Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, Jul 21-26, 2017. Washington: IEEE Computer Society, 2017: 1-9.
[17] TAO Y S, YUAN H L, LAI C S, et al. Multi-view collaborative representation classification[C]// Proceedings of the 2019 International Conference on Machine Learning and Cybernetics, Kobe, Jul 7-10, 2019. Piscataway: IEEE, 2019: 1-6.
[18] ZHANG C Q, HU Q H, FU H Z, et al. Latent multi-view subspace clustering[C]// Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, Jul 21-26, 2017. Washington: IEEE Computer Society, 2017: 4333-4341.
[19] DING Z, FU Y. Robust multiview data analysis through collective low-rank subspace[J]. IEEE Transactions on Neural Networks and Learning Systems, 2017, 29(5): 1986-1997.
[20] ZHU X J, GHAHRAMANI Z, LAFFERTY J D. Semi-supervised learning using Gaussian fields and harmonic functions[C]// Proceedings of the 20th International Conference on Machine Learning, Washington, Aug 21-24, 2003. Menlo Park: AAAI, 2003: 912-919.
[21] NIE F P, XU D, TSANG I H, et al. Flexible manifold embedding: a framework for semi-supervised and unsupervised dimension reduction[J]. IEEE Transactions on Image Processing, 2010, 19(7): 1921-1932.
[22] NIE F P, LI J, LI X L. Parameter-free auto-weighted multiple graph learning: a framework for multiview clustering and semi-supervised classification[C]// Proceedings of the 25th International Joint Conference on Artificial Intelligence, New York, Jul 9-15, 2016. Menlo Park: AAAI, 2016: 1881-1887.
[23] NIE F P, CAI G H, LI J, et al. Auto-weighted multi-view learning for image clustering and semi-supervised classification[J]. IEEE Transactions on Image Processing, 2017, 27(3): 1501-1511.
[24] TAO H, HOU C P, NIE F P, et al. Scalable multi-view semi-supervised classification via adaptive regression[J]. IEEE Transactions on Image Processing, 2017, 26(9): 4283-4296.
[25] ZHANG C Q, LIU Y Q, LIU Y, et al. FISH-MML: Fisher-HSIC multi-view metric learning[C]// Proceedings of the 27th International Joint Conference on Artificial Intelligence, Stockholm, Jul 13-19, 2018: 3054-3060.
[26] BOYD S P, PARIKH N, CHU E, et al. Distributed optimization and statistical learning via the alternating direction method of multipliers[J]. Foundations and Trends in Machine Learning, 2011, 3(1): 1-122.
[27] HE X F, CAI D, YAN S C, et al. Neighborhood preserving embedding[C]// Proceedings of the 10th IEEE International Conference on Computer Vision, Beijing, Oct 17-20, 2005. Washington: IEEE Computer Society, 2005: 1208-1213.
[28] MOHAR B. The Laplacian spectrum of graphs[C]// Proceedings of the 6th Quadrennial International Conference on the Theory and Applications of Graphs, May 30-Jun 3, 1988. New York: John Wiley & Sons, Inc, 1991: 871-898.
[29] CHUNG F R K. Spectral graph theory (CBMS regional conference series in mathematics 92)[M]. New York: Cambridge University Press, 1997.
[30] GRETTON A, BOUSQUET O, SMOLA A J, et al. Measuring statistical dependence with Hilbert-Schmidt norms[C]// LNCS 3734: Proceedings of the 16th International Conference on Algorithmic Learning Theory, Singapore, Oct 8-11, 2005. Berlin, Heidelberg: Springer, 2005: 63-77.
[31] FAN K. On a theorem of Weyl concerning eigenvalues of linear transformations: II[J]. Proceedings of the National Academy of Sciences, 1950, 35(1): 652-655.
[32] LUO S R, ZHANG C Q, ZHANG W, et al. Consistent and specific multi-view subspace clustering[C]// Proceedings of the 32nd AAAI Conference on Artificial Intelligence, the 30th Innovative Applications of Artificial Intelligence, and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence, New Orleans, Feb 2-7, 2018. Menlo Park: AAAI, 2018: 3730-3737.
[33] ZHANG C Q, FU H Z, HU Q H, et al. Flexible multi-view dimensionality co-reduction[J]. IEEE Transactions on Image Processing, 2017, 26(2): 648-659.
[34] CAI X, NIE F P, HUANG H, et al. Heterogeneous image feature integration via multi-modal spectral clustering[C]// Proceedings of the 24th IEEE Conference on Computer Vision and Pattern Recognition, Colorado Springs, Jun 20-25, 2011. Washington: IEEE Computer Society, 2011: 1977-1984.