Journal of Frontiers of Computer Science and Technology ›› 2022, Vol. 16 ›› Issue (11): 2557-2564. DOI: 10.3778/j.issn.1673-9418.2104121

• Artificial Intelligence •

• Corresponding author: + E-mail: 2005gaocan@163.com

Dynamically Consistent and Confident Deep Semi-supervised Learning

LI Yong1,2, GAO Can1,2,+, LIU Zirong1,2, LUO Jintao1,2

  1. College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, Guangdong 518060, China
    2. Guangdong Key Laboratory of Intelligent Information Processing, Shenzhen, Guangdong 518060, China
  • Received:2021-05-08 Revised:2021-06-25 Online:2022-11-01 Published:2021-05-31
  • About author:LI Yong, born in 1996, M.S. candidate. His research interests include semi-supervised learning and computer vision.
    GAO Can, born in 1983, Ph.D., associate researcher, M.S. supervisor. His research interests include machine learning and computer vision.
    LIU Zirong, born in 1997, M.S. candidate. His research interests include machine learning and computer vision.
    LUO Jintao, born in 1997, M.S. candidate. His research interests include semi-supervised learning and computer vision.
  • Supported by:
    National Natural Science Foundation of China(61806127);National Natural Science Foundation of China(62076164);Project of Bureau of Education of Foshan(2019XJZZ05)


Abstract:

Deep semi-supervised learning methods based on consistency regularization and entropy minimization can effectively improve the performance of large-scale neural networks and reduce the need for labeled data. However, the regularization losses of existing consistency regularization methods consider neither the differences between samples nor the negative impact of incorrect predictions, while entropy minimization methods cannot flexibly adjust the prediction probability distribution. Firstly, to account for the differences between unlabeled samples and alleviate the negative impact of incorrect predictions, a new consistency loss function named dynamically weighted consistency regularization (DWCR) is proposed, which dynamically weights the consistency loss of unlabeled samples. Then, to further adjust the prediction probability distribution, a new loss function called self-confidence promotion loss (SCPL) is proposed, which can flexibly adjust the strength with which the model is encouraged to output low-entropy predictions, achieving low-density separation between classes and improving classification performance. Finally, a deep semi-supervised learning method named dynamic consistency and confidence (DCC) is proposed by combining DWCR, SCPL, and the supervised loss. Experiments on several datasets show that the proposed method achieves better classification performance than state-of-the-art deep semi-supervised learning methods.
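The abstract does not give the exact formulations of DWCR and SCPL, but the general structure it describes (a per-sample confidence weight on a consistency loss, a tunable penalty that rewards low-entropy predictions, and a weighted sum with the supervised loss) can be sketched as follows. All function names, the choice of confidence as the weighting signal, and the power-style penalty with exponent `gamma` are illustrative assumptions, not the paper's definitions:

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def weighted_consistency(p_weak, p_strong):
    # DWCR-style sketch: squared difference between predictions on two
    # augmentations of the same unlabeled sample, weighted per sample by
    # the confidence of the weak-augmentation prediction, so samples with
    # unreliable (likely wrong) predictions contribute less to the loss.
    w = p_weak.max(axis=1)                      # assumed confidence weight
    per_sample = ((p_weak - p_strong) ** 2).sum(axis=1)
    return float((w * per_sample).mean())

def confidence_promotion(p, gamma=2.0):
    # SCPL-style sketch: a penalty that is small for confident predictions
    # and large for near-uniform ones; gamma flexibly controls how strongly
    # low-entropy outputs are promoted (the adjustable strength the paper
    # describes).
    return float(((1.0 - p.max(axis=1)) ** gamma).mean())

def total_loss(sup_loss, p_weak, p_strong, lam=1.0, mu=0.5):
    # DCC-style combination of the three terms; lam and mu are assumed
    # trade-off coefficients.
    return (sup_loss
            + lam * weighted_consistency(p_weak, p_strong)
            + mu * confidence_promotion(p_weak))
```

Both auxiliary terms vanish in the ideal case: when the two augmented predictions agree, the consistency term is zero, and as a prediction approaches one-hot, the confidence-promotion term approaches zero, which is consistent with the low-density-separation goal stated above.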

Key words: deep semi-supervised learning, image classification, dynamically weighted consistency, confident prediction, low-density separation
