Journal of Frontiers of Computer Science and Technology ›› 2022, Vol. 16 ›› Issue (8): 1819-1928. DOI: 10.3778/j.issn.1673-9418.2101001

• Artificial Intelligence •

Ensemble Method of Diverse Regularized Extreme Learning Machines

CHEN Yang+, WANG Shitong2

  1. School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, Jiangsu 214122, China
    2. Key Laboratory of Media Design and Software Technology of Jiangsu Province, Jiangnan University, Wuxi, Jiangsu 214122, China
  • Received:2021-01-04 Revised:2021-03-02 Online:2022-08-01 Published:2021-03-25
  • About author:CHEN Yang, born in 1995 in Yangzhou, Jiangsu, M.S. candidate. Her research interests include machine learning and pattern recognition.
    WANG Shitong, born in 1964 in Yangzhou, Jiangsu, professor, Ph.D. supervisor, member of CCF. His research interests include artificial intelligence, pattern recognition, etc.
  • Supported by:
    the Natural Science Foundation of Jiangsu Province (BK20191331)

  • Corresponding author: +E-mail: 6191611002@stu.jiangnan.edu.cn

Abstract:

As a fast training algorithm for single-hidden-layer feedforward networks, the extreme learning machine (ELM) randomly initializes the input-layer weights and hidden-layer biases and obtains the output-layer weights analytically. It thereby overcomes many shortcomings of gradient-based learning algorithms, such as local minima, inappropriate learning rates and slow learning speed. However, ELM still inevitably suffers from overfitting and poor stability, especially on large-scale datasets. This paper proposes an ensemble method of diverse regularized extreme learning machines (DRELM) to solve these problems. First, each ELM base learner draws its input weights from its own random distribution, which ensures diversity among the base learners; the leave-one-out (LOO) cross-validation method and the MSE_PRESS criterion are then used to find the optimal number of hidden nodes for each base learner and to compute its optimal hidden-layer output weights, training base learners that are both accurate and different. Then, a new penalty term concerning diversity is explicitly added to the objective function, and the hidden-layer output weight matrix of each learner is updated iteratively. Finally, the final output of the whole network model is obtained by averaging the outputs of all base learners. This method effectively realizes an ensemble of regularized extreme learning machines (RELM) that takes both accuracy and diversity into account. Experimental results on 10 UCI datasets demonstrate the effectiveness of DRELM.
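To make the pipeline described above concrete, the following is a minimal NumPy sketch of one plausible reading of DRELM. Only the overall steps come from the abstract (per-learner random weight distributions, LOO model selection via the PRESS-based MSE, an explicit diversity penalty with iterative output-weight updates, and output averaging); the sigmoid activation, the negative-correlation-style penalty that pulls each learner's output away from the ensemble mean, and the names and hyperparameters (`grid`, `lam`, `mu`, `iters`, the per-learner `scale`) are illustrative assumptions, not the paper's exact formulation.

```python
# DRELM-style ensemble: a sketch reconstructed from the abstract alone.
# The penalty form and all hyperparameters below are assumptions, not the
# authors' implementation.
import numpy as np

rng = np.random.default_rng(0)

def hidden(X, W, b):
    # Sigmoid hidden-layer output matrix H (N x L)
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def press_mse(H, y, lam):
    # Leave-one-out MSE via the PRESS identity for ridge regression:
    # e_i^LOO = (y_i - yhat_i) / (1 - hat_ii), HAT = H (H'H + lam I)^{-1} H'
    A = np.linalg.solve(H.T @ H + lam * np.eye(H.shape[1]), H.T)
    hat_diag = np.einsum('ij,ji->i', H, A)
    resid = y - H @ (A @ y)
    return float(np.mean((resid / (1.0 - hat_diag)) ** 2))

def fit_drelm(X, y, n_learners=5, grid=(10, 20, 40, 80),
              lam=1e-2, mu=0.3, iters=5):
    N, d = X.shape
    learners = []
    for m in range(n_learners):
        scale = 0.5 + 0.5 * m  # each learner uses its own weight distribution
        # Pick the hidden-node count with the smallest LOO (PRESS) error
        best = None
        for L in grid:
            W = rng.normal(0.0, scale, (d, L))
            b = rng.normal(0.0, scale, L)
            err = press_mse(hidden(X, W, b), y, lam)
            if best is None or err < best[0]:
                best = (err, W, b)
        _, W, b = best
        H = hidden(X, W, b)
        beta = np.linalg.solve(H.T @ H + lam * np.eye(H.shape[1]), H.T @ y)
        learners.append([W, b, beta])
    # Iterative update with an explicit diversity penalty (assumed form):
    #   min_b ||H b - y||^2 + lam ||b||^2 - mu ||H b - f_bar||^2
    # whose stationary point is ((1 - mu) H'H + lam I) b = H'(y - mu f_bar).
    for _ in range(iters):
        f_bar = np.mean([hidden(X, W, b) @ beta
                         for W, b, beta in learners], axis=0)
        for lrn in learners:
            W, b, _ = lrn
            H = hidden(X, W, b)
            lhs = (1.0 - mu) * (H.T @ H) + lam * np.eye(H.shape[1])
            lrn[2] = np.linalg.solve(lhs, H.T @ (y - mu * f_bar))
    return learners

def predict(learners, X):
    # Final output: plain average over all base learners
    return np.mean([hidden(X, W, b) @ beta for W, b, beta in learners], axis=0)

if __name__ == "__main__":
    X = rng.normal(size=(200, 5))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
    model = fit_drelm(X, y)
    print("train MSE:", float(np.mean((predict(model, X) - y) ** 2)))
```

The PRESS identity is what makes the LOO search cheap: one regularized fit per candidate hidden-node count yields the exact leave-one-out error, so no data point ever has to be held out explicitly. Note that `mu` must stay below 1 for the penalized objective to remain well-posed.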

Key words: extreme learning machine (ELM), ensemble learning, diversity, regularized extreme learning machines (RELM)

