Journal of Frontiers of Computer Science and Technology ›› 2023, Vol. 17 ›› Issue (6): 1441-1452.DOI: 10.3778/j.issn.1673-9418.2112032

• Artificial Intelligence · Pattern Recognition •

Transfer Learning Boosting for Weight Optimization Under Multi-source Domain Distribution

LI Yunbo, WANG Shitong   

  1. School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, Jiangsu 214122, China
  2. Jiangsu Key Construction Laboratory of IoT Application Technology, Jiangnan University, Wuxi, Jiangsu 214122, China
  • Online: 2023-06-01   Published: 2023-06-01


Abstract: The deep decision tree transfer learning Boosting method (DTrBoost) can only adapt to training data from a single source domain and a single target domain, and cannot accommodate samples drawn from multiple source domains with different distributions. Moreover, DTrBoost transfers knowledge from the source domain to the target-domain model synchronously, without weighting the transferred knowledge by its importance. In practice, the data partitioned from a dataset according to one or more features often follow inconsistent distributions; these differently distributed partitions are not equally important to the final model, so the knowledge they transfer should not carry equal weight. To address this problem, a transfer learning method with multi-source-domain weight optimization is proposed. The main idea is to compute the KL divergence from each differently distributed source domain to the target domain, and to use the ratios of these divergences to derive per-source learning weights, thereby optimizing the overall gradient function so that learning proceeds in the direction of steepest gradient descent. The gradient descent algorithm allows the model to converge quickly, ensuring both the speed of learning and the effectiveness of transfer. Experimental results show that the proposed algorithm adaptively achieves better average performance overall: the average classification error rate across all adopted datasets decreases by 0.013, and by as much as 0.030 on the OCR dataset.
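The abstract does not give the exact weighting formula, but the core idea (weight each source domain by its KL divergence to the target domain, so closer distributions contribute more to the gradient) can be sketched as follows. The histogram-based KL estimator and the inverse-divergence normalization here are illustrative assumptions, not the paper's published method.

```python
import numpy as np

def kl_divergence(p_samples, q_samples, bins=20, eps=1e-10):
    """Rough histogram-based estimate of KL(P || Q) for 1-D samples."""
    lo = min(p_samples.min(), q_samples.min())
    hi = max(p_samples.max(), q_samples.max())
    p, _ = np.histogram(p_samples, bins=bins, range=(lo, hi))
    q, _ = np.histogram(q_samples, bins=bins, range=(lo, hi))
    p = p / p.sum() + eps  # smooth to avoid log(0) / division by zero
    q = q / q.sum() + eps
    return float(np.sum(p * np.log(p / q)))

def source_weights(source_domains, target, bins=20):
    """Assign each source domain a weight inversely proportional to its
    KL divergence from the target, so distributions closer to the target
    contribute more when the per-source gradients are combined."""
    kls = np.array([kl_divergence(s, target, bins) for s in source_domains])
    inv = 1.0 / (kls + 1e-10)
    return inv / inv.sum()  # weights sum to 1
```

For example, a source domain sampled near the target distribution would receive a larger weight than one sampled far from it, and those weights would then scale each source's contribution to the overall gradient update.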

Key words: deep decision tree transfer learning Boosting method (DTrBoost), multi-source domain transfer learning, KL divergence, decision tree
