Journal of Frontiers of Computer Science and Technology ›› 2012, Vol. 6 ›› Issue (5): 430-442. DOI: 10.3778/j.issn.1673-9418.2012.05.005

• Academic Research •


Relational-Tri-Training: Learning First-Order Rules Exploiting Unlabeled Data

LI Yanjuan1,2+, GUO Maozu1   

  1. School of Computer Science and Technology, Harbin Institute of Technology, Harbin 150001, China
    2. School of Information and Computer Engineering, Northeast Forestry University, Harbin 150040, China
  • Online: 2012-05-01   Published: 2012-05-09


Abstract: Current inductive logic programming (ILP) systems require sufficient labeled training data and cannot make use of unlabeled data. To address this limitation, this paper proposes an algorithm for learning first-order rules from unlabeled data, named relational-tri-training (R-tri-training). The algorithm carries the idea of tri-training, a semi-supervised learning algorithm based on propositional logic representation, into ILP systems based on first-order logic representation, and investigates how unlabeled examples can assist classifier training within the ILP framework. R-tri-training first initializes three different ILP systems from the labeled data and the background knowledge, and then iteratively refines the three classifiers with unlabeled examples: if two classifiers agree on the label of an unlabeled example, then, under certain conditions, the example is labeled and added to the training set of the third classifier. Experimental results on standard benchmarks show that R-tri-training can effectively exploit unlabeled data to improve learning performance, and that it outperforms GILP (genetic inductive logic programming), NFOIL, KFOIL and ALEPH.
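The tri-training loop described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the `RuleLearner` class is a hypothetical stand-in for an ILP system (a toy threshold learner on a 1-D feature), the three learners are diversified by bootstrap sampling, and the error-rate condition that tri-training uses to gate pseudo-labeling ("under certain conditions" in the abstract) is omitted for brevity.

```python
import random


class RuleLearner:
    """Hypothetical stand-in for an ILP system (e.g. ALEPH).

    Instead of inducing first-order rules, it learns a single
    threshold rule "x > t" on a 1-D feature -- a toy proxy that
    keeps the sketch self-contained and runnable.
    """

    def fit(self, X, y):
        # Choose the threshold among observed values that minimizes
        # training error (a crude analogue of rule search).
        best_t, best_err = X[0], 2.0
        for t in X:
            err = sum((x > t) != label for x, label in zip(X, y)) / len(X)
            if err < best_err:
                best_t, best_err = t, err
        self.t = best_t
        return self

    def predict(self, x):
        return x > self.t


def r_tri_training(labeled, unlabeled, rounds=5, seed=0):
    """Sketch of the R-tri-training loop: three learners, each refined
    with unlabeled examples that its two peers agree on."""
    rng = random.Random(seed)
    # Initialize three diverse learners from bootstrap samples of the
    # labeled data (the paper initializes three different ILP systems
    # from labeled data and background knowledge).
    views = [[labeled[rng.randrange(len(labeled))] for _ in labeled]
             for _ in range(3)]
    clfs = [RuleLearner().fit([x for x, _ in v], [y for _, y in v])
            for v in views]
    for _ in range(rounds):
        for i in range(3):
            j, k = [m for m in range(3) if m != i]
            # Unlabeled examples on which the two peer classifiers
            # agree become pseudo-labeled training data for learner i.
            pseudo = [(x, clfs[j].predict(x)) for x in unlabeled
                      if clfs[j].predict(x) == clfs[k].predict(x)]
            if pseudo:
                data = views[i] + pseudo
                clfs[i] = RuleLearner().fit([x for x, _ in data],
                                            [y for _, y in data])
    # Final prediction by majority vote of the three refined learners.
    def predict(x):
        return sum(c.predict(x) for c in clfs) >= 2
    return predict
```

Usage on a toy dataset: `r_tri_training([(0.1, False), (0.2, False), (0.8, True), (0.9, True)], [0.15, 0.85, 0.3, 0.7])` returns a majority-vote predictor over the three refined learners.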

Key words: machine learning, inductive logic programming (ILP), relational-tri-training, probably approximately correct (PAC) learnability