Journal of Frontiers of Computer Science and Technology ›› 2016, Vol. 10 ›› Issue (6): 891-900. DOI: 10.3778/j.issn.1673-9418.1508071

Improved Teaching-Learning Based Optimization Algorithm with Conjugate Gradient Methods and Second Study

WANG Peichong1,2+, PENG Feifei3, QIAN Xu2   

  1. School of Information Engineering, Hebei Dizhi University, Shijiazhuang 050031, China
    2. School of Mechanical Electronic and Information Engineering, China University of Mining and Technology, Beijing 100083, China
    3. Library, Beijing University of Posts and Telecommunications, Beijing 100083, China
  • Online: 2016-06-01    Published: 2016-06-07


Abstract: The teaching-learning based optimization (TLBO) algorithm solves complex optimization problems by simulating the teaching and learning behavior of a class, and has been applied in many fields. To overcome its tendency toward premature convergence and its low solution precision, this paper proposes an improved TLBO that hybridizes chaotic initialization with the conjugate gradient method. The improved algorithm initializes its population with the Chebyshev chaotic map to improve the coverage of the solution space. To maintain population diversity, a dynamic learning coefficient is introduced so that students learn mainly from the teacher in the early stage, while the influence of each student's own knowledge on its evolution grows gradually. In each iteration, after the "teaching" and "learning" phases, the teacher individual performs a conjugate gradient search. If the states of the worst student individuals remain unchanged for a long time, a second study based on opposition-based learning (OBL) and Gaussian learning is applied to them. Finally, experiments on typical test functions show that the improved algorithm achieves higher solution precision and better global convergence than TLBO and related algorithms, and is well suited to higher-dimensional function optimization problems.
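The abstract above lists the algorithm's main components. The following Python sketch illustrates how they could fit together; it is a minimal illustration under stated assumptions, not the authors' implementation. The sphere objective, the Chebyshev map order, the linear schedule of the dynamic learning coefficient, the use of SciPy's conjugate gradient routine as a stand-in for the paper's conjugate gradient search, the stagnation threshold, and all parameter values are illustrative assumptions, since the exact formulas are not given in the abstract.

# Minimal sketch of the improved TLBO described in the abstract: Chebyshev chaotic
# initialization, a dynamic learning coefficient, conjugate gradient refinement of
# the teacher, and OBL/Gaussian "second study" for stagnant poor learners.
# All parameter choices below are illustrative assumptions, not the paper's settings.
import numpy as np
from scipy.optimize import minimize


def sphere(x):
    # simple benchmark objective, used only for illustration
    return float(np.sum(x ** 2))


def chebyshev_init(pop, dim, lb, ub, order=4, seed=1):
    # Chebyshev chaotic map x_{k+1} = cos(order * arccos(x_k)), iterated a few
    # times and then scaled from [-1, 1] onto the search bounds
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, size=(pop, dim))
    for _ in range(50):
        x = np.cos(order * np.arccos(np.clip(x, -1.0, 1.0)))
    return lb + (x + 1.0) / 2.0 * (ub - lb)


def improved_tlbo(f, dim=10, pop=30, iters=200, lb=-100.0, ub=100.0, stall_limit=10):
    X = chebyshev_init(pop, dim, lb, ub)
    fit = np.array([f(x) for x in X])
    stall = np.zeros(pop, dtype=int)          # iterations without improvement, per learner
    rng = np.random.default_rng(0)

    for t in range(iters):
        w = 1.0 - t / iters                   # assumed dynamic learning coefficient:
                                              # teacher influence decays as iterations proceed
        teacher = X[int(np.argmin(fit))]
        mean = X.mean(axis=0)

        for i in range(pop):
            TF = rng.integers(1, 3)           # teaching factor in {1, 2}
            # teacher-phase and learner-phase moves, folded into one candidate for brevity
            cand = X[i] + w * rng.random(dim) * (teacher - TF * mean)
            j = int(rng.integers(pop))        # learn from a random classmate j
            step = (X[j] - X[i]) if fit[j] < fit[i] else (X[i] - X[j])
            cand = np.clip(cand + rng.random(dim) * step, lb, ub)
            fc = f(cand)
            if fc < fit[i]:                   # greedy selection
                X[i], fit[i], stall[i] = cand, fc, 0
            else:
                stall[i] += 1

        # conjugate gradient refinement of the teacher
        # (SciPy's CG routine used here as a stand-in for the paper's CG search)
        b = int(np.argmin(fit))
        res = minimize(f, X[b], method="CG", options={"maxiter": 5})
        xr = np.clip(res.x, lb, ub)
        fr = f(xr)
        if fr < fit[b]:
            X[b], fit[b] = xr, fr

        # "second study" for learners stagnant beyond the threshold:
        # opposition-based learning and Gaussian learning, keeping the better move
        for i in np.where(stall >= stall_limit)[0]:
            opp = np.clip(lb + ub - X[i], lb, ub)
            gau = np.clip(X[i] * (1.0 + rng.normal(0.0, 1.0, dim)), lb, ub)
            cand = opp if f(opp) < f(gau) else gau
            fc = f(cand)
            if fc < fit[i]:
                X[i], fit[i] = cand, fc
            stall[i] = 0

    b = int(np.argmin(fit))
    return X[b], fit[b]


if __name__ == "__main__":
    x_best, f_best = improved_tlbo(sphere)
    print("best value:", f_best)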

Key words: teaching-learning based optimization, Chebyshev mapping, dynamic self-adaptive learning, conjugate gradient method, second study
