Journal of Frontiers of Computer Science and Technology ›› 2024, Vol. 18 ›› Issue (2): 320-344. DOI: 10.3778/j.issn.1673-9418.2310092

• Frontiers · Surveys •


Survey on Visual Transformer for Image Classification

PENG Bin, BAI Jing, LI Wenjing, ZHENG Hu, MA Xiangyu   

1. School of Computer Science and Engineering, North Minzu University, Yinchuan 750021, China
    2. Key Laboratory for Intelligent Processing of Computer Images and Graphics of National Ethnic Affairs Commission of the PRC, Yinchuan 750021, China
• Online: 2024-02-01  Published: 2024-02-01



Abstract: Transformer is a deep learning model based on the self-attention mechanism and has shown tremendous potential in computer vision. In image classification, the key challenge is to capture both the local and the global features of an input image efficiently and accurately. Traditional approaches rely on convolutional neural networks, extracting local features in the lower layers and expanding the receptive field through stacked convolutional layers to obtain global features. However, this strategy aggregates information over relatively short distances and therefore struggles to model long-range dependencies. In contrast, the self-attention mechanism of Transformer directly compares features across all spatial positions, capturing local and global long-range dependencies and exhibiting stronger global modeling capability. A thorough exploration of the issues Transformer faces in image classification is therefore warranted. Taking Vision Transformer as an example, this paper first presents the core principles and architecture of Transformer in detail. It then takes image classification as the entry point and summarizes the key issues and recent advances in visual Transformer research around three important aspects: performance enhancement, computational cost, and training optimization. Furthermore, applications of Transformer in specific domains such as medical images, remote sensing images, and agricultural images are summarized, demonstrating its versatility and generality. Finally, through a comprehensive analysis of the research progress of visual Transformer in image classification, future directions for its development are outlined.
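The self-attention mechanism summarized above — every spatial position directly compared against every other — can be sketched in a few lines. The following is a minimal, single-head, NumPy-only illustration (not the full multi-head Vision Transformer block described in the paper); the token count, embedding dimension, and weight initialization are illustrative assumptions:

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over a sequence of patch tokens.

    x: (n, d) array of n patch embeddings of dimension d.
    Each token attends to all n positions, so dependencies are
    captured regardless of spatial distance between patches.
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])          # (n, n) pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over all positions
    return weights @ v                               # (n, d) context-mixed tokens

# Illustrative setup: 196 tokens, as from a 224x224 image cut into 16x16 patches
rng = np.random.default_rng(0)
n, d = 196, 64
x = rng.standard_normal((n, d))
w_q, w_k, w_v = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)
print(out.shape)  # (196, 64)
```

Because the (n, n) score matrix spans all patch pairs, distant image regions interact in a single layer — the global modeling property the abstract contrasts with stacked convolutions, and also the source of the quadratic computational cost that the surveyed efficiency work addresses.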

Key words: deep learning, Vision Transformer, network structure, image classification, self-attention mechanism