Journal of Frontiers of Computer Science and Technology ›› 2024, Vol. 18 ›› Issue (8): 2034-2048. DOI: 10.3778/j.issn.1673-9418.2306078

• Theory·Algorithm •

Tensor Completion Using Self-Adaptive Transforms and Non-convex Relaxation

LIU Jiahui, ZHU Yulian   

  1. College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
  2. Fundamental Experimental Teaching Department, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
  • Online: 2024-08-01     Published: 2024-07-29


Abstract: Many tensor completion methods share a common strategy: the tensor is first projected into a transformed domain by a pre-defined transform, and the low-rankness or sparsity of the tensor in the transformed domain (referred to as the transformed tensor) is then characterized. However, a pre-defined transform lacks generality. To address this problem, a new tensor average rank under self-adaptive transforms is first proposed, in which the transformed tensor is obtained through successive iterations; this newly defined tensor average rank extends the tensor average rank under invertible linear transforms. On this basis, a tensor completion model combining self-adaptive transforms with non-convex relaxation is proposed. Self-adaptation means that the transformed tensor is itself an unknown to be solved: it is continually adjusted according to the observed tensor during the minimization of the objective function, until it becomes the optimal solution of the objective. The model approximates the tensor average rank under self-adaptive transforms with a non-convex surrogate and measures the sparsity of the transformed tensor with the l1 norm. While solving for the optimal solution within the proximal alternating minimization framework, the model adaptively learns a transformed low-rank tensor and a transformed sparse tensor from the observed tensor, maps each back to the original space through the learned transform matrices, and finally obtains the completed tensor. Experiments on grey-scale videos, multispectral images and hyperspectral images demonstrate that the proposed method further improves completion performance compared with other representative tensor completion methods.
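To make the procedure described in the abstract concrete, the following is a minimal NumPy sketch of an alternating scheme in the spirit of proximal alternating minimization. It is not the authors' exact algorithm: the orthogonal mode-3 transform updated by a Procrustes step, the weighted singular-value shrinkage standing in for a non-convex rank surrogate, the l1 soft-thresholding of the sparse part, and all helper names and parameters (mode3_product, prox_nonconvex_rank, pam_tensor_completion, lam, tau) are illustrative assumptions made for this sketch.

# Illustrative sketch only: a simplified alternating scheme in the spirit of
# proximal alternating minimization for tensor completion with a learned
# ("self-adaptive") mode-3 transform, a non-convex low-rank surrogate, and an
# l1-sparse component. Parameters and structure are assumptions, not the
# paper's exact formulation.
import numpy as np

def mode3_product(T, U):
    """Apply matrix U along the third mode of a 3-way tensor T."""
    n1, n2, n3 = T.shape
    return (T.reshape(n1 * n2, n3) @ U.T).reshape(n1, n2, U.shape[0])

def prox_nonconvex_rank(Z, tau, eps=1e-3):
    """Slice-wise weighted singular-value shrinkage, one common way to
    approximate the proximal step of a non-convex rank surrogate."""
    out = np.empty_like(Z)
    for k in range(Z.shape[2]):
        U, s, Vt = np.linalg.svd(Z[:, :, k], full_matrices=False)
        w = 1.0 / (s + eps)              # smaller singular values are shrunk more
        s = np.maximum(s - tau * w, 0.0)
        out[:, :, k] = (U * s) @ Vt
    return out

def soft_threshold(Z, tau):
    """Proximal operator of the l1 norm (element-wise soft-thresholding)."""
    return np.sign(Z) * np.maximum(np.abs(Z) - tau, 0.0)

def pam_tensor_completion(X_obs, mask, lam=0.05, tau=0.5, n_iter=50):
    """X_obs: observed tensor (zeros at missing entries); mask: boolean tensor."""
    n1, n2, n3 = X_obs.shape
    Q = np.eye(n3)                       # learned (orthogonal) transform matrix
    X = X_obs.copy()                     # current completed tensor
    for _ in range(n_iter):
        Xt = mode3_product(X, Q)         # project into the transformed domain
        L = prox_nonconvex_rank(Xt, tau)         # transformed low-rank tensor
        S = soft_threshold(Xt - L, lam)          # transformed sparse tensor
        # Update the transform by an orthogonal Procrustes step so that
        # X projected by Q stays close to L + S.
        A = X.reshape(n1 * n2, n3)
        B = (L + S).reshape(n1 * n2, n3)
        Uq, _, Vtq = np.linalg.svd(B.T @ A, full_matrices=False)
        Q = Uq @ Vtq
        # Map back to the original space and keep the observed entries fixed.
        X_rec = mode3_product(L + S, Q.T)
        X = np.where(mask, X_obs, X_rec)
    return X

Under these assumptions, calling pam_tensor_completion(X_obs, mask) on a partially observed 3-way array returns an estimate of the full tensor; in practice the surrogate, the structure of the learned transform, and the stopping rule would follow the paper's actual formulation.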

Key words: self-adaptive transforms, non-convex relaxation, proximal alternating minimization, tensor completion
