Nonconvex Nonlinear Transformation of Low-Rank Approximation for Tensor Completion

Bibliographic Details
Main Authors: Yifan Mei, Xinhua Su, Huixiang Lin, Huanmin Ge
Format: Article
Language: English
Published: MDPI AG, 2024-12-01
Series: Applied Sciences
Subjects:
Online Access: https://www.mdpi.com/2076-3417/14/24/11895
Description
Summary: Recovering incomplete high-dimensional data to produce complete, usable datasets is the central aim of tensor completion, a research area at the intersection of mathematics and information science. Researchers typically apply linear or nonlinear transformations to the original tensor and enforce low-rank structure with regularization terms such as the tensor nuclear norm. However, relying solely on the tensor nuclear norm can lead to suboptimal solutions, because it is only a convex relaxation of the tensor rank and can deviate from the true low-rank solution. To address these issues, we introduce the low-rank approximation nonconvex nonlinear transformation (LRANNT) method. By combining nonconvex norms with nonlinear transformations, LRANNT captures the intrinsic structure of tensors more accurately, providing a more effective solution to the tensor completion problem. We also propose a proximal alternating minimization (PAM) algorithm to solve the model and demonstrate its convergence. Experiments on publicly available datasets show that our method outperforms current state-of-the-art approaches, even under extreme conditions with missing rates of up to 97.5%.
ISSN: 2076-3417
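As background for the nuclear-norm baseline the abstract critiques (not the authors' LRANNT method, whose details are not given in this record), the following is a minimal sketch of low-rank completion via singular value thresholding, the proximal operator of the nuclear norm, shown here in the matrix case. All function names and parameters are illustrative assumptions.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s = np.maximum(s - tau, 0.0)          # soft-threshold the singular values
    return (U * s) @ Vt

def complete(M, mask, tau=0.1, n_iter=200):
    """Fill missing entries of M (where mask is False) with a low-rank estimate."""
    X = np.where(mask, M, 0.0)
    for _ in range(n_iter):
        # re-impose observed entries, then shrink toward low rank
        X = svt(np.where(mask, M, X), tau)
    return X

# demo: a rank-1 matrix with ~30% of entries missing (synthetic data)
rng = np.random.default_rng(0)
u, v = rng.standard_normal(8), rng.standard_normal(8)
M = np.outer(u, v)
mask = rng.random(M.shape) > 0.3
X = complete(M, mask)
```

The convex relaxation criticized in the abstract is visible here: `svt` shrinks *all* singular values by `tau`, which biases even the large, informative ones; nonconvex surrogates penalize large singular values less, which is the kind of gap LRANNT targets.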