Deep representation learning using layer-wise VICReg losses

Abstract: This paper presents a layer-wise training procedure for neural networks that minimizes a Variance-Invariance-Covariance Regularization (VICReg) loss at each layer. The procedure is beneficial when annotated data are scarce but sufficient unlabeled data are available. Being able to update the parame...
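For readers unfamiliar with the loss named in the abstract, the following is a minimal sketch of a generic VICReg objective in PyTorch, as it could be applied to the embeddings produced by a single layer for two augmented views of the same inputs. The weighting coefficients and the helper function name are illustrative assumptions, not the authors' implementation or configuration.

# Minimal sketch of the VICReg loss, assuming PyTorch.
# Coefficients and layer-wise usage are illustrative, not the paper's exact setup.
import torch
import torch.nn.functional as F

def vicreg_loss(z_a, z_b, lambda_inv=25.0, mu_var=25.0, nu_cov=1.0, eps=1e-4):
    """VICReg loss between two batches of embeddings z_a, z_b of shape
    (batch_size, dim), e.g. one layer's outputs for two views of the inputs."""
    n, d = z_a.shape

    # Invariance term: mean-squared error between the two views' embeddings.
    inv_loss = F.mse_loss(z_a, z_b)

    # Variance term: hinge loss keeping each embedding dimension's std above 1.
    std_a = torch.sqrt(z_a.var(dim=0) + eps)
    std_b = torch.sqrt(z_b.var(dim=0) + eps)
    var_loss = torch.mean(F.relu(1.0 - std_a)) + torch.mean(F.relu(1.0 - std_b))

    # Covariance term: penalize off-diagonal entries of each covariance matrix.
    z_a_c = z_a - z_a.mean(dim=0)
    z_b_c = z_b - z_b.mean(dim=0)
    cov_a = (z_a_c.T @ z_a_c) / (n - 1)
    cov_b = (z_b_c.T @ z_b_c) / (n - 1)
    off_diag = lambda m: m.pow(2).sum() - m.pow(2).diagonal().sum()
    cov_loss = off_diag(cov_a) / d + off_diag(cov_b) / d

    return lambda_inv * inv_loss + mu_var * var_loss + nu_cov * cov_loss

In a layer-wise scheme of the kind the abstract describes, such a loss would be evaluated on each layer's representations and used to update that layer's parameters; the exact training schedule is detailed in the article itself.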


Bibliographic Details
Main Authors: Joy Datta, Rawhatur Rabbi, Puja Saha, Aniqua Nusrat Zereen, M. Abdullah-Al-Wadud, Jia Uddin
Format: Article
Language: English
Published: Nature Portfolio 2025-07-01
Series: Scientific Reports
Online Access: https://doi.org/10.1038/s41598-025-08504-2