Knowledge Distillation for Face Recognition Using Synthetic Data With Dynamic Latent Sampling
State-of-the-art face recognition models are too computationally expensive for mobile applications. Training lightweight face recognition models also requires large identity-labeled datasets, which raises privacy and ethical concerns. Generating synthetic datasets for training is likewise challenging, and there...
| Field | Value |
|---|---|
| Main Authors | Hatef Otroshi Shahreza, Anjith George, Sebastien Marcel |
| Format | Article |
| Language | English |
| Published | IEEE, 2024-01-01 |
| Series | IEEE Access |
| Subjects | |
| Online Access | https://ieeexplore.ieee.org/document/10766575/ |
Similar Items
- Leveraging Lightweight Hybrid Ensemble Distillation (HED) for Suspect Identification With Face Recognition
  by: Vaishnavi Munusamy, et al.
  Published: (2025-01-01)
- Transferability analysis of adversarial attacks on gender classification to face recognition: Fixed and variable attack perturbation
  by: Zohra Rezgui, et al.
  Published: (2022-09-01)
- Evaluating the Impact of Face Anonymization Methods on Computer Vision Tasks: A Trade-Off Between Privacy and Utility
  by: Roland Stenger, et al.
  Published: (2025-01-01)
- Historical Blurry Video-Based Face Recognition
  by: Lujun Zhai, et al.
  Published: (2024-09-01)
- JEP-KD: Joint-Embedding Predictive Architecture Based Knowledge Distillation for Visual Speech Recognition
  by: Chang Sun, et al.
  Published: (2024-01-01)