Knowledge Distillation for Molecular Property Prediction: A Scalability Analysis
Abstract

Knowledge distillation (KD) is a powerful model compression technique that transfers knowledge from complex teacher models to compact student models, reducing computational costs while preserving predictive accuracy. This study investigated KD's efficacy in molecular property prediction...
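The abstract describes the standard teacher–student KD setup. Purely as an illustrative sketch, not the authors' implementation: for a regression-style molecular property target, the core objective is commonly written as a weighted sum of label supervision and a term matching a frozen teacher's predictions. The function name `distillation_loss`, the weight `alpha`, and the MSE formulation below are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_pred: torch.Tensor,
                      teacher_pred: torch.Tensor,
                      target: torch.Tensor,
                      alpha: float = 0.5) -> torch.Tensor:
    """Hypothetical KD objective for a scalar regression target.

    alpha balances supervision from the ground-truth labels against
    matching the (frozen) teacher's predictions.
    """
    hard = F.mse_loss(student_pred, target)        # fit the true labels
    soft = F.mse_loss(student_pred, teacher_pred)  # mimic the teacher
    return alpha * hard + (1.0 - alpha) * soft

# Toy usage: a batch of 8 molecules, one scalar property each.
s = torch.randn(8, 1, requires_grad=True)  # student outputs
t = torch.randn(8, 1)                      # frozen teacher outputs
y = torch.randn(8, 1)                      # ground-truth property values
loss = distillation_loss(s, t, y)
loss.backward()
```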
| Main Authors: | Rahul Sheshanarayana, Fengqi You |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Wiley, 2025-06-01 |
| Series: | Advanced Science |
| Online Access: | https://doi.org/10.1002/advs.202503271 |
Similar Items
- Distilling knowledge from graph neural networks trained on cell graphs to non-neural student models
  by: Vasundhara Acharya, et al.
  Published: (2025-08-01)
- Dynamic subgraph pruning and causal-aware knowledge distillation for temporal knowledge graphs
  by: Qian Liu, et al.
  Published: (2025-07-01)
- Synthesis of the thermally coupled distillation sequences
  by: E. A. Anokhina, et al.
  Published: (2017-12-01)
- Code summarization based on large model knowledge distillation
  by: YOU Gang, LIU Wenjie, LI Meipeng, SUN Liqun, WANG Lian, TIAN Tieku
  Published: (2025-08-01)
- Real-time aerial fire detection on resource-constrained devices using knowledge distillation
  by: Sabina Jangirova, et al.
  Published: (2025-08-01)