Unsupervised evaluation of pre-trained DNA language model embeddings
| Main Authors: | , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | BMC, 2025-08-01 |
| Series: | BMC Genomics |
| Subjects: | |
| Online Access: | https://doi.org/10.1186/s12864-025-11913-2 |
| ISSN: | 1471-2164 |
**Summary:**

**Background.** DNA Language Models (DLMs) have generated considerable hope, and hype, around solving complex genetics tasks. These models have demonstrated remarkable performance on tasks such as gene finding, enhancer annotation, and histone modification prediction. However, they have struggled with tasks such as learning individual-level personal transcriptome variation, highlighting the need for robust evaluation approaches. Current evaluation approaches assess models on batteries of downstream tasks, which is computationally demanding and fails to measure their ability to learn as generalist agents.

**Results.** We propose a framework for evaluating DLM embeddings using the unsupervised, numerical-linear-algebra-based metrics RankMe, NESum, and StableRank. Embeddings were generated from six state-of-the-art DLMs: Nucleotide Transformer, DNA-BERT2, HyenaDNA, MistralDNA, GENA-LM, and GROVER, across multiple genomic benchmark datasets. Our analysis revealed several key insights. First, low pairwise Pearson correlations and the limited variance captured by the top principal components suggest that DLM embeddings are high-dimensional and non-redundant. Second, GENA-LM frequently demonstrated strong performance across all unsupervised evaluation metrics, often outperforming the other models. Third, while all models performed well on supervised classification tasks, GENA-LM achieved the highest accuracy and F1 scores across most datasets. Importantly, we observed a positive correlation between the unsupervised metrics and supervised performance, supporting the utility of unsupervised metrics as effective proxies for model quality assessment.

**Conclusion.** This study introduces a computationally efficient framework for evaluating DLMs. Our results show that GENA-LM, DNA-BERT2, and Nucleotide Transformer frequently outperform HyenaDNA and MistralDNA in both unsupervised and supervised evaluations. Moreover, the observed positive correlations between unsupervised metrics and downstream classification performance highlight the potential of these metrics as effective proxies for assessing model quality.
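The abstract names three spectral metrics without defining them. The sketch below is a minimal illustration, not the authors' released code: it implements RankMe, NESum, and StableRank under the definitions common in the representation-learning literature (RankMe as the exponential of the entropy of the normalized singular values, NESum as the covariance eigenvalue sum normalized by the largest eigenvalue, StableRank as the squared Frobenius norm over the squared spectral norm), plus the pairwise-Pearson and top-PC variance checks mentioned in the Results. The function names, the toy embedding matrix, and the normalization constants are assumptions for illustration; the paper's exact formulations may differ in detail.

```python
import numpy as np

def rankme(Z: np.ndarray, eps: float = 1e-7) -> float:
    """RankMe: exp of the Shannon entropy of the normalized singular values
    of the (centered) embedding matrix Z with shape (n_samples, dim)."""
    s = np.linalg.svd(Z, compute_uv=False)          # descending order
    p = s / (s.sum() + eps) + eps
    return float(np.exp(-np.sum(p * np.log(p))))

def nesum(Z: np.ndarray, eps: float = 1e-7) -> float:
    """NESum: sum of covariance eigenvalues divided by the largest one."""
    lam = np.linalg.eigvalsh(np.cov(Z, rowvar=False))
    lam = np.clip(lam, 0.0, None)                   # clamp numerical negatives
    return float(lam.sum() / (lam.max() + eps))

def stable_rank(Z: np.ndarray, eps: float = 1e-7) -> float:
    """StableRank: ||Z||_F^2 / ||Z||_2^2 = sum(sigma_i^2) / sigma_max^2."""
    s = np.linalg.svd(Z, compute_uv=False)
    return float(np.sum(s ** 2) / (s[0] ** 2 + eps))

def mean_abs_pearson(Z: np.ndarray) -> float:
    """Mean absolute off-diagonal Pearson correlation between embedding
    dimensions; low values indicate non-redundant dimensions."""
    C = np.corrcoef(Z, rowvar=False)
    d = C.shape[0]
    return float((np.abs(C).sum() - d) / (d * (d - 1)))

def top_k_variance_ratio(Z: np.ndarray, k: int = 10) -> float:
    """Fraction of total variance captured by the top-k principal components."""
    lam = np.linalg.eigvalsh(np.cov(Z, rowvar=False))[::-1]
    return float(lam[:k].sum() / lam.sum())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in for DLM embeddings, e.g. 1,000 sequences x 768 dimensions.
    Z = rng.standard_normal((1000, 768))
    Z = Z - Z.mean(axis=0)                          # center before taking spectra
    print(f"RankMe          : {rankme(Z):8.1f}")
    print(f"NESum           : {nesum(Z):8.1f}")
    print(f"StableRank      : {stable_rank(Z):8.1f}")
    print(f"mean |Pearson|  : {mean_abs_pearson(Z):8.3f}")
    print(f"top-10 var ratio: {top_k_variance_ratio(Z):8.3f}")
```

Under these definitions, all three metrics increase with the effective dimensionality of the embedding spectrum, while the redundancy checks stay low for non-redundant dimensions, which is consistent with the abstract's claim that such supervision-free quantities can proxy for downstream classification quality.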