multiDGD: A versatile deep generative model for multi-omics data

Bibliographic Details
Main Authors: Viktoria Schuster, Emma Dann, Anders Krogh, Sarah A. Teichmann
Format: Article
Language: English
Published: Nature Portfolio, 2024-11-01
Series: Nature Communications
Online Access: https://doi.org/10.1038/s41467-024-53340-z
Description
Summary: Recent technological advancements in single-cell genomics have enabled joint profiling of gene expression and alternative modalities at unprecedented scale. Consequently, the complexity of multi-omics data sets is increasing massively. Existing models for multi-modal data are typically limited in functionality or scalability, making data integration and downstream analysis cumbersome. We present multiDGD, a scalable deep generative model providing a probabilistic framework to learn shared representations of transcriptome and chromatin accessibility. It shows outstanding performance on data reconstruction without feature selection. We demonstrate on several data sets from human and mouse that multiDGD learns well-clustered joint representations. We further find that probabilistic modeling of sample covariates enables post-hoc data integration without the need for fine-tuning. Additionally, we show that multiDGD can detect statistical associations between genes and regulatory regions conditioned on the learned representations. multiDGD is available as an scverse-compatible package on GitHub.
ISSN: 2041-1723
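
The abstract states that multiDGD is distributed as an scverse-compatible package. The sketch below is a rough illustration only: loading paired RNA and ATAC data follows the real scverse conventions (MuData/AnnData), but the multidgd module name, DGD class, covariate_keys argument, train method, and get_representation accessor are hypothetical placeholders, not the package's documented interface.

```python
# Minimal usage sketch under assumed names; see the package's GitHub README
# for the actual API.
import mudata as md

# Paired single-cell multi-omics data with "rna" and "atac" modalities
# stored in one MuData object (hypothetical file path).
mdata = md.read_h5mu("paired_rna_atac.h5mu")

import multidgd  # hypothetical import name for the scverse-compatible package

# Fit a shared latent representation across both modalities, modeling a
# sample-level covariate (e.g. batch or donor) probabilistically, as the
# abstract describes for post-hoc data integration.
model = multidgd.DGD(mdata, covariate_keys=["batch"])  # assumed constructor
model.train(n_epochs=500)                              # assumed method

# Store the joint representation for downstream scverse analysis
# (clustering, UMAP, etc.).
mdata.obsm["X_multiDGD"] = model.get_representation()  # assumed accessor
```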