Machine learning-based compression of quantum many-body physics: PCA and autoencoder representation of the vertex function
| Main Authors: | |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IOP Publishing, 2024-01-01 |
| Series: | Machine Learning: Science and Technology |
| Subjects: | |
| Online Access: | https://doi.org/10.1088/2632-2153/ad9f20 |
| Summary: | Theoretical approaches to quantum many-body physics require developing compact representations of the complexity of generic quantum states. This paper explores an interpretable data-driven approach utilizing principal component analysis (PCA) and autoencoder neural networks to compress the two-particle vertex, a key element in Feynman diagram approaches. We show that the linear PCA offers more physical insight and better out-of-distribution generalization than the nominally more expressive autoencoders. Even with ∼10–20 principal components, we find excellent reconstruction across the phase diagram, suggesting the existence of heretofore unrealized structures in the diagrammatic theory. We show that the principal components needed to describe the ferromagnetic state are not contained in the low-rank description of the Fermi liquid (FL) state, unlike those for antiferromagnetic and superconducting states, suggesting that the latter two states emerge from pre-existing fluctuations in the FL while ferromagnetism is driven by a different process. |
| ISSN: | 2632-2153 |
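
The summary above describes compressing the two-particle vertex into ∼10–20 principal components and reconstructing it from that low-rank representation. The snippet below is a minimal, illustrative sketch of that kind of PCA compression and reconstruction, not the authors' code: it substitutes synthetic random data for the real Matsubara-frequency vertex, and the array names, shapes, and the use of scikit-learn are assumptions.

```python
# Illustrative sketch only: PCA compression of a set of two-particle
# vertex functions. Synthetic noise stands in for real vertex data,
# which (unlike noise) would have genuine low-rank structure.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Stand-in dataset: n_samples vertex functions, each flattened from a
# (n_freq x n_freq) frequency grid into a feature vector.
n_samples, n_freq = 200, 32
vertex_samples = rng.standard_normal((n_samples, n_freq * n_freq))

# Keep ~20 principal components, the regime discussed in the abstract.
pca = PCA(n_components=20)
coeffs = pca.fit_transform(vertex_samples)       # compressed representation
reconstructed = pca.inverse_transform(coeffs)    # low-rank reconstruction

# Relative reconstruction error over the whole dataset.
rel_err = (np.linalg.norm(reconstructed - vertex_samples)
           / np.linalg.norm(vertex_samples))
print(f"explained variance: {pca.explained_variance_ratio_.sum():.3f}, "
      f"relative error: {rel_err:.3f}")
```

With real vertex data, the compressed coefficients `coeffs` would serve as the compact representation, and the explained-variance ratio would indicate how much of the vertex's structure is captured by the retained components.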