Discrete variational autoencoders for synthetic nighttime visible satellite imagery
| Main Authors: | , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Cambridge University Press, 2025-01-01 |
| Series: | Environmental Data Science |
| Subjects: | |
| Online Access: | https://www.cambridge.org/core/product/identifier/S2634460225100150/type/journal_article |
| Summary: | Visible satellite imagery (VIS) is essential for monitoring weather patterns and tracking ground surface changes associated with climate change. However, its availability is limited during nighttime. To address this limitation, we present a discrete variational autoencoder (VQVAE) method for translating infrared satellite imagery to VIS. This method departs from previous efforts that utilize a U-Net architecture. By removing the connections between corresponding layers of the encoder and decoder, the model learns a discrete and rich codebook of latent priors for the translation task. We train and test our model on mesoscale data from the Geostationary Operational Environmental Satellite (GOES) West Advanced Baseline Imager (ABI) sensor, spanning 4 years (2019 to 2022) using the Conditional Generative Adversarial Nets (CGAN) framework. This work demonstrates the practical use of a VQVAE for meteorological satellite image translation. Our approach provides a modular framework for data compression and reconstruction, with a latent representation space specifically designed for handling meteorological satellite imagery. |
| ISSN: | 2634-4602 |
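
The summary describes the core idea of the method: instead of U-Net skip connections, the encoder output is mapped onto a learned discrete codebook before decoding infrared imagery into a visible-band estimate. The following is a minimal sketch of such a vector-quantization bottleneck, not the authors' implementation; the `VectorQuantizer` class, codebook size, feature dimension, and commitment weight are illustrative assumptions rather than values taken from the article.

```python
# Minimal sketch of a VQ-VAE quantization bottleneck (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F


class VectorQuantizer(nn.Module):
    """Maps continuous encoder features to the nearest entry of a learned discrete codebook."""

    def __init__(self, num_codes=512, code_dim=64, beta=0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, code_dim)
        self.codebook.weight.data.uniform_(-1.0 / num_codes, 1.0 / num_codes)
        self.beta = beta  # weight of the commitment loss term

    def forward(self, z_e):
        # z_e: (B, C, H, W) continuous encoder output
        b, c, h, w = z_e.shape
        flat = z_e.permute(0, 2, 3, 1).reshape(-1, c)  # (B*H*W, C)
        # Squared distance from each feature vector to every codebook entry
        dist = (flat.pow(2).sum(1, keepdim=True)
                - 2 * flat @ self.codebook.weight.t()
                + self.codebook.weight.pow(2).sum(1))
        idx = dist.argmin(dim=1)  # index of the nearest code
        z_q = self.codebook(idx).reshape(b, h, w, c).permute(0, 3, 1, 2)
        # Codebook loss + commitment loss (standard VQ-VAE objective terms)
        loss = F.mse_loss(z_q, z_e.detach()) + self.beta * F.mse_loss(z_e, z_q.detach())
        # Straight-through estimator so gradients reach the encoder
        z_q = z_e + (z_q - z_e).detach()
        return z_q, loss, idx


if __name__ == "__main__":
    # Stand-in for encoder features of an infrared image; encoder/decoder are omitted.
    vq = VectorQuantizer(num_codes=512, code_dim=64)
    z_e = torch.randn(2, 64, 16, 16)
    z_q, vq_loss, codes = vq(z_e)
    print(z_q.shape, vq_loss.item(), codes.shape)
```

In this sketch the quantized latent `z_q` would be passed to a decoder producing the synthetic visible-band image, with adversarial training (as in the CGAN framework mentioned in the summary) applied on top of the reconstruction and codebook losses.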