Viewport prediction with cross modal multiscale transformer for 360° video streaming
Abstract: In the realm of immersive video technologies, efficient 360° video streaming remains a challenge due to high bandwidth requirements and the dynamic nature of user viewports. Most existing approaches neglect the dependencies between different modalities, and personal preferences are rare...
| Main Authors: | Yangsheng Tian, Yi Zhong, Yi Han, Fangyuan Chen |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Nature Portfolio, 2025-08-01 |
| Series: | Scientific Reports |
| Online Access: | https://doi.org/10.1038/s41598-025-16011-7 |
Similar Items
- ARMC-RL: Adaptive Caching With Reinforcement Learning for Efficient 360° Video Streaming in Edge Networks
  by: Minji Choi, et al.
  Published: (2025-01-01)
- Transformer-based latency prediction for stream processing task
  by: Zheng Chu, et al.
  Published: (2025-07-01)
- SN360: Semantic and Surface Normal Cascaded Multi-Task 360 Monocular Depth Estimation
  by: Payal Mohadikar, et al.
  Published: (2025-01-01)
- Rise and decline of 360-degree video: evolution and characteristics of immersive production in European public service media (2015-2023)
  by: Sara Pérez-Seijo
  Published: (2024-07-01)
- Convolutional Neural Networks for Continuous QoE Prediction in Video Streaming Services
  by: Tho Nguyen Duc, et al.
  Published: (2020-01-01)