A multimodal visual–language foundation model for computational ophthalmology
Abstract: Early detection of eye diseases is vital for preventing vision loss. Existing ophthalmic artificial intelligence models focus on single modalities, overlooking multi-view information and struggling with rare diseases due to long-tail distributions. We propose EyeCLIP, a multimodal visual-la...
| Main Authors: | Danli Shi, Weiyi Zhang, Jiancheng Yang, Siyu Huang, Xiaolan Chen, Pusheng Xu, Kai Jin, Shan Lin, Jin Wei, Mayinuer Yusufu, Shunming Liu, Qing Zhang, Zongyuan Ge, Xun Xu, Mingguang He |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Nature Portfolio, 2025-06-01 |
| Series: | npj Digital Medicine |
| Online Access: | https://doi.org/10.1038/s41746-025-01772-2 |
Similar Items
- DeepSeek-R1 outperforms Gemini 2.0 Pro, OpenAI o1, and o3-mini in bilingual complex ophthalmology reasoning
  by: Pusheng Xu, et al.
  Published: (2025-08-01)
- Tackling visual impairment: emerging avenues in ophthalmology
  by: Fang Lin, et al.
  Published: (2025-04-01)
- Neuro-ophthalmology and migraine: visual aura and its neural basis
  by: Hajar Nasir Tukur, et al.
  Published: (2025-08-01)
- A study of ophthalmological evaluation and visual rehabilitation of intellectually disabled females
  by: Vaishali Lalit Une, et al.
  Published: (2025-07-01)
- Healthy lifestyle habits, educational attainment, and the risk of 45 age-related health and mortality outcomes in the UK: A prospective cohort study
  by: Yu Huang, et al.
  Published: (2025-05-01)