Supervised Contrastive Learning for 3D Cross-Modal Retrieval
Interoperability between different virtual platforms requires the ability to search and transfer digital assets across platforms. Digital assets in virtual platforms are represented in different forms or modalities, such as images, meshes, and point clouds. The cross-modal retrieval of three-dimensi...
| Main Authors: | Yeon-Seung Choo, Boeun Kim, Hyun-Sik Kim, Yong-Suk Park |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2024-11-01 |
| Series: | Applied Sciences |
| Online Access: | https://www.mdpi.com/2076-3417/14/22/10322 |
Similar Items
- ClusterE-ZSL: A Novel Cluster-Based Embedding for Enhanced Zero-Shot Learning in Contrastive Pre-Training Cross-Modal Retrieval
  by: Umair Tariq, et al.
  Published: (2024-01-01)
- CausMatch: Causal Matching Learning With Counterfactual Preference Framework for Cross-Modal Retrieval
  by: Chen Chen, et al.
  Published: (2025-01-01)
- Image–Text Matching Model Based on CLIP Bimodal Encoding
  by: Yihuan Zhu, et al.
  Published: (2024-11-01)
- Bridging the gap: multi-granularity representation learning for text-based vehicle retrieval
  by: Xue Bo, et al.
  Published: (2024-11-01)
- Cross-Modality Data Augmentation for Aerial Object Detection with Representation Learning
  by: Chiheng Wei, et al.
  Published: (2024-12-01)