MDCKE: Multimodal deep-context knowledge extractor that integrates contextual information
Extracting comprehensive information from diverse data sources remains a significant challenge in contemporary research. Although multimodal Named Entity Recognition (NER) and Relation Extraction (RE) tasks have garnered considerable attention, existing methods often focus on surface-level informa...
Main Authors: Hyojin Ko, Joon Yoo, Ok-Ran Jeong
Format: Article
Language: English
Published: Elsevier, 2025-04-01
Series: Alexandria Engineering Journal
Online Access: http://www.sciencedirect.com/science/article/pii/S1110016825001474
Similar Items
- Object detection and multimodal learning for product recommendations
  by: Karolina Selwon, et al.
  Published: (2025-01-01)
- Enhancing foundation models for scientific discovery via multimodal knowledge graph representations
  by: Vanessa Lopez, et al.
  Published: (2025-01-01)
- TMFN: a text-based multimodal fusion network with multi-scale feature extraction and unsupervised contrastive learning for multimodal sentiment analysis
  by: Junsong Fu, et al.
  Published: (2025-01-01)
- Using Information Extraction to Normalize the Training Data for Automatic Radiology Report Generation
  by: Yuxiang Liao, et al.
  Published: (2024-01-01)
- Precise Recognition and Feature Depth Analysis of Tennis Training Actions Based on Multimodal Data Integration and Key Action Classification
  by: Weichao Yang
  Published: (2025-01-01)