Multimodal Semantics Extraction from User-Generated Videos


Bibliographic Details
Main Authors: Francesco Cricri, Kostadin Dabov, Mikko J. Roininen, Sujeet Mate, Igor D. D. Curcio, Moncef Gabbouj
Format: Article
Language: English
Published: Wiley 2012-01-01
Series: Advances in Multimedia
Online Access: http://dx.doi.org/10.1155/2012/292064
Description
Summary: User-generated video content has grown so rapidly that it now outpaces professional content creation. In this work we develop methods that analyze contextual information from multiple user-generated videos in order to obtain semantic information about public happenings (e.g., sport and live music events) being recorded in these videos. One of the key contributions of this work is the joint utilization of different data modalities, including data captured by auxiliary sensors during the video recording performed by each user. In particular, we analyze GPS data, magnetometer data, accelerometer data, and video- and audio-content data. We use these data modalities to infer information about the event being recorded, in terms of layout (e.g., stadium), genre, indoor versus outdoor scene, and the main area of interest of the event. Furthermore, we propose a method that automatically identifies the optimal set of cameras to be used in a multicamera video production. Finally, we detect the camera users who fall within the field of view of other cameras recording at the same public happening. We show that the proposed multimodal analysis methods perform well on various recordings obtained at real sport events and live music performances.
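The field-of-view detection step described above can be illustrated with a minimal sketch. Assuming each camera reports a GPS position and a magnetometer-derived compass heading of its optical axis, one camera falls inside another's view when the bearing from the recording camera to the other camera lies within half the horizontal field of view of its heading. The function names, dictionary keys, and the 60° field-of-view value below are illustrative assumptions, not the paper's actual implementation:

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees [0, 360)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360.0

def in_field_of_view(cam_a, cam_b, fov_deg=60.0):
    """True if camera B's position falls within camera A's horizontal field of view.

    cam_a and cam_b are dicts with 'lat' and 'lon' (from GPS); cam_a also
    carries 'heading', the compass direction of its optical axis (from the
    magnetometer). fov_deg is an assumed horizontal field of view.
    """
    b = bearing_deg(cam_a['lat'], cam_a['lon'], cam_b['lat'], cam_b['lon'])
    # Wrapped angular difference between bearing and heading, in [0, 180].
    diff = abs((b - cam_a['heading'] + 180.0) % 360.0 - 180.0)
    return diff <= fov_deg / 2.0

# Camera A heads due east (90°); camera B stands slightly to its east.
a = {'lat': 0.0, 'lon': 0.0, 'heading': 90.0}
b = {'lat': 0.0, 'lon': 0.001}
print(in_field_of_view(a, b))  # True
```

A camera located to A's north under the same heading would yield a 90° offset and fall outside the assumed field of view.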
ISSN: 1687-5680
1687-5699