InputJump: Augmented reality-facilitated cross-device input fusion based on spatial and semantic information

Bibliographic Details
Main Authors: Xin Zeng, Xiaoyu Wang, Tengxiang Zhang, Yukang Yan, Yiqiang Chen
Format: Article
Language: English
Published: KeAi Communications Co., Ltd. 2024-12-01
Series: Virtual Reality & Intelligent Hardware
Subjects:
Online Access:http://www.sciencedirect.com/science/article/pii/S2096579624000639
Description
Summary: The proliferation of computing devices requires seamless cross-device interactions. Augmented reality (AR) headsets can facilitate interactions with existing computers owing to their user-centered views and natural inputs. In this study, we propose InputJump, a user-centered cross-device input fusion method that maps multi-modal cross-device inputs to interactive elements on graphical interfaces. InputJump calculates the spatial coordinates of the input target positions and of the interactive elements within the coordinate system of the AR headset. It also extracts semantic descriptions of inputs and elements using large language models (LLMs). These two types of information from different inputs (e.g., gaze, gesture, mouse, and keyboard) are fused to map each input onto an interactive element. The proposed method is explained in detail and implemented on both an AR headset and a desktop PC. We then conducted a user study and extensive simulations to validate the proposed method. The results showed that InputJump can accurately associate a fused input with the target interactive element, enabling a more natural and flexible interaction experience.
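
The abstract describes fusing spatial evidence (3D positions in the headset's coordinate system) with semantic evidence (LLM-derived descriptions) to decide which interactive element an input targets. The sketch below illustrates one possible form of such a fusion step; the function names, the distance-decay kernel, the cosine-similarity treatment of the semantic descriptions, and the equal weighting are all assumptions made for illustration and do not reflect the authors' actual implementation.

import math

# Illustrative sketch only: hypothetical names and weights, not the paper's method.
# Idea: combine spatial proximity and semantic similarity to pick the interactive
# element that a multi-modal input (e.g., gaze + gesture) most likely targets.

def spatial_score(input_pos, element_pos):
    # Higher when the input's 3D target position (in the AR headset's
    # coordinate system) is closer to the element's position.
    dist = math.dist(input_pos, element_pos)
    return math.exp(-dist)  # assumed distance-decay kernel

def semantic_score(input_vec, element_vec):
    # Cosine similarity between embedding vectors of the semantic descriptions
    # of the input and the element (assumed vector representation).
    dot = sum(a * b for a, b in zip(input_vec, element_vec))
    norm = math.sqrt(sum(a * a for a in input_vec)) * math.sqrt(sum(b * b for b in element_vec))
    return dot / norm if norm else 0.0

def fuse_and_map(input_pos, input_vec, elements, w_spatial=0.5, w_semantic=0.5):
    # Return the element with the highest weighted combination of both scores.
    # The 50/50 weighting is an assumption for illustration.
    return max(
        elements,
        key=lambda e: w_spatial * spatial_score(input_pos, e["pos"])
        + w_semantic * semantic_score(input_vec, e["desc_vec"]),
    )

# Example: two on-screen elements expressed in headset coordinates.
elements = [
    {"name": "Save button", "pos": (0.10, 0.20, 1.00), "desc_vec": [0.9, 0.1, 0.0]},
    {"name": "Search box", "pos": (0.40, 0.25, 1.00), "desc_vec": [0.1, 0.8, 0.3]},
]
target = fuse_and_map((0.12, 0.21, 1.02), [0.85, 0.15, 0.05], elements)
print(target["name"])  # -> Save button

In this toy example, the gaze/gesture target position and its semantic description both point to the "Save button" element, so the fused score selects it even though the "Search box" is also nearby.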
ISSN:2096-5796