Combining Unity with machine vision to create low latency, flexible and simple virtual realities

Bibliographic Details
Main Authors: Yuri Ogawa, Raymond Aoukar, Richard Leibbrandt, Jake S. Manger, Zahra M. Bagheri, Luke Turnbull, Chris Johnston, Pavan K. Kaushik, Jaxon Mitchell, Jan M. Hemmi, Karin Nordström
Format: Article
Language: English
Published: Wiley, 2025-01-01
Series: Methods in Ecology and Evolution
Subjects: arthropod vision; closed loop; gain; motion vision; naturalistic stimuli; navigation
Online Access: https://doi.org/10.1111/2041-210X.14449
author Yuri Ogawa
Raymond Aoukar
Richard Leibbrandt
Jake S. Manger
Zahra M. Bagheri
Luke Turnbull
Chris Johnston
Pavan K. Kaushik
Jaxon Mitchell
Jan M. Hemmi
Karin Nordström
collection DOAJ
description Abstract: In recent years, virtual reality arenas have become increasingly popular for quantifying visual behaviours. By using the actions of a constrained animal to control the visual scenery, the animal perceives that it is moving through a virtual world. Importantly, as the animal is constrained in space, behavioural quantification is facilitated. Furthermore, using computer‐generated visual scenery allows for identification of visual triggers of behaviour. We created a novel virtual reality arena combining machine vision with the game engine Unity. For tethered flight, we enhanced an existing multi‐modal virtual reality arena, MultiMoVR, but tracked wing movements using DeepLabCut‐live (DLC‐live). For tethered walking animals, we used FicTrac to track the motion of a trackball. In both cases, real‐time tracking was interfaced with Unity to control the location and rotation of the tethered animal's avatar in the virtual world. We developed a user‐friendly Unity Editor interface, CAVE, to simplify experimental design and data storage without the need for coding. We show that both the DLC‐live‐Unity and the FicTrac‐Unity configurations close the feedback loop effectively and quickly. We show that closed‐loop feedback reduces behavioural artefacts exhibited by walking crabs in open‐loop scenarios, and that flying Eristalis tenax hoverflies navigate towards virtual flowers in closed loop. We show examples of how the CAVE interface can enable experimental sequencing control, including the use of avatar proximity to virtual objects of interest. Our results show that combining Unity with machine vision tools provides an easy and flexible virtual reality environment that can be readily adjusted to new experiments and species. This can be implemented programmatically in Unity, or by using our new tool CAVE, which allows users to design new experiments without additional programming. We provide resources for replicating experiments and our interface CAVE via GitHub, together with user manuals and instruction videos, for sharing with the wider scientific community.
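
As an illustration of the closed-loop principle the abstract describes, the following is a minimal sketch of a Unity C# component that reads FicTrac's live pose stream over a local TCP socket and applies it to an avatar's transform. This is not the authors' published CAVE code: the class name, port, gain parameter and column indices are illustrative assumptions, and FicTrac's socket output format and field order should be verified against the version and configuration in use.

// Minimal sketch (assumed names and indices): streaming FicTrac pose data into Unity.
using System.Globalization;
using System.Net.Sockets;
using System.Text;
using UnityEngine;

public class FicTracAvatar : MonoBehaviour
{
    [SerializeField] string host = "127.0.0.1";
    [SerializeField] int port = 5000;     // must match FicTrac's configured output socket
    [SerializeField] float gain = 1.0f;   // translational gain applied to ball displacement

    // Assumed 0-based columns of FicTrac's comma-separated data record:
    // integrated x/y position (lab frame) and integrated heading (radians).
    // Verify against the FicTrac version and configuration in use.
    const int ColX = 14, ColY = 15, ColHeading = 16;

    TcpClient client;
    NetworkStream stream;
    string pending = "";

    void Start()
    {
        client = new TcpClient(host, port);
        stream = client.GetStream();
    }

    void Update()
    {
        // Drain everything FicTrac has sent since the last rendered frame,
        // then apply only the newest complete record to minimise latency.
        var chunk = new byte[4096];
        while (stream.DataAvailable)
        {
            int n = stream.Read(chunk, 0, chunk.Length);
            pending += Encoding.ASCII.GetString(chunk, 0, n);
        }
        int end = pending.LastIndexOf('\n');
        if (end < 0) return;
        string[] records = pending.Substring(0, end).Split('\n');
        pending = pending.Substring(end + 1);

        string[] f = records[records.Length - 1].Split(',');
        float x = float.Parse(f[ColX], CultureInfo.InvariantCulture);
        float y = float.Parse(f[ColY], CultureInfo.InvariantCulture);
        float heading = float.Parse(f[ColHeading], CultureInfo.InvariantCulture);

        // Map FicTrac's 2D lab-frame pose onto Unity's ground plane (x, z),
        // and its heading onto rotation about the vertical (y) axis.
        transform.position = new Vector3(x * gain, transform.position.y, y * gain);
        transform.rotation = Quaternion.Euler(0f, heading * Mathf.Rad2Deg, 0f);
    }

    void OnDestroy()
    {
        client?.Close();
    }
}

Reading only the most recent record each frame, rather than every queued record, is one simple way to keep the rendered viewpoint tied to the animal's latest pose; the authors' actual implementation, including the CAVE Editor interface for sequencing experiments without code, is provided via the GitHub resources mentioned in the abstract.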
format Article
id doaj-art-1b2904bb1f2448409e2f4daebca83f6c
institution Kabale University
issn 2041-210X
language English
publishDate 2025-01-01
publisher Wiley
record_format Article
series Methods in Ecology and Evolution
spelling Methods in Ecology and Evolution 16(1): 126–144, 2025-01-01. Wiley. ISSN 2041-210X. https://doi.org/10.1111/2041-210X.14449 (title, abstract and keywords as above)
Author affiliations:
Yuri Ogawa, Raymond Aoukar, Luke Turnbull, Chris Johnston, Jaxon Mitchell, Karin Nordström: Flinders Health and Medical Research Institute, Flinders University, Adelaide, South Australia, Australia
Richard Leibbrandt: College of Science and Engineering, Flinders University, Adelaide, South Australia, Australia
Jake S. Manger, Zahra M. Bagheri, Jan M. Hemmi: School of Biological Sciences and UWA Oceans Institute, University of Western Australia, Crawley, Western Australia, Australia
Pavan K. Kaushik: Department of Collective Behavior, Max Planck Institute of Animal Behavior, Konstanz, Germany
title Combining Unity with machine vision to create low latency, flexible and simple virtual realities
topic arthropod vision
closed loop
gain
motion vision
naturalistic stimuli
navigation
url https://doi.org/10.1111/2041-210X.14449