A spiking neural network for active efficient coding
Biological vision systems simultaneously learn to efficiently encode their visual inputs and to control the movements of their eyes based on the visual input they sample. This autonomous joint learning of visual representations and actions has previously been modeled in the Active Efficient Coding (AEC) framework and implemented using traditional frame-based cameras. However, modern event-based cameras are inspired by the retina and offer advantages in terms of acquisition rate, dynamic range, and power consumption. Here, we propose a first AEC system that is fully implemented as a Spiking Neural Network (SNN) driven by inputs from an event-based camera. This input is efficiently encoded by a two-layer SNN, which in turn feeds into a spiking reinforcement learner that learns motor commands to maximize an intrinsic reward signal. This reward signal is computed directly from the activity levels of the first two layers. We test our approach on two different behaviors: visual tracking of a translating target and stabilizing the orientation of a rotating target. To the best of our knowledge, our work represents the first ever fully spiking AEC model.
Main Authors: | Thomas Barbier, Céline Teulière, Jochen Triesch |
---|---|
Format: | Article |
Language: | English |
Published: | Frontiers Media S.A., 2025-01-01 |
Series: | Frontiers in Robotics and AI |
Subjects: | active efficient coding; spiking neural network; event-based cameras; unsupervised learning; reinforcement learning |
Online Access: | https://www.frontiersin.org/articles/10.3389/frobt.2024.1435197/full |
author | Thomas Barbier; Céline Teulière; Jochen Triesch |
collection | DOAJ |
description | Biological vision systems simultaneously learn to efficiently encode their visual inputs and to control the movements of their eyes based on the visual input they sample. This autonomous joint learning of visual representations and actions has previously been modeled in the Active Efficient Coding (AEC) framework and implemented using traditional frame-based cameras. However, modern event-based cameras are inspired by the retina and offer advantages in terms of acquisition rate, dynamic range, and power consumption. Here, we propose a first AEC system that is fully implemented as a Spiking Neural Network (SNN) driven by inputs from an event-based camera. This input is efficiently encoded by a two-layer SNN, which in turn feeds into a spiking reinforcement learner that learns motor commands to maximize an intrinsic reward signal. This reward signal is computed directly from the activity levels of the first two layers. We test our approach on two different behaviors: visual tracking of a translating target and stabilizing the orientation of a rotating target. To the best of our knowledge, our work represents the first ever fully spiking AEC model. |
format | Article |
id | doaj-art-1cf5537a5e9b4f5292c4b340e62deb0c |
institution | Kabale University |
issn | 2296-9144 |
language | English |
publishDate | 2025-01-01 |
publisher | Frontiers Media S.A. |
record_format | Article |
series | Frontiers in Robotics and AI |
author affiliations | Thomas Barbier: SIGMA Clermont, Centre National de la Recherche Scientifique, Institut Pascal, Université Clermont Auvergne, Clermont-Ferrand, France; Céline Teulière: SIGMA Clermont, Centre National de la Recherche Scientifique, Institut Pascal, Université Clermont Auvergne, Clermont-Ferrand, France; Jochen Triesch: Life- and Neurosciences, Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany |
title | A spiking neural network for active efficient coding |
topic | active efficient coding; spiking neural network; event-based cameras; unsupervised learning; reinforcement learning |
url | https://www.frontiersin.org/articles/10.3389/frobt.2024.1435197/full |