Distributed representations enable robust multi-timescale symbolic computation in neuromorphic hardware
Programming recurrent spiking neural networks (RSNNs) to robustly perform multi-timescale computation remains a difficult challenge. To address this, we describe a single-shot weight learning scheme to embed robust multi-timescale dynamics into attractor-based RSNNs, by exploiting the properties of...
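The abstract describes a single-shot weight learning rule that embeds patterns as attractors of a recurrent network. As a generic illustration of that idea only (not the authors' actual scheme, which targets spiking networks and distributed representations), a classic Hopfield-style outer-product embedding shows how one-shot weights can make stored patterns robust fixed points:

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 5  # neurons, stored patterns (well below Hopfield capacity ~0.14*N)

# Random bipolar patterns to be embedded as fixed-point attractors.
patterns = rng.choice([-1, 1], size=(P, N))

# Single-shot Hebbian outer-product rule: W = (1/N) * sum_p x_p x_p^T.
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0)  # no self-connections

def recall(state, steps=50):
    """Synchronous sign-threshold updates until a fixed point is reached."""
    for _ in range(steps):
        new = np.sign(W @ state)
        new[new == 0] = 1
        if np.array_equal(new, state):
            break
        state = new
    return state

# Corrupt a stored pattern by flipping 10% of its entries, then recall it.
probe = patterns[0].copy()
flip = rng.choice(N, size=N // 10, replace=False)
probe[flip] *= -1
recovered = recall(probe)
overlap = (recovered @ patterns[0]) / N
print(f"overlap with stored pattern: {overlap:.2f}")
```

The single matrix assignment is the "single-shot" step: no iterative training is needed for the patterns to become attractors of the recurrent dynamics.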
Main Authors: Madison Cotteret, Hugh Greatorex, Alpha Renner, Junren Chen, Emre Neftci, Huaqiang Wu, Giacomo Indiveri, Martin Ziegler, Elisabetta Chicca
Format: Article
Language: English
Published: IOP Publishing, 2025-01-01
Series: Neuromorphic Computing and Engineering
Online Access: https://doi.org/10.1088/2634-4386/ada851
Similar Items
- Key point of NFV hardware resource pool planning and construction
  by: Lei SHEN, et al. Published: (2018-06-01)
- Hardware reconfigurable coding and evolution algorithm based on evolvable hardware
  by: Ting WANG, et al. Published: (2012-08-01)
- Efficient Hardware Implementation of a Multi-Layer Gradient-Free Online-Trainable Spiking Neural Network on FPGA
  by: Ali Mehrabi, et al. Published: (2024-01-01)
- Approximate CNN Hardware Accelerators for Resource Constrained Devices
  by: P Thejaswini, et al. Published: (2025-01-01)
- Research on range matching for wire-speed hardware NIDS
  by: CHEN Shu-hui, et al. Published: (2006-01-01)