First-of-its-kind AI model for bioacoustic detection using a lightweight associative memory Hopfield neural network

Bibliographic Details
Main Authors: Andrew Gascoyne, Wendy Lomas
Format: Article
Language: English
Published: Elsevier 2025-11-01
Series: Ecological Informatics
Online Access: http://www.sciencedirect.com/science/article/pii/S1574954125003917
Description
Summary: A growing issue within conservation bioacoustics is the laborious task of analysing the vast amount of data generated from the use of passive acoustic monitoring devices. In this paper, we present an alternative AI model which has the potential to help alleviate this problem. Our model formulation addresses the key issues encountered when using current AI models for bioacoustic analysis, namely: the limited training data available; the environmental impact, particularly the energy consumption and carbon footprint of training and implementing these models; and the associated hardware requirements. The model developed in this work uses associative memory, via a transparent and explainable Hopfield neural network, to store signals and detect similar signals, which can then be used to classify species. Training is rapid (3 milliseconds), as only one representative signal is required for each target sound within a dataset. The model is fast, taking only 5.4 seconds to pre-process and classify all 10,384 publicly available bat recordings on a standard Apple MacBook Air. The model is also lightweight, i.e., it has a small memory footprint of 144.09 MB of RAM. Hence, the low computational demands make the model ideal for use on a variety of standard personal devices, with potential for deployment in the field via edge-processing devices. It is also competitively accurate, achieving up to 86% precision on the labelled dataset used to evaluate the model; in fact, we could not find a single case of disagreement between the model and manual identification via expert field guides. Although a dataset of bat echolocation calls was chosen to demonstrate this first-of-its-kind AI model, trained on only two representative echolocation calls, the model is not species specific. In conclusion, we propose an equitable AI model that has the potential to be a game changer for fast, lightweight, sustainable, transparent, explainable and accurate bioacoustic analysis.
ISSN: 1574-9541
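
For orientation, the sketch below illustrates the kind of one-shot associative-memory mechanism the summary describes: a classical Hopfield network that stores a handful of representative "call templates" via the Hebbian outer-product rule and classifies a noisy probe by recall. This is a toy under stated assumptions, not the paper's implementation; the names train_hopfield, recall and classify, and the random binarised data, are hypothetical.

    import numpy as np

    def train_hopfield(patterns):
        # One-shot Hebbian storage of +/-1 patterns: sum of outer products,
        # no gradient descent, hence the millisecond-scale "training".
        n = patterns.shape[1]
        W = np.zeros((n, n))
        for p in patterns:
            W += np.outer(p, p)
        np.fill_diagonal(W, 0)  # no self-connections
        return W / len(patterns)

    def recall(W, probe, steps=10):
        # Iterate synchronous sign updates so the probe settles
        # towards the nearest stored pattern.
        s = probe.astype(float).copy()
        for _ in range(steps):
            s = np.sign(W @ s)
            s[s == 0] = 1.0  # break ties consistently
        return s

    def classify(W, templates, probe):
        # Index of the stored template with the largest overlap
        # (dot product) with the recalled state.
        s = recall(W, probe)
        return int(np.argmax(templates @ s))

    # Toy usage: two 64-dimensional +/-1 "templates" (hypothetical
    # binarised calls) and a probe corrupted by ~25% sign flips.
    rng = np.random.default_rng(0)
    templates = rng.choice([-1.0, 1.0], size=(2, 64))
    W = train_hopfield(templates)
    probe = templates[1] * rng.choice([1.0, 1.0, 1.0, -1.0], size=64)
    print("classified as template", classify(W, templates, probe))

Note how storage is a single pass over the stored patterns, which mirrors the summary's claim that only one representative signal per target sound is required and that training takes milliseconds rather than hours.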