A recurrent sigma pi sigma neural network

Bibliographic Details
Main Authors: Fei Deng, Shibin Liang, Kaiguo Qian, Jing Yu, Xuanxuan Li
Format: Article
Language: English
Published: Nature Portfolio 2025-01-01
Series: Scientific Reports
Online Access:https://doi.org/10.1038/s41598-024-84299-y
Description
Summary: Abstract In this paper, a novel recurrent sigma-pi-sigma neural network (RSPSNN), which combines the advantages of higher-order and recurrent neural networks, is proposed. The batch gradient algorithm is used to train the RSPSNN, searching for the optimal weights by minimizing the mean squared error (MSE). To substantiate the unique equilibrium state of the RSPSNN, the convergence of its training is proven; this stability property is one of the most significant indices of the network's effectiveness and overcomes the instability problem in training such networks. Finally, for a more precise evaluation of its validity, five empirical experiments are carried out: the RSPSNN is successfully applied to function approximation, prediction, the parity problem, classification, and image simulation, which verifies its effectiveness and practicability.
ISSN: 2045-2322