FPGA-Based Deep Neural Network Implementation for Handwritten Digit Recognition

Bibliographic Details
Main Authors: Matej Štajnbrikner, Igor Valek, Tomislav Matić, Mario Vranješ
Format: Article
Language: English
Published: Wiley 2025-01-01
Series: Advances in Multimedia
Online Access: http://dx.doi.org/10.1155/am/8901861
Description
Summary: This paper presents a field programmable gate array (FPGA)–based implementation of a deep neural network (DNN) for handwritten digit recognition. We propose a fully connected four-layer neural network in which the hidden layers use the ReLU activation function and the output layer uses the Softmax activation function. The neural network model, including the forward propagation algorithm and the backward propagation (BP) algorithm, is implemented entirely in an FPGA system, enabling both testing and training of the network. Mathematical operations in the network are performed on 32-bit floating-point data. The proposed network achieves promising results in terms of system simplicity and low consumption of hardware resources while maintaining acceptable recognition accuracy and algorithm execution speed. The system is implemented on Xilinx’s ZYBO development board and achieves a digit recognition precision of 92.8%. Training and test images for system evaluation are obtained from the MNIST database. The system occupies 10,020 (57%) lookup tables, 7781 (22%) registers, 17 (21%) DSP blocks, and 3 (5%) BRAM blocks. The maximum power consumption of the system is 1.602 W at an operating frequency of 100 MHz.
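The architecture described in the summary (a fully connected four-layer network, ReLU hidden layers, Softmax output) can be sketched as a forward pass in plain Python. This is only an illustrative model of the paper's topology, not the FPGA implementation: the record does not state the layer sizes, so 784-16-16-10 is an assumed choice (MNIST inputs are 28×28 = 784 pixels, 10 digit classes), and Python's native floats stand in for the 32-bit floating-point arithmetic used on the FPGA.

```python
import math
import random

# Assumed layer sizes: the record does not give them, so 784-16-16-10
# is an illustrative choice (784 MNIST pixels in, 10 digit classes out).
SIZES = [784, 16, 16, 10]

def relu(v):
    # ReLU activation, used by the hidden layers in the described network.
    return [x if x > 0.0 else 0.0 for x in v]

def softmax(v):
    # Softmax activation for the output layer; subtracting the max
    # keeps exp() numerically stable.
    m = max(v)
    e = [math.exp(x - m) for x in v]
    s = sum(e)
    return [x / s for x in e]

def matvec(w, b, v):
    # One fully connected layer: weight matrix w (rows = outputs),
    # bias vector b, input vector v.
    return [sum(wi * vi for wi, vi in zip(row, v)) + bi
            for row, bi in zip(w, b)]

def init_layers(sizes, rng):
    # Small random weights, zero biases, one (w, b) pair per layer.
    return [([[rng.uniform(-0.1, 0.1) for _ in range(n_in)]
              for _ in range(n_out)],
             [0.0] * n_out)
            for n_in, n_out in zip(sizes, sizes[1:])]

def forward(layers, x):
    # Forward propagation: ReLU on hidden layers, Softmax on the last,
    # matching the activations named in the abstract.
    for i, (w, b) in enumerate(layers):
        z = matvec(w, b, x)
        x = softmax(z) if i == len(layers) - 1 else relu(z)
    return x

rng = random.Random(0)
layers = init_layers(SIZES, rng)
probs = forward(layers, [0.5] * SIZES[0])  # dummy "image" of constant pixels
```

With untrained random weights the output is meaningless as a classification, but `probs` is a valid probability distribution over the 10 digit classes: non-negative entries summing to 1, which is what the Softmax output layer guarantees.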
ISSN:1687-5699