Comparison of Vitis-AI and FINN for implementing convolutional neural networks on FPGA

Convolutional neural networks (CNNs) are essential for image classification and detection, and their implementation in embedded systems is becoming increasingly attractive due to their compact size and low power consumption. Field-Programmable Gate Arrays (FPGAs) have emerged as a promising option,...

Full description

Saved in:
Bibliographic Details
Main Authors: Nicolás Urbano Pintos, Héctor Lacomi, Mario Lavorato
Format: Article
Language: English
Published: Universidad de Buenos Aires 2024-12-01
Series: Revista Elektrón
Subjects:
Online Access: http://elektron.fi.uba.ar/index.php/elektron/article/view/200
_version_ 1846121645822967808
author Nicolás Urbano Pintos
Héctor Lacomi
Mario Lavorato
author_facet Nicolás Urbano Pintos
Héctor Lacomi
Mario Lavorato
author_sort Nicolás Urbano Pintos
collection DOAJ
description Convolutional neural networks (CNNs) are essential for image classification and detection, and their implementation in embedded systems is becoming increasingly attractive due to their compact size and low power consumption. Field-Programmable Gate Arrays (FPGAs) have emerged as a promising option thanks to their low latency and high energy efficiency. Vitis AI and FINN are two development environments that automate the implementation of CNNs on FPGAs. Vitis AI uses a deep learning processing unit (DPU) and memory accelerators, while FINN is based on a streaming architecture and fine-tunes parallelization. Both environments implement parameter quantization techniques to reduce memory usage. This work extends previous comparisons by implementing four models with different numbers of layers in both environments on the Xilinx Kria KV260 FPGA platform. The complete process from training to evaluation on FPGA, including quantization and hardware implementation, is described in detail. The results show that FINN provides lower latency, higher throughput, and better energy efficiency than Vitis AI. However, Vitis AI stands out for its simplicity in model training and ease of implementation on FPGA. The main finding of this study is that as model complexity increases (with more layers in the neural networks), the differences in performance and energy efficiency between FINN and Vitis AI shrink significantly.
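The abstract describes parameter quantization as the memory-saving step both toolchains rely on (Vitis AI typically via 8-bit post-training quantization, FINN via low-bit quantization-aware training). As a rough illustration of the underlying idea only — the function names below are invented for this sketch and are not part of the Vitis AI or FINN APIs — symmetric linear quantization maps floating-point weights to narrow integers plus a single scale factor:

```python
# Sketch of symmetric linear quantization: float weights become signed
# integers in [-(2**(b-1)-1), 2**(b-1)-1] plus one float scale, so an
# int8 tensor needs 4x less memory than float32. Illustrative only;
# neither Vitis AI nor FINN exposes these exact functions.

def quantize_symmetric(weights, num_bits=8):
    """Quantize a list of float weights to signed num_bits integers."""
    qmax = 2 ** (num_bits - 1) - 1                  # e.g. 127 for int8
    scale = max(abs(w) for w in weights) / qmax or 1.0
    q = [max(-qmax, min(qmax, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from integers and the scale."""
    return [qi * scale for qi in q]

weights = [0.82, -1.27, 0.05, 0.4]
q, scale = quantize_symmetric(weights, num_bits=8)  # q = [82, -127, 5, 40]
approx = dequantize(q, scale)                       # close to the originals
```

At 8 bits the reconstruction error is small, which is why post-training quantization usually suffices there; at the very low bit widths FINN targets (e.g. 2-4 bits), the rounding error grows and quantization-aware training is used to compensate.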
format Article
id doaj-art-5b8ffecde55d49d9aebd2ece47d5fe7d
institution Kabale University
issn 2525-0159
language English
publishDate 2024-12-01
publisher Universidad de Buenos Aires
record_format Article
series Revista Elektrón
spelling doaj-art-5b8ffecde55d49d9aebd2ece47d5fe7d 2024-12-15T19:19:10Z
eng; Universidad de Buenos Aires; Revista Elektrón; ISSN 2525-0159
2024-12-01; vol. 8, no. 2, pp. 61-70; doi: 10.37537/rev.elektron.8.2.200.2024
Comparison of Vitis-AI and FINN for implementing convolutional neural networks on FPGA
Nicolás Urbano Pintos; Héctor Lacomi; Mario Lavorato
Universidad Tecnológica Nacional - Facultad Regional Haedo CITEDEF
http://elektron.fi.uba.ar/index.php/elektron/article/view/200
fpga; cnn; finn; vitis-ai; quantization
spellingShingle Nicolás Urbano Pintos
Héctor Lacomi
Mario Lavorato
Comparison of Vitis-AI and FINN for implementing convolutional neural networks on FPGA
Revista Elektrón
fpga
cnn
finn
vitis-ai
quantization
title Comparison of Vitis-AI and FINN for implementing convolutional neural networks on FPGA
title_full Comparison of Vitis-AI and FINN for implementing convolutional neural networks on FPGA
title_fullStr Comparison of Vitis-AI and FINN for implementing convolutional neural networks on FPGA
title_full_unstemmed Comparison of Vitis-AI and FINN for implementing convolutional neural networks on FPGA
title_short Comparison of Vitis-AI and FINN for implementing convolutional neural networks on FPGA
title_sort comparison of vitis ai and finn for implementing convolutional neural networks on fpga
topic fpga
cnn
finn
vitis-ai
quantization
url http://elektron.fi.uba.ar/index.php/elektron/article/view/200
work_keys_str_mv AT nicolasurbanopintos comparisonofvitisaiandfinnforimplementingconvolutionalneuralnetworksonfpga
AT hectorlacomi comparisonofvitisaiandfinnforimplementingconvolutionalneuralnetworksonfpga
AT mariolavorato comparisonofvitisaiandfinnforimplementingconvolutionalneuralnetworksonfpga