FPGA SoC Implementation of Adaptive Deep Neural Network-Based Multimodal Edge Intelligence for Internet of Medical Things



Bibliographic Details
Main Authors: Nikhil B. Gaikwad, Smith K. Khare, Dinesh Mendhe, Hasan Mir, Sokol Kosta, U. Rajendra Acharya
Format: Article
Language:English
Published: IEEE 2025-01-01
Series:IEEE Access
Subjects:
Online Access:https://ieeexplore.ieee.org/document/11096547/
Description
Summary: In emergency healthcare services, accurate and timely decision-making can determine whether a patient lives or dies. The emergence of edge intelligence makes these service goals achievable for the Internet of Medical Things (IoMT), in contrast with cloud-centric approaches. To assist medical personnel in intensive care units (ICU), we present the design of a network edge gateway that performs resource-efficient, real-time data analytics. We develop a cloud-configurable deep neural network (DNN) intellectual property (IP) core with an adaptable hardware architecture that executes four different types of analysis on an edge gateway. Our developed IP core adaptively switches from one architecture to another in only one clock cycle, based on the type of input features. The proposed IP core analyzes raw multimodal signals, such as ECG, PPG, and accelerometer data, to discover anomalies in critically ill patients and their surroundings. We have validated the robustness of our developed model by comparing it with benchmark machine learning models and their previous implementations. The results show that our adaptive DNN model obtains a software accuracy of 99.2% for ECG, 91.4% for PPG, 95% for activity classification, and 98.7% for smoke detection under a five-fold cross-validation strategy. Three versions of the adaptive DNN IP core (8-bit, 16-bit, 24-bit) are implemented on SoC/FPGA and compared to study the effect of bit precision on accuracy, resource utilization, and power consumption. The 16-bit adaptive DNN IP core requires 680 nanoseconds and consumes 309 milliwatts per inference, achieving a throughput of 1.47 mega samples per second. Our analysis shows that decentralizing intelligence in the IP core reduces data size by 96.25% to 98.75%. This flexible IP core achieves significant power and resource-utilization savings compared to independent implementations, without compromising latency or throughput.
ISSN:2169-3536