Efficient Task Scheduling and Load Balancing in Fog Computing for Crucial Healthcare Through Deep Reinforcement Learning

Bibliographic Details
Main Authors: Prashanth Choppara, Bommareddy Lokesh
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/10876121/
Description
Summary: In healthcare, real-time decision making is crucial for ensuring timely and accurate patient care. However, traditional computing infrastructures, for all their wide-ranging capabilities, suffer from inherent latency that compromises the efficiency of time-sensitive medical applications. This paper explores the potential of fog computing to address this challenge, proposing a new framework that uses deep reinforcement learning (DRL) to advance task scheduling in critical healthcare. The paper addresses the limitations of cloud computing systems and proposes a fog computing architecture in their place to support low-latency healthcare applications. This architecture reduces transmission latency by placing processing nodes close to the source of data generation, namely IoT-enabled healthcare devices. The foundation of this approach is a DRL model designed to dynamically optimize the distribution of computational tasks across fog nodes, improving both data throughput and operational response times. The effectiveness of the proposed DRL-based fog computing model is validated through a series of simulations in the SimPy environment. These simulations recreate diverse healthcare scenarios, ranging from continuous patient monitoring systems to critical emergency response applications, providing a rich framework for testing the model's real-time processing capabilities. The DRL algorithm is fine-tuned and extensively evaluated in these scenarios to show how it schedules and prioritizes tasks according to their urgency and resource demand. By dynamically learning from real-time system states and optimizing task allocation to minimize delays, the DRL model reduces the makespan by up to 30% compared with traditional scheduling approaches. Comparative performance analysis indicated a 30% reduction in task completion times, a 40% reduction in operational latency, and a 25% improvement in fault tolerance relative to these baselines. The flexibility of the DRL model is further demonstrated through its application to other real-time data processing contexts, including industrial automation and smart traffic systems.
ISSN: 2169-3536
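To make the scheduling idea in the abstract concrete, the sketch below pairs a SimPy queueing simulation with a tabular Q-learning agent that assigns incoming tasks to fog nodes. This is a minimal illustration under assumed parameters: the node speeds, arrival rate, queue-length state encoding, and negative-completion-time reward are all invented here, and the tabular agent stands in for the paper's DRL model, whose details this record does not specify.

```python
# Illustrative sketch (not the authors' implementation): a SimPy fog
# simulation in which a tabular Q-learning agent picks the fog node for
# each arriving healthcare task. All parameters below are assumptions.
import random
from collections import defaultdict

import simpy

NUM_NODES = 3
NODE_SPEED = [1.0, 0.6, 0.4]   # assumed relative processing speeds
ARRIVAL_MEAN = 0.5             # assumed mean inter-arrival time
EPSILON, ALPHA, GAMMA = 0.1, 0.1, 0.9

# Q-values per (state, node); unseen states start at zero.
q_table = defaultdict(lambda: [0.0] * NUM_NODES)

def state_of(nodes):
    """Discretize the system state as capped per-node queue lengths."""
    return tuple(min(len(n.queue), 5) for n in nodes)

def choose_node(state):
    """Epsilon-greedy action selection over the fog nodes."""
    if random.random() < EPSILON:
        return random.randrange(NUM_NODES)
    q = q_table[state]
    return q.index(max(q))

def task(env, nodes, size):
    state = state_of(nodes)
    action = choose_node(state)
    start = env.now
    with nodes[action].request() as req:
        yield req                                      # queue at chosen node
        yield env.timeout(size / NODE_SPEED[action])   # processing time
    # Reward: negative completion time, so shorter makespan scores higher.
    reward = -(env.now - start)
    next_state = state_of(nodes)
    q = q_table[state]
    q[action] += ALPHA * (reward + GAMMA * max(q_table[next_state]) - q[action])

def generator(env, nodes):
    """Emit tasks with exponential inter-arrival times and random sizes."""
    while True:
        yield env.timeout(random.expovariate(1.0 / ARRIVAL_MEAN))
        env.process(task(env, nodes, random.uniform(0.2, 1.0)))

env = simpy.Environment()
nodes = [simpy.Resource(env, capacity=1) for _ in range(NUM_NODES)]
env.process(generator(env, nodes))
env.run(until=200)
print("learned Q-values for empty-queue state:", q_table[(0, 0, 0)])
```

In a full experiment along the lines the abstract describes, the tabular agent would be replaced by a deep network (hence DRL), and the reward would also fold in task urgency and fault-tolerance terms rather than completion time alone.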