Joint optimization method of intelligent service arrangement and computing-networking resource allocation for MEC


Saved in:
Bibliographic Details
Main Authors: Yun LI, Qian GAO, Zhixiu YAO, Shichao XIA, Jishen LIANG
Format: Article
Language: Chinese (zho)
Published: Editorial Department of Journal on Communications 2023-07-01
Series: Tongxin xuebao
Subjects:
Online Access: http://www.joconline.com.cn/zh/article/doi/10.11959/j.issn.1000-436x.2023125/
Description
Summary: To solve the problems of inefficient network service caching and computing-networking resource allocation caused by task differentiation, highly dynamic network environments, and decentralized computing-networking resource deployment in edge networks, a decentralized service arrangement and computing offloading model for mobile edge computing was investigated and established. Considering multidimensional resource constraints, e.g., computing power, storage, and bandwidth, and with the objective of minimizing task processing latency, the joint optimization of service caching and computing-networking resource allocation was formulated as a partially observable Markov decision process. Considering the temporal dependency of service requests and their coupling with service caching, a long short-term memory network was introduced to capture time-related network state information. Then, based on recurrent multi-agent deep reinforcement learning, a distributed service arrangement and resource allocation algorithm was proposed to autonomously decide service caching and computing-networking resource allocation strategies. Simulation results demonstrate that the proposed algorithm achieves significant performance improvements in terms of cache hit rate and task processing latency.
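The recurrent decision-making the summary describes (an LSTM capturing temporal request patterns, feeding a per-agent policy that scores which service to cache) can be sketched roughly as follows. This is a minimal illustrative sketch under stated assumptions, not the authors' implementation: the class name, dimensions, the NumPy LSTM cell, and the softmax caching head are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RecurrentCachePolicy:
    """Toy recurrent policy for one edge agent: a single LSTM cell
    over observed per-service request rates, followed by a softmax
    head that scores which service to cache next. All names and
    sizes are illustrative, not from the paper."""

    def __init__(self, n_services, hidden=16):
        self.n = n_services
        self.h = hidden
        d = n_services + hidden
        # one weight matrix / bias per LSTM gate: input, forget, output, cell
        self.W = {g: rng.normal(0.0, 0.1, (hidden, d)) for g in "ifoc"}
        self.b = {g: np.zeros(hidden) for g in "ifoc"}
        self.Wp = rng.normal(0.0, 0.1, (n_services, hidden))  # policy head

    def step(self, obs, state):
        """One decision step: update (h, c) from the new observation,
        return a probability distribution over caching actions."""
        h, c = state
        z = np.concatenate([obs, h])           # observation + recurrent state
        i = sigmoid(self.W["i"] @ z + self.b["i"])
        f = sigmoid(self.W["f"] @ z + self.b["f"])
        o = sigmoid(self.W["o"] @ z + self.b["o"])
        g = np.tanh(self.W["c"] @ z + self.b["c"])
        c = f * c + i * g                      # cell state carries history
        h = o * np.tanh(c)
        logits = self.Wp @ h
        probs = np.exp(logits - logits.max())  # numerically stable softmax
        probs /= probs.sum()
        return probs, (h, c)

# roll the policy over a short window of observed request rates
policy = RecurrentCachePolicy(n_services=4)
state = (np.zeros(16), np.zeros(16))
for t in range(5):
    obs = rng.random(4)                # per-service request rates at step t
    probs, state = policy.step(obs, state)
action = int(np.argmax(probs))         # service selected for caching
```

In the paper's full scheme each agent's recurrent state would stand in for the unobservable global network state of the POMDP, and the weights would be trained with multi-agent deep reinforcement learning rather than left random as here.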
ISSN:1000-436X