Optimization method of task uninstallation in mobile edge computing environment combining improved deep Q-learning and transmission learning
Abstract Traditional task uninstallation methods struggle to cope with continuous changes in the network environment and device status, so a more intelligent solution is urgently needed. This article studied the optimization method of task uninstallation in a mobile edge computing (MEC) en...
| Main Authors: | Lihong Zhao, Shuqin Wang, Xiaomei Ding |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Springer, 2024-12-01 |
| Series: | Discover Applied Sciences |
| Subjects: | Deep Q-network; Transmission learning; Mobile edge computing; Task uninstallation; Reinforcement learning |
| Online Access: | https://doi.org/10.1007/s42452-024-06396-x |
| _version_ | 1846112241495048192 |
|---|---|
| author | Lihong Zhao; Shuqin Wang; Xiaomei Ding |
| author_facet | Lihong Zhao; Shuqin Wang; Xiaomei Ding |
| author_sort | Lihong Zhao |
| collection | DOAJ |
| description | Abstract Traditional task uninstallation methods struggle to cope with continuous changes in the network environment and device status, so a more intelligent solution is urgently needed. This article studied the optimization of task uninstallation in the mobile edge computing (MEC) environment by combining improved deep Q-learning with transmission learning (TL). It first provided an overview of the basic concepts and optimization problems of task uninstallation in MEC environments, and then examined the application of improved deep Q-learning, namely the deep Q-network (DQN), to task uninstallation optimization. Next, this article designed the TL-DQN method by combining DQN and TL. The method uses the DQN algorithm to build a model that can handle complex state and action spaces; by introducing TL, the model can quickly transfer knowledge and adaptively optimize tasks in new environments, thereby improving the generalization ability and efficiency of the decision-making process. Experiments showed that TL-DQN converges faster than the other algorithms. For delay-sensitive users, the delay of the TL-DQN algorithm grew relatively slowly: when the task volume reached 10, the delay of TL-DQN was 7.97 s, 1.99 s lower than that of reinforcement learning. When the number of unloading tasks increased to 500, the energy consumption of TL-DQN was 554 J, 282 J lower than that required by the task uninstallation method built with reinforcement learning algorithms. The task uninstallation method studied in this article achieved significant results in reducing latency, improving energy efficiency, and adapting to dynamic network changes, giving users a more stable and reliable service experience. |
| format | Article |
| id | doaj-art-1c438f5cee3044cba42d29e5f55dfc7a |
| institution | Kabale University |
| issn | 3004-9261 |
| language | English |
| publishDate | 2024-12-01 |
| publisher | Springer |
| record_format | Article |
| series | Discover Applied Sciences |
| spelling | doaj-art-1c438f5cee3044cba42d29e5f55dfc7a2024-12-22T12:41:09ZengSpringerDiscover Applied Sciences3004-92612024-12-017111610.1007/s42452-024-06396-xOptimization method of task uninstallation in mobile edge computing environment combining improved deep Q-learning and transmission learningLihong Zhao0Shuqin Wang1Xiaomei Ding2Anhui Wenda University of Information EngineeringAnhui Wenda University of Information EngineeringAnhui Wenda University of Information EngineeringAbstract Traditional task uninstallation methods are difficult to cope with the continuous changes in network environment and device status, therefore a more intelligent solution is urgently needed. This article studied the optimization method of task uninstallation in mobile edge computing (MEC) environment, and studied the optimization method of task uninstallation in combination with improved deep Q-learning and transmission learning (TL). Firstly, it provided an overview of the basic concepts and optimization problems of task uninstallation in MEC environments, and then delved into the application of improved deep Q-learning in task uninstallation optimization, namely the application of deep Q-network (DQN) in task uninstallation optimization. Next, this article designed the TL-DQN method by combining DQN and TL. This method utilizes the DQN algorithm to construct a model that can handle complex states and action spaces. Meanwhile, by introducing TL, the model can quickly transfer knowledge and achieve adaptive optimization of tasks in new environments, thereby improving the generalization ability and efficiency of the decision-making process. Experiments have shown that TL-DQN performs better in convergence speed compared to other algorithms. For delay sensitive users, the delay growth of TL-DQN algorithm is relatively slow. When the task volume reached 10, the delay of TL-DQN was 7.97 s, which was 1.99 s lower than reinforcement learning. When the number of unloading tasks increased to 500, the energy consumption of TL-DQN was 554 J, which was 282 J lower than the energy consumption required by the task uninstallation method constructed by reinforcement learning algorithms. The task uninstallation method studied in this article has achieved significant results in reducing latency, improving energy efficiency, and adapting to network dynamic changes, bringing users a more stable and reliable service experience.https://doi.org/10.1007/s42452-024-06396-xDeep Q-networkTransmission learningMobile edge computingTask uninstallationReinforcement learning |
| spellingShingle | Lihong Zhao Shuqin Wang Xiaomei Ding Optimization method of task uninstallation in mobile edge computing environment combining improved deep Q-learning and transmission learning Discover Applied Sciences Deep Q-network Transmission learning Mobile edge computing Task uninstallation Reinforcement learning |
| title | Optimization method of task uninstallation in mobile edge computing environment combining improved deep Q-learning and transmission learning |
| title_full | Optimization method of task uninstallation in mobile edge computing environment combining improved deep Q-learning and transmission learning |
| title_fullStr | Optimization method of task uninstallation in mobile edge computing environment combining improved deep Q-learning and transmission learning |
| title_full_unstemmed | Optimization method of task uninstallation in mobile edge computing environment combining improved deep Q-learning and transmission learning |
| title_short | Optimization method of task uninstallation in mobile edge computing environment combining improved deep Q-learning and transmission learning |
| title_sort | optimization method of task uninstallation in mobile edge computing environment combining improved deep q learning and transmission learning |
| topic | Deep Q-network Transmission learning Mobile edge computing Task uninstallation Reinforcement learning |
| url | https://doi.org/10.1007/s42452-024-06396-x |
| work_keys_str_mv | AT lihongzhao optimizationmethodoftaskuninstallationinmobileedgecomputingenvironmentcombiningimproveddeepqlearningandtransmissionlearning AT shuqinwang optimizationmethodoftaskuninstallationinmobileedgecomputingenvironmentcombiningimproveddeepqlearningandtransmissionlearning AT xiaomeiding optimizationmethodoftaskuninstallationinmobileedgecomputingenvironmentcombiningimproveddeepqlearningandtransmissionlearning |
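
The record above describes the TL-DQN idea only at a high level: a deep Q-network chooses offloading actions from the observed state, and transfer learning reuses knowledge from one environment to adapt quickly to another. As a rough illustration of those two ingredients, the Python sketch below trains a tiny DQN on a synthetic offloading problem and then transfers its weights to warm-start learning in a second environment. The state layout, reward model, network size, and the `edge_speedup` parameter are assumptions made for this example; it is not the authors' implementation.

```python
# Minimal, illustrative sketch (not the paper's implementation): a small DQN
# that chooses between local execution and offloading in a synthetic MEC
# setting, followed by transfer-learning-style weight reuse to warm-start
# learning in a second environment. State layout, reward model, network size,
# and the edge_speedup parameter are assumptions made for this example.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM, N_ACTIONS = 4, 2   # state: [task size, channel gain, edge load, battery]; actions: 0 = local, 1 = offload
GAMMA, EPS, BATCH = 0.95, 0.1, 32


def make_qnet() -> nn.Module:
    """Small MLP mapping an offloading state to Q-values for the two actions."""
    return nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))


def reward(state: torch.Tensor, action: int, edge_speedup: float) -> float:
    """Toy reward: negative weighted sum of latency and energy (assumed model)."""
    task, channel, load, _battery = state.tolist()
    if action == 0:                                  # execute locally
        latency, energy = task, 0.5 * task
    else:                                            # transmit and execute at the edge
        latency = task / max(channel, 1e-3) + task / (edge_speedup * (1.0 - load) + 1e-3)
        energy = 0.1 * task / max(channel, 1e-3)
    return -(latency + 0.5 * energy)


def train(qnet: nn.Module, edge_speedup: float, steps: int = 300) -> None:
    """One-step DQN updates with experience replay and epsilon-greedy exploration."""
    opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)
    buffer = deque(maxlen=5000)
    for _ in range(steps):
        s = torch.rand(STATE_DIM)
        a = random.randrange(N_ACTIONS) if random.random() < EPS else int(qnet(s).argmax())
        r = reward(s, a, edge_speedup)
        s2 = torch.rand(STATE_DIM)                   # next state (drawn i.i.d. in this toy model)
        buffer.append((s, a, r, s2))
        if len(buffer) < BATCH:
            continue
        batch = random.sample(buffer, BATCH)
        states = torch.stack([b[0] for b in batch])
        actions = torch.tensor([b[1] for b in batch])
        rewards = torch.tensor([b[2] for b in batch])
        nexts = torch.stack([b[3] for b in batch])
        with torch.no_grad():                        # bootstrap target from the current network
            target = rewards + GAMMA * qnet(nexts).max(dim=1).values
        pred = qnet(states).gather(1, actions.unsqueeze(1)).squeeze(1)
        loss = F.mse_loss(pred, target)
        opt.zero_grad()
        loss.backward()
        opt.step()


# Train in a "source" MEC environment, then transfer the learned weights to
# warm-start training in a "target" environment with a slower edge server.
source = make_qnet()
train(source, edge_speedup=8.0)

target = make_qnet()
target.load_state_dict(source.state_dict())         # knowledge transfer via weight reuse
train(target, edge_speedup=3.0, steps=100)           # brief fine-tuning in the new environment
```

Copying `source.state_dict()` into the target network before fine-tuning is the simplest possible form of the knowledge transfer the abstract refers to; the paper's actual transfer mechanism may differ.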