A Multi-Robot Collaborative Exploration Method Based on Deep Reinforcement Learning and Knowledge Distillation
Main Authors: | , , |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2025-01-01 |
Series: | Mathematics |
Subjects: | |
Online Access: | https://www.mdpi.com/2227-7390/13/1/173 |
Summary: | Multi-robot collaborative autonomous exploration in communication-constrained scenarios is essential in areas such as search and rescue. During exploration, the robot team must minimize redundant scanning of the environment. To this end, we propose to treat the robot team as a single agent and train a centrally executed policy network with an improved Soft Actor-Critic (SAC) deep reinforcement learning algorithm. We then use knowledge distillation to transform this centralized policy network into distributed networks suited to communication-constrained scenarios. The proposed method offers an innovative solution to the multi-robot decision-making problem. Experiments in simulated environments show that the method adapts to environments of various sizes and outperforms current mainstream methods. |
ISSN: | 2227-7390 |
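
The summary describes a centralized-training, distributed-execution pipeline: an improved SAC algorithm first trains one centrally executed policy for the whole team, which is then distilled into per-robot networks for communication-constrained execution. The sketch below illustrates only the distillation step, under assumptions not stated in this record (PyTorch, a discrete per-robot action space, softmax policy heads, and a KL distillation loss); the paper's actual network architectures and improved-SAC training loop are not reproduced here.

```python
# Hedged sketch: distilling a centralized (teacher) policy into per-robot
# (student) policies with a KL loss. Team size, observation/action dimensions,
# and network shapes are illustrative assumptions, not values from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_ROBOTS = 3        # assumed team size
GLOBAL_OBS_DIM = 64   # assumed dimension of the merged team observation
LOCAL_OBS_DIM = 32    # assumed dimension of one robot's local observation
NUM_ACTIONS = 8       # assumed discrete exploration-goal choices per robot

def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, out_dim))

# Teacher: centralized policy over the whole team's observation
# (assumed to have been trained beforehand with the improved SAC).
teacher = mlp(GLOBAL_OBS_DIM, NUM_ROBOTS * NUM_ACTIONS)
teacher.eval()

# Students: one lightweight policy per robot, conditioned only on local observations.
students = nn.ModuleList(mlp(LOCAL_OBS_DIM, NUM_ACTIONS) for _ in range(NUM_ROBOTS))
optimizer = torch.optim.Adam(students.parameters(), lr=1e-3)

def distill_step(global_obs, local_obs):
    """One distillation update.

    global_obs: (batch, GLOBAL_OBS_DIM) observation seen by the centralized teacher.
    local_obs:  (batch, NUM_ROBOTS, LOCAL_OBS_DIM) per-robot observations.
    """
    with torch.no_grad():
        # Teacher action distribution for each robot, split from the joint output.
        teacher_logits = teacher(global_obs).view(-1, NUM_ROBOTS, NUM_ACTIONS)
        teacher_probs = F.softmax(teacher_logits, dim=-1)

    loss = 0.0
    for i, student in enumerate(students):
        student_log_probs = F.log_softmax(student(local_obs[:, i]), dim=-1)
        # KL(teacher || student), averaged over the batch.
        loss = loss + F.kl_div(student_log_probs, teacher_probs[:, i], reduction="batchmean")

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage with random tensors standing in for replayed exploration states.
g = torch.randn(16, GLOBAL_OBS_DIM)
l = torch.randn(16, NUM_ROBOTS, LOCAL_OBS_DIM)
print(distill_step(g, l))
```

The design point this sketch captures is that each student sees only its own local observation, so after distillation every robot can act without communicating the full team state; how the paper merges observations and structures its improved SAC objective is described in the article itself.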