Development of deep-learning-based autonomous agents for low-speed maneuvering in Unity
This study provides a systematic analysis of the resource-consuming training of deep reinforcement-learning (DRL) agents for simulated low-speed automated driving (AD). In Unity, we established two case studies: garage parking and navigating an obstacle-dense area. Our analysis involves training a path-planning agent with real-time-only sensor information. The study addresses research questions insufficiently covered in the literature, exploring curriculum learning (CL), agent generalization (knowledge transfer), computation distribution (CPU vs. GPU), and mapless navigation. CL proved necessary for the garage scenario and beneficial for obstacle avoidance. It involved adjustments at different stages, including terminal conditions, environment complexity, and reward-function hyperparameters, guided by their evolution across multiple training attempts. Fine-tuning the simulation tick and decision-period parameters was crucial for effective training. Abstracting high-level concepts (e.g., obstacle avoidance) requires training the agent in environments that are sufficiently complex in terms of the number of obstacles. While blogs and forums discuss training machine-learning models in Unity, scientific articles on DRL agents for AD remain scarce. Since agent development requires considerable training time and demanding procedures, there is a growing need to support such research through scientific means. In addition to our findings, we contribute to the R&D community by releasing our environment as open source.
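The record itself contains no code, but the training knobs the abstract highlights (simulation speed, curriculum-style environment complexity, decision period) map naturally onto Unity's ML-Agents Python API. The sketch below is illustrative only and assumes an ML-Agents setup; the build name `ParkingEnv` and the parameter key `num_obstacles` are hypothetical, not taken from the paper, and the decision period is configured on the Unity (C#) side via the DecisionRequester component.

```python
# Minimal sketch (not the authors' code): driving a Unity build from Python
# with mlagents_envs, setting the simulation time scale and a curriculum-style
# environment parameter before rolling out placeholder actions.
from mlagents_envs.environment import UnityEnvironment
from mlagents_envs.side_channel.engine_configuration_channel import EngineConfigurationChannel
from mlagents_envs.side_channel.environment_parameters_channel import EnvironmentParametersChannel

engine_cfg = EngineConfigurationChannel()
env_params = EnvironmentParametersChannel()

# Hypothetical build name; replace with the actual compiled Unity environment.
env = UnityEnvironment(file_name="ParkingEnv",
                       side_channels=[engine_cfg, env_params])

# Run physics faster than real time; the simulation tick itself
# (Time.fixedDeltaTime) is set inside the Unity project.
engine_cfg.set_configuration_parameters(time_scale=20.0)

# Curriculum-style lesson: start with few obstacles, raise it in later stages.
# The environment must read this key, e.g. via Academy.Instance.EnvironmentParameters.
env_params.set_float_parameter("num_obstacles", 2.0)

env.reset()
behavior_name = list(env.behavior_specs)[0]
spec = env.behavior_specs[behavior_name]

for _ in range(100):
    decision_steps, terminal_steps = env.get_steps(behavior_name)
    if len(decision_steps) > 0:
        # Placeholder policy: random actions instead of a trained DRL policy.
        action = spec.action_spec.random_action(len(decision_steps))
        env.set_actions(behavior_name, action)
    env.step()

env.close()
```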
Main Authors: | Riccardo Berta, Luca Lazzaroni, Alessio Capello, Marianna Cossu, Luca Forneris, Alessandro Pighetti, Francesco Bellotti |
---|---|
Format: | Article |
Language: | English |
Published: | Tsinghua University Press, 2024-09-01 |
Series: | Journal of Intelligent and Connected Vehicles |
Subjects: | automated driving; autonomous agents; deep reinforcement learning; curriculum learning; modeling and simulation |
Online Access: | https://www.sciopen.com/article/10.26599/JICV.2023.9210039 |
author | Riccardo Berta; Luca Lazzaroni; Alessio Capello; Marianna Cossu; Luca Forneris; Alessandro Pighetti; Francesco Bellotti
author_sort | Riccardo Berta
collection | DOAJ |
format | Article |
id | doaj-art-256d948e213b4d94be3c15fae9c7f11e |
institution | Kabale University |
issn | 2399-9802 |
language | English |
publishDate | 2024-09-01 |
publisher | Tsinghua University Press |
record_format | Article |
series | Journal of Intelligent and Connected Vehicles |
spelling | Tsinghua University Press, Journal of Intelligent and Connected Vehicles (ISSN 2399-9802), 2024-09-01, vol. 7, no. 3, pp. 229-244, doi: 10.26599/JICV.2023.9210039. All seven authors are affiliated with the Electrical, Electronics and Telecommunication Engineering and Naval Architecture Department (DITEN), University of Genoa, Genoa 16145, Italy.
title | Development of deep-learning-based autonomous agents for low-speed maneuvering in Unity |
topic | automated driving autonomous agents deep reinforcement learning curriculum learning modeling and simulation |
url | https://www.sciopen.com/article/10.26599/JICV.2023.9210039 |