Knowledge Distillation-Enhanced Behavior Transformer for Decision-Making of Autonomous Driving
Autonomous driving has demonstrated impressive driving capabilities, with behavior decision-making playing a crucial role as a bridge between perception and control. Imitation Learning (IL) and Reinforcement Learning (RL) have introduced innovative approaches to behavior decision-making in autonomous driving, but challenges remain. On one hand, RL’s policy networks often lack sufficient reasoning ability to make optimal decisions in highly complex and stochastic environments. On the other hand, the complexity of these environments leads to low sample efficiency in RL, making it difficult to efficiently learn driving policies. To address these challenges, we propose an innovative Knowledge Distillation-Enhanced Behavior Transformer (KD-BeT) framework. Building on the successful application of Transformers in large language models, we introduce the Behavior Transformer as the policy network in RL, using observation–action history as input for sequential decision-making, thereby leveraging the Transformer’s contextual reasoning capabilities. Using a teacher–student paradigm, we first train a small-capacity teacher model quickly and accurately through IL, then apply knowledge distillation to accelerate RL’s training efficiency and performance. Simulation results demonstrate that KD-BeT maintains fast convergence and high asymptotic performance during training. In the CARLA NoCrash benchmark tests, KD-BeT outperforms other state-of-the-art methods in terms of traffic efficiency and driving safety, offering a novel solution for addressing real-world autonomous driving tasks.
Main Authors: Rui Zhao, Yuze Fan, Yun Li, Dong Zhang, Fei Gao, Zhenhai Gao, Zhengcai Yang
Format: Article
Language: English
Published: MDPI AG, 2025-01-01
Series: Sensors
Subjects: imitation learning; reinforcement learning; behavior transformer; autonomous driving; knowledge distillation; decision-making
Online Access: https://www.mdpi.com/1424-8220/25/1/191
id: doaj-art-d112e6a2922c4c40aff20308dd51feb5
collection: DOAJ
institution: Kabale University
format: Article
language: English
issn: 1424-8220
doi: 10.3390/s25010191
publisher: MDPI AG
publishDate: 2025-01-01
series: Sensors
citation: Sensors, vol. 25, no. 1, article 191, 2025
author: Rui Zhao; Yuze Fan; Yun Li; Dong Zhang; Fei Gao; Zhenhai Gao; Zhengcai Yang
author affiliations:
- Rui Zhao, Yuze Fan, Fei Gao, Zhenhai Gao: College of Automotive Engineering, Jilin University, Changchun 130025, China
- Yun Li: Graduate School of Information and Science Technology, The University of Tokyo, Tokyo 113-8654, Japan
- Dong Zhang: Department of Mechanical and Aerospace Engineering, Brunel University London, Uxbridge UB8 3PH, UK
- Zhengcai Yang: Key Laboratory of Automotive Power Train and Electronics, Hubei University of Automotive Technology, Shiyan 442002, China
description: Autonomous driving has demonstrated impressive driving capabilities, with behavior decision-making playing a crucial role as a bridge between perception and control. Imitation Learning (IL) and Reinforcement Learning (RL) have introduced innovative approaches to behavior decision-making in autonomous driving, but challenges remain. On one hand, RL’s policy networks often lack sufficient reasoning ability to make optimal decisions in highly complex and stochastic environments. On the other hand, the complexity of these environments leads to low sample efficiency in RL, making it difficult to efficiently learn driving policies. To address these challenges, we propose an innovative Knowledge Distillation-Enhanced Behavior Transformer (KD-BeT) framework. Building on the successful application of Transformers in large language models, we introduce the Behavior Transformer as the policy network in RL, using observation–action history as input for sequential decision-making, thereby leveraging the Transformer’s contextual reasoning capabilities. Using a teacher–student paradigm, we first train a small-capacity teacher model quickly and accurately through IL, then apply knowledge distillation to accelerate RL’s training efficiency and performance. Simulation results demonstrate that KD-BeT maintains fast convergence and high asymptotic performance during training. In the CARLA NoCrash benchmark tests, KD-BeT outperforms other state-of-the-art methods in terms of traffic efficiency and driving safety, offering a novel solution for addressing real-world autonomous driving tasks.
topic: imitation learning; reinforcement learning; behavior transformer; autonomous driving; knowledge distillation; decision-making
url: https://www.mdpi.com/1424-8220/25/1/191
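The teacher–student knowledge-distillation step described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the function names, the temperature value, and the choice of a softened KL objective between teacher and student action distributions are assumptions, shown only to make the idea concrete.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over action logits."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) over softened action distributions.

    A standard distillation objective: the student policy is pushed toward
    the teacher's (IL-trained) action distribution. Scaled by T^2 so the
    gradient magnitude stays comparable across temperatures.
    """
    p = softmax(teacher_logits, temperature)                # teacher soft targets
    log_p = np.log(p + 1e-12)
    log_q = np.log(softmax(student_logits, temperature) + 1e-12)
    kl = (p * (log_p - log_q)).sum(axis=-1)                 # per-state KL
    return (temperature ** 2) * kl.mean()

# Toy example: 4 discrete driving actions, batch of 2 states.
teacher = np.array([[2.0, 0.5, -1.0, 0.0],
                    [0.0, 3.0, 0.0, -2.0]])
aligned = distillation_loss(teacher, teacher)    # identical policies -> 0
diverged = distillation_loss(teacher, -teacher)  # opposed policies -> large
print(aligned, diverged)
```

In practice such a term would be added to the RL policy loss with a weighting coefficient, so the student benefits from the teacher's guidance early on while still optimizing the environment reward.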