Deep reinforcement learning for dynamic vehicle routing with demand and traffic uncertainty


Bibliographic Details
Main Authors: Shirali Kadyrov, Azamkhon Azamov, Yelbek Abdumajitov, Cemil Turan
Format: Article
Language: English
Published: Elsevier, 2025-12-01
Series: Operations Research Perspectives
Subjects: Deep reinforcement learning; Vehicle Routing Problem; Demand uncertainty; Traffic uncertainty; Graph neural networks; Proximal policy optimization
Online Access: http://www.sciencedirect.com/science/article/pii/S2214716025000272
Collection: DOAJ
Description:
The capacitated vehicle routing problem with dynamic demand and traffic conditions presents significant challenges in logistics and supply chain optimization. Traditional methods often fail to adapt to real-time uncertainties in customer demand and traffic patterns or scale to large problem instances. In this work, we propose a deep reinforcement learning framework to learn adaptive routing policies for dynamic capacitated vehicle routing problem environments with stochastic demand and traffic. Our approach integrates graph neural networks to encode spatial problem structure and proximal policy optimization to train robust policies under both demand and traffic uncertainty. Experiments on synthetic grid-based routing environments show that our method outperforms classical heuristics and greedy baselines in minimizing travel cost while maintaining feasibility. The learned policies generalize to unseen demand and traffic scenarios and scale to larger graphs than those seen during training. Our results highlight the potential of deep reinforcement learning for real-world dynamic routing problems where both demand and traffic evolve unpredictably.
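The problem setting the abstract describes (capacity constraints, demand realized on arrival, travel times perturbed by traffic) can be sketched as a toy environment with one of the greedy baselines it mentions. Everything below is an illustrative assumption for readers unfamiliar with dynamic CVRP, not the article's implementation: class names, parameter ranges, and the restock rule are invented for this sketch.

```python
# Toy dynamic CVRP: stochastic demand and traffic, single vehicle.
# Illustrative sketch only; not the authors' environment or policy.
import math
import random

class DynamicCVRP:
    """Demands are only realized when the vehicle arrives, and each travel
    leg is scaled by a freshly sampled traffic factor."""

    def __init__(self, n_customers=8, capacity=30, seed=0):
        self.rng = random.Random(seed)
        self.depot = (0.0, 0.0)
        self.customers = [(self.rng.uniform(-10, 10), self.rng.uniform(-10, 10))
                          for _ in range(n_customers)]
        # Expected demands; the realized value is perturbed on arrival.
        self.expected = [self.rng.randint(1, 9) for _ in range(n_customers)]
        self.capacity = capacity

    def _dist(self, a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def travel_cost(self, a, b):
        # Traffic uncertainty: multiplicative factor resampled per leg.
        return self._dist(a, b) * self.rng.uniform(1.0, 1.5)

    def realized_demand(self, i):
        # Demand uncertainty: noise around the expected value, at least 1.
        return max(1, self.expected[i] + self.rng.randint(-2, 2))

def greedy_rollout(env):
    """Nearest-feasible-customer baseline; returns (total cost, visit order)."""
    pos, load, cost, order = env.depot, env.capacity, 0.0, []
    unserved = set(range(len(env.customers)))
    while unserved:
        feasible = [i for i in unserved if env.expected[i] <= load]
        if not feasible:                      # no expected-feasible stop: restock
            cost += env.travel_cost(pos, env.depot)
            pos, load = env.depot, env.capacity
            continue
        i = min(feasible, key=lambda j: env._dist(pos, env.customers[j]))
        cost += env.travel_cost(pos, env.customers[i])
        pos = env.customers[i]
        load -= env.realized_demand(i)
        if load < 0:                          # realized demand exceeded the load:
            cost += 2 * env.travel_cost(pos, env.depot)  # emergency round trip,
            load = env.capacity               # then treat the stop as served
        order.append(i)
        unserved.remove(i)
    cost += env.travel_cost(pos, env.depot)   # return to depot
    return cost, order
```

A learned policy in the article's framework would replace `greedy_rollout`'s nearest-customer rule with an action sampled from a GNN-encoded state; the environment dynamics (stochastic demand realization, per-leg traffic factors) are what make the fixed heuristic suboptimal.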
ISSN: 2214-7160
DOI: 10.1016/j.orp.2025.100351 (Operations Research Perspectives, Vol. 15, Article 100351)
Record ID: doaj-art-8cadfdbc7a0148f499d38d1fc0e9fa27
Institution: Kabale University
Author Affiliations:
Shirali Kadyrov (corresponding author), Azamkhon Azamov, Yelbek Abdumajitov: New Uzbekistan University, Movarounnahr 1, Tashkent, 100000, Uzbekistan
Cemil Turan: SDU University, Abylai Khan 1/1, Kaskelen, 040900, Kazakhstan
work_keys_str_mv AT shiralikadyrov deepreinforcementlearningfordynamicvehicleroutingwithdemandandtrafficuncertainty
AT azamkhonazamov deepreinforcementlearningfordynamicvehicleroutingwithdemandandtrafficuncertainty
AT yelbekabdumajitov deepreinforcementlearningfordynamicvehicleroutingwithdemandandtrafficuncertainty
AT cemilturan deepreinforcementlearningfordynamicvehicleroutingwithdemandandtrafficuncertainty