Public transport route optimization with reinforcement learning

High transportation demand due to a large population has resulted in traffic congestion problems in cities, which public transport can help address. However, unbalanced passenger demands and traffic conditions can affect bus performance. The stop-skipping strategy distributes passenger demand effectively while minimizing bus operating costs, provided the operator can adapt to changes in passenger demands and traffic conditions. This project therefore proposes deep reinforcement learning-based public transport route optimization, in which an agent acquires the optimal strategy by interacting with a dynamic bus environment. The aim is to maximize passenger satisfaction while minimizing bus operator expenditure. The dynamic bus environment is designed around a bus optimization scheme comprising one express bus followed by one no-skip bus that serves passengers stranded by skipped stops. The reward function is formulated in terms of passenger demand, in-vehicle time, bus running time and passenger waiting time, and is used to train a double deep Q-network (DDQN) agent. Simulation results show that the agent can intelligently skip stations and outperforms the conventional method under different passenger distribution patterns. The DDQN approach performs best in the static passenger demand scenario, followed by the time-varying passenger demand scenario, and lastly the randomly distributed passenger demand scenario. Future studies should consider bus load constraints and other factors, such as bus utilization rate, to improve stop-skipping services for both passengers and operators. Real-life passenger data could also be incorporated into the DRL model for route optimization using Internet of Things (IoT) technology.
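The abstract describes a reward built from passenger demand, in-vehicle time, bus running time and passenger waiting time, which trains a double deep Q-network agent. The report's exact weights, state definition and network layout are not given here, so the sketch below is only an illustrative assumption of how such a reward term and a double-DQN target are typically combined; all names, coefficients and the two-action (serve/skip) setup are hypothetical.

```python
# Illustrative sketch only: weights, function names and the serve/skip action
# space are assumptions, not the thesis's published formulation.
import numpy as np

def stop_skipping_reward(boarded, in_vehicle_time, running_time, waiting_time,
                         w_demand=1.0, w_ivt=0.1, w_run=0.1, w_wait=0.2):
    """Reward served demand, penalize the three time costs (hypothetical weights,
    but built from the same four terms named in the abstract)."""
    return (w_demand * boarded
            - w_ivt * in_vehicle_time
            - w_run * running_time
            - w_wait * waiting_time)

def ddqn_target(reward, next_q_online, next_q_target, gamma=0.99, done=False):
    """Double DQN target: the online network selects the next action,
    the target network evaluates it, reducing over-estimation bias."""
    if done:
        return reward
    best_action = int(np.argmax(next_q_online))          # selection by online net
    return reward + gamma * next_q_target[best_action]   # evaluation by target net

# Example: a bus deciding whether to skip the next stop (action 0 = serve, 1 = skip).
r = stop_skipping_reward(boarded=12, in_vehicle_time=35.0,
                         running_time=18.0, waiting_time=60.0)
y = ddqn_target(r, next_q_online=np.array([0.4, 0.9]),
                next_q_target=np.array([0.5, 0.7]))
print(r, y)
```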


Bibliographic Details
Main Author: Tay, Bee Sim
Format: Final Year Project / Dissertation / Thesis
Published: 2023
Institution: Universiti Tunku Abdul Rahman
Subjects: TK Electrical engineering. Electronics. Nuclear engineering
Online Access: http://eprints.utar.edu.my/5721/
http://eprints.utar.edu.my/5721/1/ET_1804019_FYP_report_%2D_BEE_SIM_TAY.pdf