
Reinforcement Learning Agent under Partial Observability for Traffic Light Control in Presence of Gridlocks

19 pages. Published: August 13, 2019

Abstract

Bangkok is notorious for chronic traffic congestion caused by rapid urbanization and haphazard city planning. The Sathorn Road network is one of the most critical areas, where gridlocks are a regular occurrence during rush hours. This stems from the high traffic demand imposed by the dense geographical placement of three large educational institutions, combined with insufficient link capacity and strict routes. Current solutions rely heavily on human traffic control expertise to prevent and disentangle gridlocks by consecutively releasing each queue-length spillback through inter-junction coordination. A calibrated dataset of the Sathorn Road network for the microscopic road traffic simulation package SUMO (Simulation of Urban MObility) is provided by the Chula-Sathorn SUMO Simulator (Chula-SSS). In this paper, we aim to use the Chula-SSS dataset, extended with additional vehicle flows and gridlocks, to further optimize the current traffic signal control policies with reinforcement learning by an artificial agent. Reinforcement learning has been successful in a variety of domains over the past few years. While a number of studies exist on reinforcement learning for adaptive traffic light control, they often lack pragmatic considerations for deployment in the physical world, especially for traffic system infrastructure in developing countries, which is constrained by economic factors. The resulting limitation, that the agent can only partially observe the whole network state at any given time, is unavoidable and cannot be overlooked. Under such partial observability constraints, this paper reports an investigation of applying an Ape-X Deep Q-Network agent at the critical junction during the morning rush hours from 6 AM to 9 AM, in which gridlocks occasionally occur in practice.
The results demonstrate the agent's ability to learn despite the physical limitations of traffic light control at the considered intersection within the Sathorn gridlock area. This suggests that further investigation of the agent's applicability to mitigating complex interconnected gridlocks is a promising direction for future work.

Keyphrases: gridlock, partial observability, reinforcement learning, traffic light control

In: Melanie Weber, Laura Bieker-Walz, Robert Hilbrich and Michael Behrisch (editors). SUMO User Conference 2019, vol 62, pages 29--47

Links:
BibTeX entry
@inproceedings{SUMO2019:Reinforcement_Learning_Agent_under,
  author    = {Thanapapas Horsuwan and Chaodit Aswakul},
  title     = {Reinforcement Learning Agent under Partial Observability for Traffic Light Control in Presence of Gridlocks},
  booktitle = {SUMO User Conference 2019},
  editor    = {Melanie Weber and Laura Bieker-Walz and Robert Hilbrich and Michael Behrisch},
  series    = {EPiC Series in Computing},
  volume    = {62},
  pages     = {29--47},
  year      = {2019},
  publisher = {EasyChair},
  bibsource = {EasyChair, https://easychair.org},
  issn      = {2398-7340},
  url       = {https://easychair.org/publications/paper/nsT5},
  doi       = {10.29007/bdgn}}