First attempt to build realistic driving scenes using video-to-video synthesis in OpenDS framework

Bibliographic Details
Main Authors: Song, Zili; Wang, Shuolei; Kong, Weikai; Peng, Xiangjun; Sun, Xu
Format: Book Section
Language: English
Published: Association for Computing Machinery, 2019
Online Access: https://eprints.nottingham.ac.uk/60666/
Description
Summary: Existing programmable simulators enable researchers to customize driving scenarios for in-lab automotive driver studies. However, software-based simulators for cognitive research generate and maintain their scenes with 3D engines, whose renderings are not sufficiently realistic and may therefore degrade users' experience. A critical question, then, is how to make simulated scenes look like real-world ones. In this paper, we introduce a first step: applying video-to-video synthesis, a deep learning approach, within the OpenDS framework, an open-source driving simulator, to present simulated scenes as realistically as possible. Off-line evaluations demonstrated promising results, and our future work will focus on integrating the two components appropriately to build a close-to-reality, real-time driving simulator.
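To illustrate the pipeline the summary describes, the sketch below shows one plausible shape of the approach: the simulator exports per-frame semantic label maps, and a sequential video-to-video generator translates each labelled frame into a photorealistic RGB frame, conditioned on the previously generated frame for temporal coherence. This is not the authors' code; the `TinyVid2VidGenerator` class is a toy stand-in for a real pretrained generator (such as NVIDIA's vid2vid), and the assumption that OpenDS can export label maps is ours.

```python
# Minimal sketch (assumed pipeline, not the paper's implementation):
# simulator label maps -> sequential generator -> photorealistic frames.
import torch
import torch.nn as nn


class TinyVid2VidGenerator(nn.Module):
    """Toy stand-in for a real video-to-video generator.

    Takes the current semantic label map plus the previously generated
    RGB frame and outputs the next photorealistic RGB frame.
    """

    def __init__(self, num_classes: int = 20):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_classes + 3, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, kernel_size=3, padding=1),
            nn.Tanh(),  # RGB values in [-1, 1]
        )

    def forward(self, label_map: torch.Tensor, prev_frame: torch.Tensor) -> torch.Tensor:
        # Condition on the previous output to keep the clip temporally smooth.
        return self.net(torch.cat([label_map, prev_frame], dim=1))


def synthesize_clip(label_maps: torch.Tensor, generator: nn.Module) -> torch.Tensor:
    """Translate a clip of label maps (T, C, H, W) into RGB frames (T, 3, H, W)."""
    frames = []
    # Black frame stands in as the "previous" frame for the first step.
    prev = torch.zeros(1, 3, *label_maps.shape[-2:])
    for label_map in label_maps:
        prev = generator(label_map.unsqueeze(0), prev)
        frames.append(prev)
    return torch.cat(frames, dim=0)


if __name__ == "__main__":
    gen = TinyVid2VidGenerator()
    # Dummy clip: 8 one-hot label maps at 128x128, as a simulator might export.
    labels = torch.zeros(8, 20, 128, 128)
    labels[:, 0] = 1.0  # every pixel labelled class 0 ("road") for the dry run
    clip = synthesize_clip(labels, gen)
    print(clip.shape)  # torch.Size([8, 3, 128, 128])
```

In a real integration, the per-frame generator call would have to run inside the simulator's render budget, which is exactly the real-time constraint the summary flags as future work.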