First attempt to build realistic driving scenes using video-to-video synthesis in OpenDS framework

Existing programmable simulators enable researchers to customize different driving scenarios to conduct in-lab automotive driver simulations. However, software-based simulators for cognitive research generate and maintain their scenes with the support of 3D engines, which may affect users' experiences to a certain degree since they are not sufficiently realistic. A critical open issue is therefore how to turn such scenes into real-world-like ones. In this paper, we introduce the first step in utilizing video-to-video synthesis, a deep learning approach, in the OpenDS framework, an open-source driving simulator, to present simulated scenes as realistically as possible. Off-line evaluations demonstrated promising results, and our future work will focus on how to merge the two appropriately to build a close-to-reality, real-time driving simulator.
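The pipeline the abstract describes pairs a simulator's rendered output with a learned video-to-video model. The following is a minimal sketch of that data flow only, not the paper's method: `simulator_semantic_frame` and `synthesize_frame` are hypothetical stand-ins (the first for an OpenDS render pass, the second for a trained vid2vid generator, which here is replaced by simple label colouring plus temporal blending).

```python
import numpy as np

def simulator_semantic_frame(h=256, w=256, n_classes=5, seed=0):
    # Stand-in for a simulator render pass: a per-pixel semantic
    # label map (road, building, car, ...) rather than a photoreal image.
    rng = np.random.default_rng(seed)
    return rng.integers(0, n_classes, size=(h, w), dtype=np.int64)

def synthesize_frame(label_map, prev_frame=None):
    # Hypothetical stand-in for a trained video-to-video generator:
    # maps a semantic label map (plus the previous output frame, for
    # temporal consistency) to an RGB frame. A real vid2vid model is a
    # learned network; here we just colour-code the labels.
    palette = np.array([[128, 64, 128],   # road
                        [70, 70, 70],     # building
                        [0, 0, 142],      # car
                        [107, 142, 35],   # vegetation
                        [70, 130, 180]])  # sky
    frame = palette[label_map].astype(np.float32)
    if prev_frame is not None:
        # Blend with the previous output to mimic temporal smoothing.
        frame = 0.8 * frame + 0.2 * prev_frame
    return frame

# Frame-by-frame synthesis loop over a short simulated clip.
prev = None
clip = []
for t in range(3):
    labels = simulator_semantic_frame(seed=t)
    prev = synthesize_frame(labels, prev)
    clip.append(prev)

print(len(clip), clip[0].shape)  # 3 frames of shape (256, 256, 3)
```

The point of the sketch is the loop structure: each synthesized frame conditions on the previous one, which is what distinguishes video-to-video synthesis from independent per-frame image translation.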


Bibliographic Details

Main Authors: Song, Zili; Wang, Shuolei; Kong, Weikai; Peng, Xiangjun; Sun, Xu
Format: Book Section
Language: English
Published: Association for Computing Machinery, 2019
Subjects: Video Synthesis; Driving Simulator; Machine Learning
Online Access: https://eprints.nottingham.ac.uk/60666/
Repository: Nottingham Research Data Repository, University of Nottingham Malaysia Campus
Citation: Song, Zili, Wang, Shuolei, Kong, Weikai, Peng, Xiangjun and Sun, Xu (2019) First attempt to build realistic driving scenes using video-to-video synthesis in OpenDS framework. In: Proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications: Adjunct Proceedings. Association for Computing Machinery, Utrecht, Netherlands, pp. 387-391. ISBN 9781450369206.
DOI: 10.1145/3349263.3351497 (http://dx.doi.org/10.1145/3349263.3351497)
Published: 21 September 2019; peer reviewed; licence: CC BY
Full text (PDF): https://eprints.nottingham.ac.uk/60666/1/First%20Attempt%20to%20Build%20Realistic%20Driving%20Scenes%20using%20Video-to-video%20Synthesis%20in%20OpenDS%20Framework.pdf