Face2Multi-modal: in-vehicle multi-modal predictors via facial expressions

Towards intelligent Human-Vehicle Interaction systems and innovative Human-Vehicle Interaction designs, drivers' in-vehicle physiological data has been explored as an essential data source. However, equipping drivers with multiple biosensors limits user-friendliness and is impractical during driving. The lack of a practical approach to accessing physiological data has hindered wider adoption of advanced biosignal-driven designs (e.g., monitoring systems) in practice. Hence, the demand for a user-friendly approach to measuring drivers' body status has grown. In this Work-In-Progress, we present Face2Multi-modal, an in-vehicle multi-modal data stream predictor driven by facial expressions only. More specifically, we explore the estimation of drivers' Heart Rate, Skin Conductance, and Vehicle Speed. We believe Face2Multi-modal provides a user-friendly alternative for acquiring drivers' physiological status and vehicle status, which could serve as a building block for many current or future personalized Human-Vehicle Interaction designs. More details and updates about Face2Multi-modal are available at https://github.com/unnc-ucc/Face2Multimodal/.


Bibliographic Details
Main Authors: Huang, Zhentao, Li, Rongze, Jin, Wangkai, Song, Zilin, Zhang, Yu, Peng, Xiangjun, Sun, Xu
Format: Book Section
Language: English
Published: 2020
Subjects: Human-Vehicle Interactions; Computer Vision; Ergonomics
Online Access:https://eprints.nottingham.ac.uk/63624/
author Huang, Zhentao
Li, Rongze
Jin, Wangkai
Song, Zilin
Zhang, Yu
Peng, Xiangjun
Sun, Xu
building Nottingham Research Data Repository
collection Online Access
description Towards intelligent Human-Vehicle Interaction systems and innovative Human-Vehicle Interaction designs, drivers' in-vehicle physiological data has been explored as an essential data source. However, equipping drivers with multiple biosensors limits user-friendliness and is impractical during driving. The lack of a practical approach to accessing physiological data has hindered wider adoption of advanced biosignal-driven designs (e.g., monitoring systems) in practice. Hence, the demand for a user-friendly approach to measuring drivers' body status has grown. In this Work-In-Progress, we present Face2Multi-modal, an in-vehicle multi-modal data stream predictor driven by facial expressions only. More specifically, we explore the estimation of drivers' Heart Rate, Skin Conductance, and Vehicle Speed. We believe Face2Multi-modal provides a user-friendly alternative for acquiring drivers' physiological status and vehicle status, which could serve as a building block for many current or future personalized Human-Vehicle Interaction designs. More details and updates about Face2Multi-modal are available at https://github.com/unnc-ucc/Face2Multimodal/.
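The description above specifies only the prediction task (facial expressions in; Heart Rate, Skin Conductance, and Vehicle Speed out), not how Face2Multi-modal is implemented. As a hedged illustration only, the minimal Python sketch below assumes a shared CNN backbone over in-cabin face crops with one regression head per predicted stream; the class name, ResNet-18 backbone, input size, and loss are hypothetical choices, not details taken from the paper.

    import torch
    import torch.nn as nn
    from torchvision import models

    class Face2MultiModalSketch(nn.Module):
        """Hypothetical multi-head regressor: face image -> three scalar streams."""
        def __init__(self):
            super().__init__()
            backbone = models.resnet18(weights=None)   # shared face-image feature extractor
            feat_dim = backbone.fc.in_features
            backbone.fc = nn.Identity()                # drop the ImageNet classification layer
            self.backbone = backbone
            # One scalar regression head per predicted data stream.
            self.heads = nn.ModuleDict({
                "heart_rate": nn.Linear(feat_dim, 1),
                "skin_conductance": nn.Linear(feat_dim, 1),
                "vehicle_speed": nn.Linear(feat_dim, 1),
            })

        def forward(self, faces):                      # faces: (N, 3, 224, 224) face crops
            feats = self.backbone(faces)
            return {name: head(feats) for name, head in self.heads.items()}

    # Toy usage: one joint training step with a summed MSE loss over the three streams.
    model = Face2MultiModalSketch()
    faces = torch.randn(4, 3, 224, 224)               # placeholder batch of face crops
    targets = {k: torch.randn(4, 1) for k in model.heads}
    preds = model(faces)
    loss = sum(nn.functional.mse_loss(preds[k], targets[k]) for k in targets)
    loss.backward()

The authors' actual backbone, head structure, and training objective may differ; the sketch only illustrates that a single face-driven encoder can feed several per-stream regressors trained jointly.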
format Book Section
id nottingham-63624
institution University of Nottingham Malaysia Campus
institution_category Local University
language English
publishDate 2020
recordtype eprints
repository_type Digital Repository
spelling nottingham-63624 2020-10-27T02:49:36Z https://eprints.nottingham.ac.uk/63624/ 2020-09-30 Book Section PeerReviewed application/pdf en cc_by https://eprints.nottingham.ac.uk/63624/1/Face2Multi-modal.pdf Huang, Zhentao, Li, Rongze, Jin, Wangkai, Song, Zilin, Zhang, Yu, Peng, Xiangjun and Sun, Xu (2020) Face2Multi-modal: in-vehicle multi-modal predictors via facial expressions. In: 12th International Conference on Automotive User Interfaces and Interactive Vehicular Applications. ACM, pp. 30-33. ISBN 9781450380669. Human-Vehicle Interactions; Computer Vision; Ergonomics. http://dx.doi.org/10.1145/3409251.3411716 10.1145/3409251.3411716
title Face2Multi-modal: in-vehicle multi-modal predictors via facial expressions
topic Human-Vehicle Interactions; Computer Vision; Ergonomics
url https://eprints.nottingham.ac.uk/63624/