Using dynamic time warping for multi-sensor fusion


Bibliographic Details
Main Author: Ko, Ming Hsiao
Format: Thesis
Language:English
Published: Curtin University 2009
Subjects:
Online Access:http://hdl.handle.net/20.500.11937/384
building Curtin Institutional Repository
description Fusion is a fundamental human process that occurs at all levels, from the sense organs, such as the visual and auditory information received through the eyes and ears, to the highest levels of decision making, where the brain fuses visual and auditory information to reach a decision. Multi-sensor data fusion is concerned with gaining information from multiple sensors by fusing across raw data, features, or decisions. Traditional frameworks for multi-sensor data fusion consider fusion only at specific points in time. However, many real-world situations change over time. When a multi-sensor system is used for situation awareness, it is useful not only to know the state or event of the situation at a point in time but, more importantly, to understand the causalities of those states or events changing over time. Hence, we propose a multi-agent framework for temporal fusion, which emphasises the time dimension of the fusion process, that is, fusion of the multi-sensor data or events derived over a period of time. The proposed multi-agent framework has three major layers: hardware, agents, and users. Three different fusion architectures, centralized, hierarchical, and distributed, are available for organising the group of agents. The temporal fusion process of the proposed framework is elaborated using the information graph. Finally, the core of the proposed temporal fusion framework, the Dynamic Time Warping (DTW) temporal fusion agent, is described in detail. Fusing multi-sensory data over a period of time is a challenging task, since the data to be fused consist of complex sequences that are multi-dimensional, multimodal, interacting, and time-varying in nature. Additionally, performing temporal fusion efficiently in real time is a further challenge, given the large amount of data to be fused.
To address these issues, we propose a DTW temporal fusion agent that includes four major modules: data pre-processing, DTW recogniser, class templates, and decision making. The DTW recogniser is extended in various ways to deal with the variability of multimodal sequences acquired from multiple heterogeneous sensors, the problem of unknown start and end points, multimodal sequences of the same class that hence have different lengths locally and/or globally, and the challenges of online temporal fusion. We evaluate the performance of the proposed DTW temporal fusion agent on two real-world datasets: 1) accelerometer data acquired from performing two hand gestures, and 2) a benchmark dataset acquired by carrying a mobile device and performing pre-defined user scenarios. Performance results of the DTW-based system are compared with those of a Hidden Markov Model (HMM) based system. The experimental results from both datasets demonstrate that the proposed DTW temporal fusion agent outperforms the HMM-based system and can perform online temporal fusion efficiently and accurately in real time.
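The classic DTW distance at the heart of such a recogniser can be sketched as follows. This is a minimal illustration, not the thesis's extended recogniser: the vector sequences, the Euclidean local cost, and the function name are assumptions for illustration, and none of the thesis's extensions (unknown endpoints, heterogeneous sensors, online operation) are shown.

```python
# Minimal sketch of classic dynamic time warping (DTW): the dynamic
# programme finds the cheapest alignment between two sequences whose
# timing may differ locally and/or globally. All names here are
# illustrative assumptions, not the thesis's implementation.
import math

def dtw_distance(a, b):
    """DTW distance between two sequences of equal-dimension vectors."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = minimal cumulative cost of aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = math.dist(a[i - 1], b[j - 1])  # local (Euclidean) cost
            cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                 cost[i][j - 1],      # stretch b
                                 cost[i - 1][j - 1])  # match step
    return cost[n][m]

# Two sequences that differ only in local timing align at zero cost,
# which is why DTW tolerates class templates of different lengths:
template = [(0.0,), (1.0,), (2.0,), (1.0,), (0.0,)]
query    = [(0.0,), (0.0,), (1.0,), (2.0,), (1.0,), (0.0,)]
print(dtw_distance(template, query))  # -> 0.0
```

In a template-based recogniser, a query sequence would be compared against one stored template per class and assigned to the class with the smallest DTW distance.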
id curtin-20.500.11937-384
institution Curtin University Malaysia
title Using dynamic time warping for multi-sensor fusion
topic dynamic time warping (DTW)
users
hardware
agents
distributed
decision making
multi-sensor data fusion
sense organs
centralized
hierarchical
multi-agent framework
fusion
brain fusion
data pre-processing
url http://hdl.handle.net/20.500.11937/384