Multi-sensor fusion and deep learning framework for automatic human activity detection and health monitoring using motion sensor data / Henry Friday Nweke

Bibliographic Details
Main Author: Nweke, Henry Friday
Format: Thesis
Published: 2019
Subjects:
Online Access: http://studentsrepo.um.edu.my/11162/
http://studentsrepo.um.edu.my/11162/1/Henry.pdf
http://studentsrepo.um.edu.my/11162/2/Henry_Friday.pdf
Description
Summary: Human activity detection through the fusion of multimodal sensors is a vital step towards automatic and comprehensive monitoring of human behaviours, building smart home systems and detecting sports activities. In addition, human activity detection methods have wide applications in security, surveillance and postural detection to prevent falls in the elderly. The proliferation of sensor-embedded devices such as wearable sensors, ambient environments and smartphones has significantly facilitated the automatic and ubiquitous collection of sensor data for analysing human activity details. Over the years, various machine learning methods have been proposed to analyse the collected sensor data and infer activity details. However, the analysis of mobile and wearable sensor data for human activity detection remains very challenging, and is made worse by reliance on a single sensor modality and a single machine learning algorithm. Robust and efficient methods are therefore required to handle issues such as orientation and position displacement, sensor fusion and feature incompatibility, automatic feature representation, and the need to minimize intra-class variability and inter-class similarity. Hence, to address these issues, several objectives were formulated. First, to investigate existing multi-sensor fusion and automatic feature extraction methods for human activity detection and health monitoring using motion sensors. Second, to propose multi-sensor fusion based on a multiple classifier system to reduce activity misrecognition and feature incompatibility. Third, to propose an orientation-invariant deep sparse autoencoder method for automatic complex activity identification that minimizes orientation inconsistencies and learns adequate data patterns. Fourth, to validate the performance of the proposed multi-sensor fusion methods on challenging motion sensor data generated by smartphones and wearable sensors. Finally, to compare their performance with existing multi-sensor fusion and feature extraction methods for human activity detection and health monitoring. Experimental results demonstrate the capability of the proposed multiple-classifier multi-sensor fusion and orientation-invariant deep learning methods for human activity detection and health monitoring. The first proposed method, which combines multi-sensor fusion with a multiple classifier system, improves accuracy by 3% to 27% over single-sensor-modality and feature-level fusion baselines. The second proposed method, which uses deep learning with orientation-invariant features, outperforms existing automatic feature representation methods, obtaining 97.3% accuracy, 97.13% recall and 99.76% sensitivity, and improving average detection accuracy by 2% to 51% over existing methods. The proposed methods can be implemented for accurate monitoring and early detection of activity details using a wide range of sensors in mobile and wearable devices.
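The two core techniques in the abstract can be illustrated with short sketches. First, decision-level fusion through a multiple classifier system: a minimal sketch, assuming scikit-learn, in which one base classifier is trained per sensor modality (so incompatible feature sets are never concatenated) and per-class probabilities are averaged at prediction time. The data shapes, modality names and classifier choices below are hypothetical, not taken from the thesis.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical windowed features, one matrix per sensor modality, plus labels.
n_windows = 200
features = {
    "accelerometer": rng.normal(size=(n_windows, 12)),
    "gyroscope":     rng.normal(size=(n_windows, 12)),
    "magnetometer":  rng.normal(size=(n_windows, 12)),
}
y = rng.integers(0, 4, size=n_windows)  # four activity classes

# One base classifier per modality; heterogeneous models are fine because
# fusion happens at the decision level, not the feature level.
models = {
    "accelerometer": RandomForestClassifier(n_estimators=50, random_state=0),
    "gyroscope": LogisticRegression(max_iter=1000),
    "magnetometer": SVC(probability=True, random_state=0),
}
for name, model in models.items():
    model.fit(features[name], y)

def fuse_predict(windows_by_modality):
    """Soft voting: average per-class probabilities across modalities."""
    probs = np.mean(
        [models[m].predict_proba(windows_by_modality[m]) for m in models],
        axis=0,
    )
    return probs.argmax(axis=1)

fused_labels = fuse_predict(features)  # illustration: predict on the training windows
```

Second, orientation invariance combined with a sparse autoencoder. One standard transform (assumed here; the thesis may use a different one) is the per-sample Euclidean norm of the triaxial signal, which is unchanged by any rotation of the device. The toy autoencoder below learns features from magnitude windows using an L1 sparsity penalty and plain gradient descent; it is a pedagogical stand-in for the thesis's deep architecture, and all sizes and hyperparameters are made up.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy triaxial accelerometer windows: (n_windows, samples_per_window, 3 axes).
raw = rng.normal(size=(64, 32, 3))

# The per-sample Euclidean norm is invariant to device orientation:
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))          # random rotation matrix
assert np.allclose(np.linalg.norm(raw, axis=2),
                   np.linalg.norm(raw @ Q.T, axis=2))

X = np.linalg.norm(raw, axis=2)                       # magnitude windows, shape (64, 32)

# One-hidden-layer autoencoder with an L1 penalty on the hidden activations.
d, k, lam, lr = X.shape[1], 8, 1e-3, 0.05
W1, b1 = 0.1 * rng.normal(size=(d, k)), np.zeros(k)
W2, b2 = 0.1 * rng.normal(size=(k, d)), np.zeros(d)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(500):
    H = sigmoid(X @ W1 + b1)                          # hidden code (kept sparse)
    Xhat = H @ W2 + b2                                # reconstruction
    dXhat = (Xhat - X) / len(X)                       # gradient of 0.5 * MSE
    dH = dXhat @ W2.T + lam / len(X)                  # plus L1 sparsity gradient (H > 0)
    dZ = dH * H * (1.0 - H)                           # back through the sigmoid
    W2 -= lr * (H.T @ dXhat); b2 -= lr * dXhat.sum(axis=0)
    W1 -= lr * (X.T @ dZ);    b1 -= lr * dZ.sum(axis=0)

codes = sigmoid(X @ W1 + b1)  # learned features, independent of sensor orientation
```

Soft voting and the magnitude transform are only two common choices among many; the accuracy, recall and sensitivity figures reported above come from the thesis's own experiments, not from sketches like these.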