Online learning and fusion of orientation appearance models for robust rigid object tracking

We present a robust framework for learning and fusing different modalities for rigid object tracking. Our method fuses data obtained from a standard visual camera with dense depth maps obtained from low-cost consumer depth cameras such as the Kinect. To combine these two completely different modalities, we propose to use features that do not depend on the data representation: angles. More specifically, our method combines image gradient orientations extracted from intensity images with the directions of surface normals computed from the dense depth fields provided by the Kinect. To incorporate these features in a learning framework, we use a robust kernel based on the Euler representation of angles. This kernel enables us to cope with gross measurement errors and missing data, as well as with typical problems in visual tracking such as illumination changes and occlusions. Additionally, the employed kernel can be implemented efficiently online. Finally, we propose to capture the correlations between the obtained orientation appearance models using a fusion approach motivated by the original Active Appearance Model (AAM). The proposed learning and fusion framework is therefore robust, exact, computationally efficient, and requires no offline training. Combined with a particle filter, the proposed tracking framework achieves robust performance in very difficult tracking scenarios, including extreme pose variations.
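The abstract's key device is the Euler representation of angles: each orientation theta is mapped to the unit-circle complex number e^{i*theta}, so the real part of the complex inner product of two orientation maps equals the sum of cos(theta_a - theta_b) over pixels. The short Python sketch below illustrates that similarity measure; it is our illustration under those stated assumptions, not the authors' code, and all function names are hypothetical. Note that the same embedding applies unchanged to surface-normal directions from the depth map, which is what lets the two modalities share a single representation.

import numpy as np

def gradient_orientations(image):
    # Per-pixel gradient orientation of a 2-D intensity image, in radians.
    gy, gx = np.gradient(image.astype(np.float64))
    return np.arctan2(gy, gx)

def orientation_similarity(theta_a, theta_b):
    # Euler representation: embed each angle on the unit circle.
    za = np.exp(1j * theta_a)
    zb = np.exp(1j * theta_b)
    # Re<za, zb> = sum over pixels of cos(theta_a - theta_b); orientations
    # corrupted by occlusion or sensor noise are roughly uniformly
    # distributed and cancel in this sum, which is the robustness
    # property the abstract refers to.
    return float(np.real(np.vdot(za, zb)) / za.size)

# Example: identical patches score ~1, unrelated patches score ~0.
rng = np.random.default_rng(0)
patch = rng.random((64, 64))
print(orientation_similarity(gradient_orientations(patch),
                             gradient_orientations(patch)))                  # ~1.0
print(orientation_similarity(gradient_orientations(patch),
                             gradient_orientations(rng.random((64, 64)))))   # ~0.0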

Bibliographic Details
Main Authors: Marras, Ioannis; Medina, Joan Alabort; Tzimiropoulos, Georgios; Zafeiriou, Stefanos; Pantic, Maja
Format: Conference or Workshop Item
Published: 2013
Online Access: https://eprints.nottingham.ac.uk/31431/
Citation: Marras, Ioannis, Medina, Joan Alabort, Tzimiropoulos, Georgios, Zafeiriou, Stefanos and Pantic, Maja (2013) Online learning and fusion of orientation appearance models for robust rigid object tracking. In: 2013 10th IEEE International Conference on Automatic Face & Gesture Recognition and Workshops (FG 2013), 22-26 April 2013, Shanghai, China. Peer reviewed.
IEEE Xplore: http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6553798