Highly accurate and fully automatic 3D head pose estimation and eye gaze estimation using RGB-D sensors and 3D morphable models

Reza Shoja Ghiass, Ognjen Arandjelovic, Denis Laurendeau*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

This work addresses the problem of automatic head pose estimation and its application to 3D gaze estimation using low-quality RGB-D sensors, without any subject cooperation or manual intervention. Previous work on 3D head pose estimation with RGB-D sensors requires either an offline supervised-learning step or the construction of a 3D head model, which may demand manual intervention or subject cooperation for complete head-model reconstruction. In this paper, we propose a 3D pose estimator based on low-quality depth data that is limited by none of these steps. Instead, the proposed technique models the subject's face in 3D rather than the complete head, which in turn relaxes all of the constraints of previous approaches. The proposed method is robust, highly accurate, and fully automatic; it requires no offline step and, unlike some previous methods, uses only depth data for pose estimation. Experimental results on the Biwi head pose database confirm the ability of the algorithm to handle large pose variations and partial occlusion. We also evaluate its performance on the IDIAP database for 3D head pose and eye gaze estimation.
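The core step the abstract describes, rigidly registering a 3D face model to sensor depth data with the iterative closest point (ICP) algorithm to recover head pose, can be sketched with an off-the-shelf ICP implementation. The snippet below is a minimal illustration, not the authors' code: it assumes the Open3D library, and the function name `estimate_head_pose`, the correspondence threshold, and the input arrays are placeholders chosen for the example.

```python
# Minimal ICP-based head pose sketch (illustrative only, not the paper's
# implementation). Assumes Open3D; thresholds and names are placeholders.
import numpy as np
import open3d as o3d


def estimate_head_pose(face_model_pts, depth_pts, init=np.eye(4)):
    """Return the 4x4 rigid transform aligning a 3D face model to depth data.

    face_model_pts, depth_pts: (N, 3) NumPy arrays of 3D points in metres.
    The rotation block of the result encodes the head orientation
    (yaw/pitch/roll); the translation block gives the head position.
    """
    source = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(face_model_pts))
    target = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(depth_pts))
    result = o3d.pipelines.registration.registration_icp(
        source,
        target,
        max_correspondence_distance=0.02,  # metres; tune to sensor noise
        init=init,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
    )
    return result.transformation  # 4x4 homogeneous matrix
```

In the setting the abstract describes, the source cloud would come from a 3D morphable model fitted to the subject's face rather than the complete head, and the recovered rigid transform would then anchor the subsequent eye gaze estimation.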

Original language: English
Article number: 4280
Number of pages: 21
Journal: Sensors
Volume: 18
Issue number: 12
DOIs
Publication status: Published - 5 Dec 2018

Keywords

  • 3D morphable models
  • 3D head pose estimation
  • 3D eye gaze estimation
  • Iterative closest point
  • RGB-D sensors
