Achieving robust face recognition from video by combining a weak photometric model and a learnt generic face invariant

Oggie Arandelovic*, Roberto Cipolla

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

In spite of over two decades of intense research, illumination and pose invariance remain prohibitively challenging aspects of face recognition for most practical applications. The objective of this work is to recognize faces using video sequences both as training and as recognition input, in a realistic, unconstrained setup in which lighting, pose and user motion pattern vary widely and face images are of low resolution. The central contribution is an illumination invariant, which we show to be suitable for recognition from video of loosely constrained head motion. In particular, there are three contributions: (i) we show how a photometric model of image formation can be combined with a statistical model of generic face appearance variation to exploit the proposed invariant and generalize in the presence of extreme illumination changes; (ii) we introduce a video sequence re-illumination algorithm to achieve fine alignment of two video sequences; and (iii) we use the smoothness of geodesically local appearance manifold structure and a robust same-identity likelihood to achieve robustness to unseen head poses. We describe a fully automatic recognition system based on the proposed method and an extensive evaluation on 323 individuals and 1474 video sequences with extreme variation in illumination, pose and head motion. Our system consistently achieved a nearly perfect recognition rate (over 99.7% on all four databases).
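The abstract describes the method only at a high level. As a loose illustration of two of the ingredients it names, photometric alignment between sequences and a robust same-identity score computed over many frames, the Python sketch below implements naive stand-ins for each. The function names, the smoothed-ratio re-illumination, and the k-smallest-distances score are illustrative assumptions and are not the algorithm proposed in the paper.

```python
import numpy as np
from scipy.signal import convolve2d

def reilluminate(frame, reference, eps=1e-3):
    """Crude photometric alignment: multiply the frame by the heavily
    smoothed ratio of a reference frame to the frame, so that only the
    slowly varying (illumination-like) component is transferred.
    Illustrative stand-in only, not the paper's re-illumination algorithm."""
    ratio = (reference.astype(float) + eps) / (frame.astype(float) + eps)
    kernel = np.ones((15, 15)) / 15 ** 2          # box filter as a cheap low-pass
    smooth_ratio = convolve2d(ratio, kernel, mode="same", boundary="symm")
    return frame * smooth_ratio

def same_identity_score(seq_a, seq_b, k=10):
    """Robust sequence-to-sequence similarity: the (negated) mean of the k
    smallest pairwise descriptor distances, so a few well-matched frames
    dominate and frames in unseen poses contribute nothing.
    seq_a, seq_b: arrays of shape (n_frames, descriptor_dim)."""
    dists = np.linalg.norm(seq_a[:, None, :] - seq_b[None, :, :], axis=-1)
    return -np.sort(dists.ravel())[:k].mean()

# Toy usage: pick the gallery sequence with the highest same-identity score.
query = np.random.rand(40, 128)
gallery = {"person_a": np.random.rand(60, 128),
           "person_b": np.random.rand(55, 128)}
best_match = max(gallery, key=lambda name: same_identity_score(query, gallery[name]))
```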

Original language: English
Pages (from-to): 9-23
Number of pages: 15
Journal: Pattern Recognition
Volume: 46
Issue number: 1
DOIs
Publication status: Published - Jan 2013

Keywords

  • Generic
  • Illumination
  • Invariance
  • Manifold
  • Motion
  • Pose
