Abstract
The self-quotient image is a biologically inspired representation which has been proposed as an illumination-invariant feature for automatic face recognition. Owing to the lack of strong domain-specific assumptions underlying this representation, it can be readily extracted from raw images irrespective of the person's pose, facial expression, etc. What makes the self-quotient image additionally attractive is that it can be computed quickly and in closed form using simple low-level image operations. However, it is generally accepted that the self-quotient is insufficiently robust to large illumination changes, which is why it is mainly used in applications in which low precision is an acceptable compromise for high recall (e.g. retrieval systems). Yet, in this paper we demonstrate that the performance of this representation in challenging illuminations has been greatly underestimated. We show that its error rate can be reduced by over an order of magnitude, without any changes to the representation itself. Rather, we focus on the manner in which the dissimilarity between two self-quotient images is computed. By modelling the dominant sources of noise affecting the representation, we propose and evaluate a series of different dissimilarity measures, the best of which reduces the initial error rate of 63.0% down to only 5.7% on the notoriously challenging YaleB data set.
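For readers unfamiliar with the representation, the sketch below illustrates the basic self-quotient computation described in the abstract: an image divided, pixel-wise, by a smoothed version of itself. This is a minimal illustration under stated assumptions, not the paper's implementation; the plain Gaussian filter, the `sigma` value, and the `eps` regulariser are assumptions for the sketch (common formulations use an anisotropically weighted smoothing kernel), and the Euclidean comparison shown in the usage comment is only a baseline, not one of the noise-aware dissimilarity measures the paper proposes.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def self_quotient_image(image, sigma=3.0, eps=1e-6):
    """Basic self-quotient image: the input divided pixel-wise by a
    smoothed version of itself. Under a Lambertian-like image model,
    the division cancels the slowly varying illumination component,
    leaving a quasi illumination-invariant representation."""
    image = np.asarray(image, dtype=np.float64)
    smoothed = gaussian_filter(image, sigma=sigma)  # illustrative smoothing choice
    return image / (smoothed + eps)                 # eps avoids division by zero

# Hypothetical usage: compare two face images via their self-quotient
# representations. A plain Euclidean distance is shown as a baseline;
# the paper's contribution is replacing it with noise-aware measures.
# q1, q2 = self_quotient_image(face1), self_quotient_image(face2)
# d = np.linalg.norm(q1 - q2)
```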
Original language | English |
---|---|
Title of host publication | 2013 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition, FG 2013 |
DOIs | |
Publication status | Published - 2013 |
Event | 2013 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition, FG 2013 - Shanghai, China. Duration: 22 Apr 2013 → 26 Apr 2013 |
Conference
Conference | 2013 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition, FG 2013 |
---|---|
Country/Territory | China |
City | Shanghai |
Period | 22/04/13 → 26/04/13 |