We present a quantitative evaluation of an algorithm for model-based face recognition. The algorithm actively learns how individual faces vary through video sequences, providing on-line suppression of confounding factors such as expression, lighting and pose. By actively decoupling these sources of image variation, the algorithm provides a framework in which identity evidence can be integrated over a sequence. We demonstrate that face recognition can be considerably improved by the analysis of video sequences. The method presented is applicable to a wide range of multi-class interpretation problems.
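To make the idea of integrating identity evidence over a sequence concrete, the following is a minimal sketch (not the paper's actual model) of one common realisation: summing per-frame log-likelihood scores for each candidate identity under an assumed frame-independence approximation, then selecting the identity with the highest accumulated evidence. The function name, the toy scores and the per-frame scoring step are all illustrative assumptions.

import numpy as np

def integrate_identity_evidence(frame_log_likelihoods):
    # frame_log_likelihoods: iterable of 1-D arrays, one per video frame,
    # where entry k is the log-likelihood that the frame shows identity k
    # (assumed to come from some per-frame face model).
    accumulated = None
    for frame_scores in frame_log_likelihoods:
        frame_scores = np.asarray(frame_scores, dtype=float)
        # Summing log-likelihoods corresponds to multiplying per-frame
        # likelihoods under an independence assumption across frames.
        accumulated = frame_scores if accumulated is None else accumulated + frame_scores
    return int(np.argmax(accumulated)), accumulated

# Toy usage: three frames of noisy evidence over four candidate identities.
frames = [np.log([0.40, 0.30, 0.20, 0.10]),
          np.log([0.35, 0.40, 0.15, 0.10]),
          np.log([0.50, 0.25, 0.15, 0.10])]
best_id, scores = integrate_identity_evidence(frames)
print(best_id, scores)

Even in this simplified form, accumulating evidence across frames tends to be more robust than deciding from any single frame, which is the intuition behind sequence-based recognition.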