Lecture: Alice O'Toole (Dallas): Understanding Face Representations in Deep Convolutional Neural Networks
Alice O'Toole (University of Texas at Dallas, School of Behavioral and Brain Sciences)
Computer-based face recognition has improved substantially in recent years. Circa 2005, machines competed favorably with humans at recognizing faces in images taken under variable illumination and across changes in facial expression and appearance. By 2010, machines performed nearly as well as humans in all but the most challenging cases. However, these early algorithms could not recognize faces that were not frontally posed. The development of deep convolutional neural networks (DCNNs) in 2012 abruptly changed the state of the art for machine-based face recognition, making recognition possible even across large changes of viewpoint. These networks are commonly trained with millions of images of thousands of people, and the number of computations between an image and the “top-level” face representation in a DCNN is typically on the order of tens of millions. It is not surprising, therefore, that researchers do not yet understand the nature of the face representations computed by DCNNs. In this talk, I will review a series of human-machine comparisons for face identification that take us from the previous generation of algorithms up to DCNNs. I will present computational studies from my lab aimed at understanding how the feature codes at the top layers of state-of-the-art DCNNs support face recognition across a wide range of photometric and person variations, including changes in view. These codes may offer interesting insight into how people recognize familiar faces.
Host: Heinrich Bülthoff
- Max Planck Institute for Biological Cybernetics
- Max Planck Institute for Intelligent Systems