— Ch. 1 · Origins And Development —
DeepFace.
Facebook's artificial intelligence research team published DeepFace in 2014. The project emerged from a group of scientists including Yaniv Taigman, who joined Facebook when the company acquired Face.com in 2012, and Ming Yang. Lior Wolf, a faculty member at Tel Aviv University, also contributed to the effort. The system was trained on four million images uploaded by Facebook users, a dataset that allowed the algorithm to learn facial patterns at unprecedented scale. Facebook began rolling out the technology to its users in early 2015. The team stated that the goal was not to invade privacy but to alert individuals when their face appeared in photos, so that they could choose to remove it from any image.
Technical Architecture
DeepFace employs a nine-layer neural network containing over 120 million connection weights. An image passes through four distinct modules before a result is produced. First comes 2D alignment, which detects six fiducial points such as the eye centers and the tip of the nose. Next follows 3D alignment, which fits a generic face model anchored at 67 points. Frontalization then warps the image so the subject appears to look directly at the camera. Finally, the neural network maps the frontalized crop to a 4096-dimensional feature vector that represents the face mathematically for comparison against known identities.

Inside the network, a convolutional layer sits at the start, followed by max pooling, a second convolutional layer, and three locally connected layers whose filters are not shared across image positions; fully connected layers then produce the final descriptor. To identify a face, a system finds the most similar stored vector in a database. During training, a final fully connected layer feeds a softmax classifier that assigns each image to one of the 4,030 identities seen in the training data.
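The layer stack above can be summarized as a short model definition. The following is a minimal PyTorch sketch under stated assumptions: the kernel sizes follow the published description, but the paper's three locally connected layers use unshared weights, which this sketch approximates with ordinary convolutions, so its parameter count will not match the reported 120 million; the class and attribute names are illustrative, not Facebook's code.

```python
import torch
import torch.nn as nn

class DeepFaceSketch(nn.Module):
    """Approximate DeepFace layer stack (conv -> max pool -> conv ->
    three locally connected layers -> fully connected descriptor).
    The locally connected layers L4-L6 are replaced by ordinary
    convolutions here for brevity."""

    def __init__(self, num_identities: int = 4030):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=11), nn.ReLU(),   # C1: frontalized 152x152 RGB input
            nn.MaxPool2d(kernel_size=3, stride=2),         # M2: max pooling
            nn.Conv2d(32, 16, kernel_size=9), nn.ReLU(),   # C3: second convolution
            nn.Conv2d(16, 16, kernel_size=9), nn.ReLU(),   # L4: locally connected in the paper
            nn.Conv2d(16, 16, kernel_size=7), nn.ReLU(),   # L5: locally connected in the paper
            nn.Conv2d(16, 16, kernel_size=5), nn.ReLU(),   # L6: locally connected in the paper
        )
        self.embed = nn.LazyLinear(4096)                   # F7: the 4096-dimensional descriptor
        self.classify = nn.Linear(4096, num_identities)    # F8: training-time softmax head

    def forward(self, x: torch.Tensor):
        x = torch.flatten(self.features(x), 1)
        descriptor = torch.relu(self.embed(x))
        logits = self.classify(descriptor)                 # only used to train the network
        return descriptor, logits

model = DeepFaceSketch()
faces = torch.randn(2, 3, 152, 152)                        # a batch of two aligned face crops
descriptor, logits = model(faces)
print(descriptor.shape, logits.shape)                      # (2, 4096) and (2, 4030)
```

At inference time only the 4096-dimensional descriptor is kept; the 4,030-way classification head exists to give the network a supervised training signal.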
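Recognition with such a descriptor then reduces to a similarity search: embed the query face and find the closest stored vector. The sketch below is illustrative, with a hypothetical identify helper and random vectors standing in for real network outputs; plain cosine similarity is a simplifying assumption, since the published work also explores learned verification metrics on top of the descriptor.

```python
import numpy as np

EMBED_DIM = 4096  # length of the DeepFace descriptor

def identify(query: np.ndarray, gallery: dict[str, np.ndarray]) -> tuple[str, float]:
    """Return the gallery identity whose embedding has the highest
    cosine similarity to the query embedding."""
    best_name, best_score = None, -1.0
    for name, vec in gallery.items():
        score = float(np.dot(query, vec) /
                      (np.linalg.norm(query) * np.linalg.norm(vec)))
        if score > best_score:
            best_name, best_score = name, score
    return best_name, best_score

# Toy usage: random stand-ins for descriptors the real network would produce.
rng = np.random.default_rng(0)
gallery = {f"person_{i}": rng.standard_normal(EMBED_DIM) for i in range(5)}
query = gallery["person_3"] + 0.1 * rng.standard_normal(EMBED_DIM)  # a noisy re-capture
print(identify(query, gallery))  # -> ('person_3', ~0.99)
```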