Face verification—validating that it is the claimed person

To confirm whether the result of the prediction is reliable, or whether it should instead be treated as an unknown person, we perform face verification (also referred to as face authentication) to obtain a confidence metric showing whether the single face image is similar to the claimed person. This is in contrast to face identification, which we just performed, where the single face image is compared with many people.

OpenCV's FaceRecognizer class can return a confidence metric when you call the predict() function, but unfortunately this confidence metric is simply based on the distance in the eigen-subspace, so it is not very reliable. The method we will use instead is to reconstruct the facial image using the eigenvectors and eigenvalues, and compare this reconstructed image with the input image. If the person had many of their faces included in the training set, then the reconstruction should work quite well from the learned eigenvectors and eigenvalues; but if the person did not have any faces in the training set (or did not have any with lighting and facial expressions similar to the test image), then the reconstructed face will look very different from the input face, signaling that it is probably an unknown face.
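
For reference, the built-in confidence value mentioned above comes from the three-argument form of predict(); the following call shows how to obtain it (the variable names here are only for illustration):

    // Get OpenCV's built-in confidence value: for Eigenfaces and Fisherfaces it
    // is the distance to the nearest training sample in the eigen-subspace, so
    // a smaller value means a closer match, but it is a weak unknown-person test.
    int predictedLabel = -1;
    double builtInConfidence = 0.0;
    model->predict(preprocessedFace, predictedLabel, builtInConfidence);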

Remember we said earlier that the Eigenfaces and Fisherfaces algorithms are based on the notion that an image can be roughly represented as a set of eigenvectors (special face images) and eigenvalues (blending ratios). So if we combine all the eigenvectors with the eigenvalues from one of the faces in the training set, we should obtain a fairly close replica of that original training image. The same applies to other images that are similar to the training set: if we combine the trained eigenvectors with the eigenvalues obtained from a similar test image, we should be able to reconstruct an image that is a rough replica of the test image.
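
To make this weighted-sum idea concrete, here is a small illustrative helper (the function name and the assumption that the eigenvectors are stored one per column are ours) that performs the reconstruction with plain matrix arithmetic, assuming all matrices share the same floating-point type:

    // Illustrative sketch of reconstructing a face from trained eigenvectors.
    // Assumes "eigenvectors" holds one eigenvector per column, and that
    // "averageFaceRow" and "faceRow" are 1-row matrices of the same
    // floating-point type as the eigenvectors.
    Mat reconstructFromEigenvectors(const Mat &eigenvectors,
        const Mat &averageFaceRow, const Mat &faceRow)
    {
        // The blending ratios for this face are the projections of the
        // mean-subtracted face onto each eigenvector.
        Mat blendingRatios = (faceRow - averageFaceRow) * eigenvectors;
        // The reconstruction is the average face plus the weighted sum of
        // all the eigenvectors.
        return averageFaceRow + blendingRatios * eigenvectors.t();
    }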

Once again, OpenCV's FaceRecognizer class makes it quite easy to generate a reconstructed face from any input image, by using the subspaceProject() function to project onto the eigenspace and the subspaceReconstruct() function to go back from the eigenspace to the image space. The trick is that we need to convert the result from a floating-point row matrix to a rectangular 8-bit image (as we did when displaying the average face and Eigenfaces), but we don't want to normalize the data, as it is already at the ideal scale for comparing with the original image. If we normalized the data, it would have a different brightness and contrast from the input image, and it would become difficult to compare the image similarity just by using the L2 relative error. This is done as follows:

    // Get some required data from the FaceRecognizer model.
    Mat eigenvectors = model->get<Mat>("eigenvectors");
    Mat averageFaceRow = model->get<Mat>("mean");

    // Project the input image onto the eigenspace.
    Mat projection = subspaceProject(eigenvectors, averageFaceRow,
        preprocessedFace.reshape(1,1));

    // Generate the reconstructed face back from the eigenspace.
    Mat reconstructionRow = subspaceReconstruct(eigenvectors,
        averageFaceRow, projection);

    // Make it a rectangular shaped image instead of a single row.
    Mat reconstructionMat = reconstructionRow.reshape(1, faceHeight);

    // Convert the floating-point pixels to regular 8-bit uchar.
    Mat reconstructedFace = Mat(reconstructionMat.size(), CV_8U);
    reconstructionMat.convertTo(reconstructedFace, CV_8U, 1, 0);
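
If you would like to inspect the result yourself, you can display the preprocessed face next to its reconstruction. Here is a minimal sketch; it assumes faceWidth is the same face width constant used during preprocessing, and that both images are 8-bit grayscale of size faceWidth x faceHeight:

    // Show the input face and its reconstruction side by side (sketch only).
    // Assumes faceWidth and faceHeight match the preprocessed face size and
    // that both images are CV_8U grayscale.
    Mat sideBySide(faceHeight, faceWidth * 2, CV_8U);
    preprocessedFace.copyTo(sideBySide(Rect(0, 0, faceWidth, faceHeight)));
    reconstructedFace.copyTo(sideBySide(Rect(faceWidth, 0, faceWidth, faceHeight)));
    imshow("Face and its reconstruction", sideBySide);
    waitKey(1);    // Give HighGUI a moment to draw the window.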

The following screenshot shows two typical reconstructed faces. The face on the left-hand side was reconstructed well because it was from a known person, whereas the face on the right-hand side was reconstructed badly because it was from an unknown person, or a known person but with unknown lighting conditions/facial expression/face direction:

We can now calculate how similar this reconstructed face is to the input face by using the getSimilarity() function we created previously for comparing two images, where a value less than 0.3 implies that the two images are very similar. For Eigenfaces, there is one eigenvector for each face, so reconstruction tends to work well, and we can therefore typically use a threshold of 0.5. Fisherfaces, however, has just one eigenvector for each person, so reconstruction will not work as well, and it therefore needs a higher threshold, say 0.7. This is done as follows:

    similarity = getSimilarity(preprocessedFace, reconstructedFace);
    if (similarity > UNKNOWN_PERSON_THRESHOLD) {
        identity = -1;    // Unknown person.
    }
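
As a reminder, getSimilarity() was created in an earlier section as the L2 relative error between two images. If you are not following on from that section, a minimal version consistent with that description might look like the following (the exact scaling by the number of pixels is the part we are assuming here):

    // Compare two images of identical size and type by the L2 relative error
    // (minimal sketch; the per-pixel scaling keeps typical values below 1.0).
    double getSimilarity(const Mat &A, const Mat &B)
    {
        // Sum of squared pixel differences, square-rooted (the L2 norm).
        double errorL2 = norm(A, B, NORM_L2);
        // Scale by the number of pixels so the value does not depend on
        // the image resolution.
        return errorL2 / (double)(A.rows * A.cols);
    }

UNKNOWN_PERSON_THRESHOLD would then be set to roughly 0.5 for Eigenfaces or 0.7 for Fisherfaces, as discussed above.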

Now, you can just print the identity to the console (a short example follows this paragraph), or use it wherever your imagination takes you! Remember that this face recognition method and this face verification method are only reliable under the conditions that you train them for. So to obtain good recognition accuracy, you will need to ensure that the training set for each person covers the full range of lighting conditions, facial expressions, and angles that you expect to test with. The face preprocessing stage helped reduce some differences in lighting conditions and in-plane rotation (if the person tilts their head toward their left or right shoulder), but other differences, such as out-of-plane rotation (if the person turns their head toward the left-hand side or right-hand side), will only be handled if they are covered well in your training set.
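
For example, a simple way to print the result to the console is shown below; personNames is a hypothetical vector<string> mapping each label to a display name (the earlier code did not build one, so treat it as an assumption of this sketch):

    // Report the final decision (sketch only; "personNames" is a hypothetical
    // vector<string> that maps each label to a display name).
    if (identity < 0) {
        cout << "Unknown person (similarity " << similarity << ")." << endl;
    }
    else {
        cout << "Hello " << personNames[identity] << " (label " << identity
            << ", similarity " << similarity << ")." << endl;
    }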