Face Recognition Systems
Most face recognition systems focus on specific facial features and build a two-dimensional map of the face; newer systems build three-dimensional maps. The systems capture facial images from video cameras and generate templates that are stored and used for comparisons. Face recognition is a fairly young technology compared with more established biometrics such as fingerprints.
One face recognition technology, referred to as local feature analysis, looks at specific parts of the face that do not change significantly over time, such as:
- Upper sections of the eye sockets
- Area surrounding the cheekbones
- Sides of the mouth
- Distance between the eyes
Data such as the distance between the eyes, the length of the nose, or the angle of the chin contribute collectively to the template.
A second method of face recognition is called the eigenface method. It looks at the face as a whole: a collection of face images is used to derive a set of two-dimensional gray-scale basis images, and a face's biometric template is built from its particular combination of those basis images.
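The eigenface idea can be sketched in miniature: treat each image as a vector of pixel intensities, center the collection on its mean, and extract the dominant eigenvector of the covariance matrix. The four-pixel "images" below are invented for illustration (real systems use aligned face images with thousands of pixels), and power iteration stands in for a full eigendecomposition.

```python
# Toy "images": each is a flattened list of pixel intensities.
# These four-pixel faces are invented purely for illustration.
faces = [
    [0.9, 0.1, 0.8, 0.2],
    [0.8, 0.2, 0.9, 0.1],
    [0.1, 0.9, 0.2, 0.8],
    [0.2, 0.8, 0.1, 0.9],
]

n = len(faces[0])
mean = [sum(img[i] for img in faces) / len(faces) for i in range(n)]
centered = [[img[i] - mean[i] for i in range(n)] for img in faces]

# Covariance matrix of the mean-centered images.
cov = [[sum(img[i] * img[j] for img in centered) / len(faces)
        for j in range(n)] for i in range(n)]

def power_iteration(mat, iters=200):
    """Return the dominant eigenvector -- the first 'eigenface'."""
    v = [float(i + 1) for i in range(len(mat))]
    for _ in range(iters):
        w = [sum(mat[i][j] * v[j] for j in range(len(mat)))
             for i in range(len(mat))]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

eigenface = power_iteration(cov)

def template(img):
    """A face's template: its projection onto the eigenface."""
    c = [img[i] - mean[i] for i in range(n)]
    return sum(c[i] * eigenface[i] for i in range(n))
```

In this sketch the two look-alike faces project to the same side of the eigenface axis, while the opposite pair projects to the other side, which is all a template needs to capture for comparison.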
Facial scans are only as good as the environment in which they are collected. The so-called mug-shot environment is ideal. The best scans are produced under controlled conditions with proper lighting and proper placement of the video device. As part of a highly sensitive security environment, there may be several cameras collecting image data from different angles, producing a more exact scan sample. Certain facial scanning applications also include tests for liveness, such as blinking eyes. Testing for liveness reduces the chance that the person requesting access is using a photograph of an authorized individual.
Facial recognition, like all biometrics, produces results based on probabilities. Once the live scan is performed and compared with the template database, positive identifications are produced according to the level of accuracy set in the system. If the system is set to accept only a match that is determined to be 100 percent accurate, with no margin of error, the rejection rate increases dramatically. As the required match accuracy is relaxed below 100 percent, the rejection rate falls accordingly, at the cost of admitting more false matches. Facial recognition is generally subject to larger margins of error than more established biometrics, such as fingerprint recognition. Financial institutions considering the use of face recognition for customer authentication should carefully evaluate the adverse consequences of an unacceptably high false acceptance rate (FAR) or false rejection rate (FRR).
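The trade-off between the two error rates can be sketched with hypothetical match scores: sweeping the acceptance threshold shows that a stricter setting lowers false acceptances while raising false rejections.

```python
# Hypothetical match scores (higher = more similar); a real system
# would produce these by comparing live scans against stored templates.
genuine  = [0.91, 0.85, 0.78, 0.72, 0.65]   # same-person comparisons
impostor = [0.55, 0.48, 0.40, 0.33, 0.25]   # different-person comparisons

def far_frr(threshold):
    """False acceptance / false rejection rates at a match threshold."""
    far = sum(s >= threshold for s in impostor) / len(impostor)
    frr = sum(s < threshold for s in genuine) / len(genuine)
    return far, frr

# A near-perfect-match requirement rejects most genuine users;
# a loose threshold accepts an impostor instead.
strict_far, strict_frr = far_frr(0.90)   # -> (0.0, 0.8)
loose_far, loose_frr = far_frr(0.50)     # -> (0.2, 0.0)
```

Operators tune the threshold to whichever error is costlier for the application, which is exactly the evaluation the text urges on financial institutions.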
Facial scanning is considered one of the easiest biometrics to use. A portable web cam sitting on a desktop computer will suffice. The connecting system must be able to support the web cam and must be loaded with software to create the template and communicate with the authenticating system. The technique is nonintrusive, and user acceptance is typically high.
Identification and verification of a person's identity are two generic application areas of face recognition systems. In identification applications, an algorithm identifies an unknown face in an image by searching through an electronic mugbook. In verification applications, an algorithm confirms the claimed identity of a particular face. Proposed applications have the potential to impact all aspects of everyday life by controlling access to physical and information facilities, confirming identities for legal and commercial transactions, and controlling the flow of citizens at borders. For face recognition systems to be successfully fielded, one has to be able to evaluate their performance. To evaluate an algorithm, its behavior is scored on a test set of matchable images in a mugbook known as the Gallery. One computes a similarity matrix that quantifies the proximities of images of a subset of the Gallery (called the Probe set) to each image in the Gallery.
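A minimal sketch of the Gallery/Probe evaluation, assuming templates are simple feature vectors (the names and numbers below are invented): compute a similarity matrix of each probe against every gallery image, then identify each probe with its most similar gallery entry.

```python
# Hypothetical feature vectors standing in for face templates.
gallery = {
    "alice": [0.9, 0.1, 0.3],
    "bob":   [0.2, 0.8, 0.4],
    "carol": [0.4, 0.3, 0.9],
}
probes = {
    "alice": [0.85, 0.15, 0.25],   # a second image of each subject
    "bob":   [0.25, 0.75, 0.35],
}

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

# Similarity matrix: one row per Probe image, one column per Gallery image.
similarity = {p: {g: cosine(pv, gv) for g, gv in gallery.items()}
              for p, pv in probes.items()}

def identify(probe_name):
    """Identification: report the most similar Gallery entry."""
    row = similarity[probe_name]
    return max(row, key=row.get)
```

Verification would instead compare a single similarity entry against a threshold; identification, as here, ranks an entire row of the matrix.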
Large collections of test images are already in existence (FERET/Army Research Lab/George Mason Univ./93-96) or currently undergoing development (Human ID/DARPA/99-04). These databases (which include IR, still, video, and hyperspectral images of the face, gait, and iris of thousands of human subjects) provide the Human ID research community with de facto database standards for algorithm development and comparison.
A first, simple approach is to limit the comparisons to replicated same-face match scores, transform the scores from the multiple algorithm outputs to a common scale, and examine the rankings and clusterings produced by application of standard Multiple Comparisons procedures, e.g., Student-Newman-Keuls. A useful common scaling is achieved by Probability Integral Transforming (PIT) each algorithm's scores using knowledge of its characteristic score empirical distribution function (EDF) based on larger heterogeneous (FERET) experiments, then applying the inverse Gaussian cumulative distribution. Application of this procedure to a sizable extract of the FERET database yields a credible ranking of 15 algorithms dated 1996-1997.
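A minimal sketch of this rescaling, assuming scores are scalar similarities (the reference samples below are invented stand-ins for the large heterogeneous FERET score sets): each score is mapped through the EDF of its algorithm's reference sample, then through the inverse Gaussian CDF, putting algorithms with incompatible raw scales onto a common one.

```python
from statistics import NormalDist
from bisect import bisect_right

def pit_to_gaussian(scores, reference):
    """Probability Integral Transform to a common Gaussian scale:
    each score is mapped to its empirical-distribution-function value
    within a reference sample, then through the inverse Gaussian CDF."""
    ref = sorted(reference)
    n = len(ref)
    out = []
    for s in scores:
        # EDF value, nudged away from 0 and 1 so inv_cdf is defined.
        p = (bisect_right(ref, s) + 0.5) / (n + 1)
        out.append(NormalDist().inv_cdf(p))
    return out

# Two hypothetical algorithms reporting scores on incompatible scales:
alg_a_ref = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
alg_b_ref = [10, 20, 30, 40, 50, 60, 70, 80, 90]

a = pit_to_gaussian([0.85], alg_a_ref)   # 0.85 on a 0-1 scale
b = pit_to_gaussian([85], alg_b_ref)     # 85 on a 0-100 scale
# Both land on the same Gaussian-scale value, so they can be compared.
```

With scores on a common Gaussian scale, standard Multiple Comparisons machinery can be applied across algorithms directly.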
An extension to a mixture of same-subject and different-subject match scores can be achieved by use of ordinary 2-dimensional MultiDimensional Scaling (MDS). MDS translates similarity matrices into pictorial maps with matrix row/column headers converted into mapped locations with appropriate inter-location distances. A good algorithm should cluster same-subject images and cleanly discriminate among different-subject images. The ability to discriminate, and tightness of clusters as quantified, e.g., by circumscribed Voronoi ellipse aspect ratios, can be used to rank algorithm performance. Demonstration tests against small-scale FERET extracts show this clearly.
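Classical MDS is usually computed by eigendecomposition of the double-centered dissimilarity matrix; the sketch below substitutes a simpler stress-minimizing gradient descent, which suffices to show how a good algorithm's same-subject images should land close together on the 2-D map (the dissimilarity numbers are invented).

```python
import random

def mds_2d(dist, steps=1000, lr=0.05, seed=0):
    """Metric MDS sketch: place points in 2-D so pairwise map distances
    approximate a given dissimilarity matrix, by gradient descent on
    the squared-error 'stress'."""
    rng = random.Random(seed)
    n = len(dist)
    pts = [[rng.uniform(-1, 1), rng.uniform(-1, 1)] for _ in range(n)]
    for _ in range(steps):
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                dx = pts[i][0] - pts[j][0]
                dy = pts[i][1] - pts[j][1]
                d = (dx * dx + dy * dy) ** 0.5 or 1e-9
                step = lr * (d - dist[i][j]) / d
                pts[i][0] -= step * dx
                pts[i][1] -= step * dy
    return pts

# Dissimilarities for four images: 0,1 are the same subject (close),
# 2,3 are another subject (close), the two subjects are far apart.
D = [[0.0, 0.1, 1.0, 1.0],
     [0.1, 0.0, 1.0, 1.0],
     [1.0, 1.0, 0.0, 0.1],
     [1.0, 1.0, 0.1, 0.0]]

coords = mds_2d(D)
# The map clusters images 0,1 together and separates them from 2,3.
```

Cluster tightness and separation on such a map are exactly the quantities the text proposes for ranking algorithm performance.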
While PIT and the use of Multiple Comparisons and MDS have the advantage of retaining the ratio scale of the original similarity scores, much of the work already published and currently being done in this area makes use of rank statistics. Researchers are exploring multiple properties and statistics derived from the use of partial rank correlations (PRCs). This involves extending the known distributional theory for PRCs based on Kendall and Spearman statistics and applying them to the study of interesting dependency patterns among different algorithms. Loosely, the ID community recognizes that most current algorithms perform most reliably when scoring true (close) matches and dramatically disparate (far) non-matches: i.e., algorithms perform best at the far ends of the performance scale. It is commonly presumed that enhanced understanding of algorithmic performance (and the dual issue of image difficulty) will come from pushing in at either end of the match/non-match performance scale. The application of nonparametric dependence via copula theory to partial rank co-occurrences seems to hold promise for enhanced understanding here.
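The Kendall statistic underlying these rank methods can be sketched directly (partial rank correlation, which additionally conditions out a third variable, is not shown): count concordant minus discordant pairs between two algorithms' score orderings.

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall rank correlation: concordant minus discordant pairs,
    normalized -- a rank statistic of the kind used to compare how
    two algorithms order the same set of image comparisons."""
    concordant = discordant = 0
    for i, j in combinations(range(len(x)), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    pairs = len(x) * (len(x) - 1) / 2
    return (concordant - discordant) / pairs

# Similarity scores two hypothetical algorithms assign to five
# image comparisons (values invented for illustration):
alg_a = [0.95, 0.80, 0.60, 0.40, 0.10]
alg_b = [0.90, 0.85, 0.55, 0.45, 0.20]   # same ordering  -> tau = 1.0
alg_c = [0.20, 0.45, 0.55, 0.85, 0.90]   # reversed order -> tau = -1.0
```

Because tau depends only on orderings, it is insensitive to each algorithm's raw score scale, which is precisely why rank statistics are attractive for cross-algorithm comparison.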