Detecting Human Faces in Color Images

J. Cai, A. Goshtasby, and C. Yu

This work is funded by NSF.

1998 Int'l Workshop on Multi-Media Database Management Systems.


A method is developed that detects human faces in color images by first separating skin regions from nonskin regions and then locating faces within the skin regions. A chroma chart is prepared via a training process that shows the likelihood of each color representing skin. Using the chroma chart, a color image is transformed into a gray scale image in which each pixel's gray value shows the likelihood of that pixel belonging to skin. By processing the gray scale image, skin regions are discriminated from nonskin regions. Then, using the luminance component of colors, faces are located within the skin regions.
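The color-to-likelihood transform described above can be sketched as a simple table lookup: each pixel's chroma coordinates index into the chart, and the chart entry becomes the pixel's gray value. The function name and the assumption that chroma components arrive as integer chart indices are illustrative, not from the paper:

```python
import numpy as np

def skin_likelihood(chroma_a, chroma_b, chart):
    """Map each pixel's chroma (a, b) through the chroma chart to a
    skin-likelihood value in [0, 1]. `chroma_a` and `chroma_b` are
    integer images indexing the chart's two axes."""
    a = np.clip(chroma_a, 0, chart.shape[0] - 1)
    b = np.clip(chroma_b, 0, chart.shape[1] - 1)
    return chart[a, b]  # gray scale image of skin likelihoods
```

The clipping guards against chroma values that fall outside the chart; the paper does not specify how out-of-range colors are handled, so this is one plausible choice.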

Fig. 1. (a) The chroma chart showing 2,300 skin samples obtained from a wide range of skin colors in 80 images. The horizontal axis shows the a component and the vertical axis shows the b component of chroma. The chart is of size 500 by 700. (b) Centering a Gaussian of height 1.0 and standard deviation 13.0 at each sample point and summing the Gaussians at the chart entries produces this image. The brighter the intensity at an entry, the higher the likelihood that the color corresponding to that entry represents skin. (c) Overlay of (a) and (b).
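The construction in Fig. 1b can be sketched as follows. The function name and the final normalization to [0, 1] are assumptions for illustration; the caption specifies only the Gaussian height (1.0) and standard deviation (13.0):

```python
import numpy as np

def build_chroma_chart(samples, shape=(500, 700), sigma=13.0):
    """Sum a Gaussian of height 1.0 and standard deviation `sigma`
    centered at each skin sample, then scale so the peak is 1.0.
    `samples` is a list of (row, col) chart coordinates of skin colors."""
    rows = np.arange(shape[0])[:, None]  # a axis
    cols = np.arange(shape[1])[None, :]  # b axis
    chart = np.zeros(shape)
    for a, b in samples:
        chart += np.exp(-((rows - a) ** 2 + (cols - b) ** 2)
                        / (2.0 * sigma ** 2))
    return chart / chart.max()  # likelihoods in [0, 1]
```

Summing overlapping Gaussians gives densely sampled skin colors a higher likelihood than isolated ones, which is what the brightness in Fig. 1b conveys.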

Fig. 2. (a) A color image containing a face. (b) The gray scale image corresponding to the color image, with the gray values showing the likelihoods of the pixels representing skin. (c)-(g) The gray scale image thresholded at 0.5, 0.4, 0.3, 0.2, and 0.1, respectively. (h) The optimal skin region. The process for detecting only one skin region is shown here.
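The thresholding series of Fig. 2c-g can be sketched as below. The function name is illustrative, and the criterion the paper uses to pick the optimal region among the thresholded results is not given in this excerpt, so only the binarization step is shown:

```python
import numpy as np

def threshold_series(likelihood, levels=(0.5, 0.4, 0.3, 0.2, 0.1)):
    """Binarize the skin-likelihood image at each threshold level,
    as in Fig. 2c-g. Returns one boolean mask per level; lower
    thresholds admit more pixels as candidate skin."""
    return [likelihood >= t for t in levels]
```

As the threshold decreases, each mask is a superset of the previous one, which is why the regions in Fig. 2c-g grow monotonically.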

Fig. 3. (a) Averaging 16 frontal-view faces after being overlaid such that the centers of the eyes and the mouths coincide. (b) Averaging 16 side-view faces after being overlaid such that the centers of the eyes, mouths, and ears coincide. (c) The central portion of image (a). This will be used as the frontal model of the face. (d) Central portion of image (b). (e) Reflection of image (d) with respect to the vertical axis. (d) and (e) will be used as side-view models of the face.

Fig. 4. (a) No facial features detected in a skin region. (b) One facial feature detected in a skin region. (i), (ii), and (iii) show cases where the detected feature corresponds to either one of the eyes or the mouth. (iv) shows the case where the detected feature corresponds to neither an eye nor a mouth. (c) Possible situations where two features are obtained in a skin region.

Fig. 5. Color images showing faces of different sizes and races.

Fig. 6. Skin-likelihood images obtained for images in Fig. 5 using the chroma chart of Fig. 1b.

Fig. 7. The skin-likelihood images of Fig. 6 automatically segmented according to the method outlined in Section 3 to extract the skin regions.

Fig. 8. Best-match poses of the frontal-view model face with faces in skin regions.

Fig. 9. Detected faces are enclosed in squares centered at the tip of the nose of the model face when matching faces in the images. If the cross-correlation between the model and a face in an image is larger than 0.5, the square is shown in white. Otherwise, the square is shown in gray scale, with gray values proportional to the correlation coefficient between the model and the face in the image.
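The correlation score behind Fig. 9 can be sketched as a standard normalized cross-correlation coefficient between the model face and an equally sized image patch. The function name is illustrative, and the paper may use a different normalization:

```python
import numpy as np

def ncc(model, patch):
    """Normalized cross-correlation coefficient between the model face
    and an equally sized gray scale patch; values lie in [-1, 1],
    with 1 meaning a perfect linear match."""
    m = model - model.mean()
    p = patch - patch.mean()
    denom = np.sqrt((m * m).sum() * (p * p).sum())
    return float((m * p).sum() / denom) if denom > 0 else 0.0
```

Under this scoring, a patch scoring above 0.5 would be marked with a white square and lower scores with proportionally darker squares, as the caption describes.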



For more information contact A. Goshtasby (agoshtas@cs.wright.edu).

Last modified: 6/8/98.