Face detection with OpenCV in Python

Face detection

Face detection, as the name suggests, refers to detecting faces in images or videos and is one of the most common computer vision tasks. Face detection is a precursor to many advanced tasks, such as emotion detection, interest detection, and surprise detection. It is also the first step in developing face recognition systems.

Various algorithms have been developed for face detection. However, for this project, we will use the Viola-Jones algorithm (https://bit.ly/3mVUZYe) for object detection. It is a very simple algorithm that can detect objects such as faces in images with very high accuracy.

What is OpenCV?

OpenCV (https://opencv.org/) stands for Open Source Computer Vision Library and is one of the oldest yet most frequently used computer vision libraries. OpenCV was initially developed in C++. However, you will be using the Python wrapper for OpenCV in this project. The good thing about the Python wrapper for OpenCV is that it comes with trained instances of the Viola-Jones algorithm for detecting faces, lips, smiles, bodies, etc., so you do not have to implement the Viola-Jones algorithm yourself.

To install OpenCV for Python, execute the following script on your command terminal: 

pip install opencv-python

Next, you will see how you can detect faces, eyes, and smiles using the Viola-Jones algorithm implemented in the OpenCV wrapper for Python. So, let's begin without much ado.

Installing the Libraries and Importing Images

Let’s import the required libraries first. 

Script 1: 

# pip install opencv-python
import cv2

import matplotlib.pyplot as plt
%matplotlib inline

For detecting faces, eyes, and lips, we will be using two images. One image contains a single person, and the other contains multiple persons. Both images are available in the face_images folder inside the Datasets directory in the GitHub and SharePoint repositories.

Let's import both images first. To do so, you can use the imread() function of the OpenCV library and pass it the image path. Passing 0 as the second argument reads the image in grayscale mode.

Script 2:

image1 = cv2.imread(r"E:/Datasets/face_images/image1.jpg", 0)
image2 = cv2.imread(r"E:/Datasets/face_images/image2.jpg", 0)

The following script displays image1 in grayscale form. 

Script 3:

plt.imshow(image1, cmap="gray")

Output:

Face detection

Let’s now try to detect the face in the above image.

How to Detect Whole Faces?

You will be using the Viola-Jones algorithm for object detection in order to detect faces, eyes, and lips. The trained algorithms are installed with OpenCV. To use one of the XML files containing an algorithm, you need to pass its path to the CascadeClassifier() method of OpenCV. To find the path of the XML files, execute the following script.

Script 4: 

cv2.data.haarcascades

In the output, you will see a path to the haarcascade files for the Viola-Jones algorithm. 

Output:

C:\ProgramData\Anaconda3\Lib\site-packages\cv2\data

If you go to the path that contains your haar cascade files, you should see the following files and directories: 

Face detection files

For face detection, you will initially be using the "haarcascade_frontalface_default.xml" file. To import the corresponding algorithm contained in the file, execute the following script.

Script 5:

face_detector = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

Next, you need to define a method that accepts an image. To detect a face inside that image, you call the detectMultiScale() method of the face detector object that you initialized in Script 5. Once a face is detected, you create a rectangle around it. To do so, you need the x and y coordinates of the face area, along with its width and height. Using that information, you can draw a rectangle by calling the rectangle() method of OpenCV. Finally, the function returns the image with a rectangle around the detected face. The detect_face() method in the following script performs these tasks.

Script 6:

def detect_face(image):

    face_image = image.copy()

    face_rectangle = face_detector.detectMultiScale(face_image)

    for (x, y, width, height) in face_rectangle:
        cv2.rectangle(face_image, (x, y), (x + width, y + height), (255, 255, 255), 8)

    return face_image

To detect a face, simply pass an image to the detect_face() method that you defined in Script 6. The following script passes image1 to the detect_face() method.

Script 7: 

detection_result = detect_face(image1)

Finally, to plot the image with the detected face, pass the image returned by the detect_face() method to the imshow() method of Matplotlib's pyplot module, as shown below.

Script 8: 

plt.imshow(detection_result, cmap="gray")

In the following output, you can see that the face has been detected successfully in the image.

Output:

Face detection success

Let’s now try to detect faces from image2, which contains faces of nine persons. Execute the following script:

Script 9:

detection_result = detect_face(image2)
plt.imshow(detection_result, cmap="gray")

The output below shows that out of nine persons in the image, the faces of six persons are detected successfully.

Output:

Face detection group

OpenCV contains other classifiers for face detection as well. For instance, in the following script, we redefine the detect_face() method to use the "haarcascade_frontalface_alt.xml" classifier for face detection. The script then tries to detect faces in image2.

Script 10:

face_detector = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_alt.xml')

def detect_face(image):

    face_image = image.copy()

    face_rectangle = face_detector.detectMultiScale(face_image)

    for (x, y, width, height) in face_rectangle:
        cv2.rectangle(face_image, (x, y), (x + width, y + height), (255, 255, 255), 8)

    return face_image

detection_result = detect_face(image2)
plt.imshow(detection_result, cmap="gray")

The output below shows that 7 out of 9 faces are now detected, which means that the "haarcascade_frontalface_alt" classifier performed better than the "haarcascade_frontalface_default" classifier.

Output:

Face detection group

Finally, let's use another face detection classifier, i.e., "haarcascade_frontalface_alt_tree", to see how many faces it can detect in image2.

Script 11:

face_detector = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_alt_tree.xml')

def detect_face(image):

    face_image = image.copy()

    face_rectangle = face_detector.detectMultiScale(face_image)

    for (x, y, width, height) in face_rectangle:
        cv2.rectangle(face_image, (x, y), (x + width, y + height), (255, 255, 255), 8)

    return face_image

detection_result = detect_face(image2)
plt.imshow(detection_result, cmap="gray")

The output shows that "haarcascade_frontalface_alt_tree" detects only three faces with default settings.

Output:

Face detection group

How to Detect Eyes?

In addition to detecting faces, you can detect eyes in a face as well. To do so, you need the haarcascade_eye classifier. The following script creates an object of the haarcascade_eye classifier.

Script 12: 

eye_detector = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_eye.xml')

And the following script defines the detect_eye() method, which detects eyes in a face and then plots rectangles around them.

Script 13: 

def detect_eye(image):

    face_image = image.copy()

    face_rectangle = eye_detector.detectMultiScale(face_image)

    for (x, y, width, height) in face_rectangle:
        cv2.rectangle(face_image, (x, y), (x + width, y + height), (255, 255, 255), 8)

    return face_image

Finally, the following script passes image1 to the detect_eye() method. 

Script 14: 

detection_result = detect_eye(image1)

The image returned by the detect_eye() method is plotted via the following script.

Script 15: 

plt.imshow(detection_result, cmap="gray")

From the output below, you can see that the eyes have been successfully detected in image1.

Output:

eye detection

The following script tries to detect eyes inside the faces in image2. 

Script 16: 

detection_result = detect_eye(image2)
plt.imshow(detection_result, cmap="gray")

The output below shows that in addition to detecting eyes, some other portions of the face have also been wrongly detected as eyes. 

Output:

group eye detection

To avoid detecting extra objects in addition to the desired ones, you need to update the values of the scaleFactor and minNeighbors attributes of the detectMultiScale() method of the various Haar cascade classifier objects. For instance, to avoid detecting extra eyes in image2, you can update the detectMultiScale() call of the eye_detector object of the haarcascade_eye classifier, as follows. Here, we set the value of scaleFactor to 1.2 and the value of minNeighbors to 4.

Script 17: 

def detect_eye(image):

    face_image = image.copy()

    face_rectangle = eye_detector.detectMultiScale(face_image, scaleFactor=1.2, minNeighbors=4)

    for (x, y, width, height) in face_rectangle:
        cv2.rectangle(face_image, (x, y), (x + width, y + height), (255, 255, 255), 8)

    return face_image

Basically, scaleFactor is used to create your scale pyramid. Your model has a fixed detection window size specified during training, which is visible in the XML file. Hence, if a face of this size is present in the image, it is detected. By rescaling the input image, however, a larger face can be resized to a smaller one, making it detectable by the algorithm.

The minNeighbors attribute specifies the number of neighbors each candidate rectangle should have in order to be retained. This parameter directly affects the quality of the detected faces: higher values result in fewer detections but with higher quality.
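To make the scale pyramid concrete, here is a small back-of-the-envelope sketch. It assumes a 24x24 base detection window (the size used by the default frontal-face cascade) and shows how each pyramid step with scaleFactor = 1.2 effectively searches for faces about 20 percent larger than the previous step:

```python
# Effective face sizes searched at each pyramid step, assuming a
# 24x24 base window (an assumption taken from the default
# frontal-face cascade) and scaleFactor = 1.2.
base_window = 24
scale_factor = 1.2

sizes = [round(base_window * scale_factor ** step) for step in range(6)]
print(sizes)  # [24, 29, 35, 41, 50, 60]
```

A smaller scaleFactor searches more sizes (slower but more thorough); a larger one skips sizes (faster but may miss faces between steps).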

There are no hard and fast rules for setting the values of the scaleFactor and minNeighbors attributes. You can play around with different values and select the ones that give you the best object detection results. Let's now again try to detect eyes in image2 using the modified values of the scaleFactor and minNeighbors attributes.

Script 18:

detection_result = detect_eye(image2)
plt.imshow(detection_result, cmap="gray")

The output shows that though there are still a few extra detections, the results are better than before.

Output:

group eye detection

How to Detect Smiles?

You can also detect a smile within an image using OpenCV's implementation of the Viola-Jones algorithm for smile detection. To do so, you can use the haarcascade_smile classifier, as shown in the following script.

Script 19: 

smile_detector = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_smile.xml')

Next, we define a method detect_smile(), which detects smiles in the input image and draws rectangles around them.

Script 20: 

def detect_smile(image):

    face_image = image.copy()

    face_rectangle = smile_detector.detectMultiScale(face_image)

    for (x, y, width, height) in face_rectangle:
        cv2.rectangle(face_image, (x, y), (x + width, y + height), (255, 255, 255), 8)

    return face_image

The following script passes image1 to the detect_smile() method and plots the result.

Script 21:

detection_result = detect_smile(image1)
plt.imshow(detection_result, cmap="gray")

The output below shows that we have plenty of extra detections. Hence, we need to adjust the values of the scaleFactor and minNeighbors attributes.

Output: 

smile detection

Modify the detect_smile() method as follows: 

Script 22:

def detect_smile(image):

    face_image = image.copy()

    face_rectangle = smile_detector.detectMultiScale(face_image, scaleFactor=2.0, minNeighbors=20)

    for (x, y, width, height) in face_rectangle:
        cv2.rectangle(face_image, (x, y), (x + width, y + height), (255, 255, 255), 8)

    return face_image

Now, try to detect the smile in image1 using the following script: 

Script 23: 

detection_result = detect_smile(image1)
plt.imshow(detection_result, cmap="gray")

You will get the following output. You can see that all the extra detections have now been removed, and only the lips are detected for a smile.

Output:

single image smile detection

Finally, let’s try to detect the lips in image2. Execute the following script:

Script 24:

detection_result = detect_smile(image2)
plt.imshow(detection_result, cmap="gray")

The output shows that the lips of most of the people are detected.

Output:

group image smile detection

Face Detection from Live Videos

Since videos are essentially sequences of image frames, you can use the Viola-Jones classifier to detect faces in videos as well. Let's first define the detect_face() method, which uses the "haarcascade_frontalface_default" face detection classifier to detect faces and draw a rectangle around each face.

Script 25: 

face_detector = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

def detect_face(image):

    face_image = image.copy()

    face_rectangle = face_detector.detectMultiScale(face_image)

    for (x, y, width, height) in face_rectangle:
        cv2.rectangle(face_image, (x, y), (x + width, y + height), (255, 255, 255), 8)

    return face_image

Next, to capture a video from your system camera, you can use the VideoCapture object of OpenCV and pass it 0 as a parameter, which selects the default camera. Then, to read the current frame, call the read() method of the VideoCapture object. The captured frame is passed to the detect_face() method, and the detected face, bounded by a rectangle, is displayed in the output. This process continues until you press the key "q".

Script 26: 

live_cam = cv2.VideoCapture(0)

while True:
    ret, current_frame = live_cam.read()

    current_frame = detect_face(current_frame)

    cv2.imshow("Face detected", current_frame)

    key = cv2.waitKey(50)
    if key == ord("q"):
        break

live_cam.release()
cv2.destroyAllWindows()

The output shows faces being detected in the live video stream.
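A closing note on the key handling above: waitKey(50) waits up to 50 milliseconds for a key press and returns -1 when no key arrives, while ord("q") is simply the character code of the letter q. A tiny sketch of the comparison logic, outside of any camera loop:

```python
# ord() gives the character code that cv2.waitKey() reports for a key press.
print(ord("q"))  # 113

# cv2.waitKey() returns -1 when no key is pressed within the timeout,
# so the loop only breaks once the reported code matches ord("q").
no_key = -1
assert no_key != ord("q")

# On some platforms waitKey() carries extra high bits, so masking the
# result with 0xFF before comparing is a common defensive pattern.
assert (113 & 0xFF) == ord("q")
```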