Emotion Detection Walkthrough with Microsoft

In this article, we focus on facial emotion recognition with Microsoft. This tool allows computers to identify people's emotions in photos, returning a confidence score across a set of basic emotions. We will explore the Microsoft Azure Face API with the help of a real practical example, along with a short review of the prices and limitations of the leading emotion detection APIs.

Emotion detection is an extremely promising area, and its development is closely connected to advances in computer vision and artificial intelligence. There are various applications and commercial solutions for emotion detection: from advertising, where it measures the attention and engagement generated by marketing campaigns, to the security sector, where emotion recognition can let computer systems respond automatically to people with suspicious facial expressions. As the use cases of this young discipline keep growing and reaching more corners of the corporate sector, many startups, as well as the leading companies in the field, are contributing to emotion detection technology.

How emotion detection works

There are several strategies for emotion detection; the main ones work from facial expressions in images and videos, from text, and from speech. Our focus here is facial emotion recognition from images.

Facial emotion detection algorithms are quite involved in practice. The groundwork for them is face recognition. In simple terms, the process starts with collecting data and extracting the features that matter for emotion recognition: the eyebrows, eyes, nose, lips, facial muscles, and so on. Then comes training a model so that it can recognize and classify specific patterns in those features. After that, the model can finally classify new data, which in practice means detecting different emotions.

There are numerous complicated details, but luckily there are services that have already implemented these algorithms. You simply upload an image, and the program analyzes the relationships between key facial points, picking up even the microexpressions on a person's face. By examining the positions of those points, the program detects emotions, choosing from a list of basic ones.
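
To make the pipeline described above concrete, here is a conceptual sketch with scikit-learn. This is not how any of the commercial services work internally; the feature vectors and labels are purely hypothetical placeholders standing in for measurements derived from facial key points.

# A conceptual sketch of the classic pipeline: landmark-derived features
# go into an off-the-shelf classifier. The numbers below are hypothetical.
import numpy as np
from sklearn.svm import SVC

# Each row: numeric features derived from facial key points
# (e.g., distances between eyebrows, eyes, and lips).
train_features = np.array([
    [0.42, 0.13, 0.77],
    [0.10, 0.55, 0.31],
    [0.48, 0.11, 0.80],
])
train_labels = ["happiness", "sadness", "happiness"]

# Train a simple classifier to recognize patterns in the features.
model = SVC().fit(train_features, train_labels)

# Classify features extracted from a new, unseen face.
new_face = np.array([[0.45, 0.12, 0.79]])
print(model.predict(new_face))  # e.g., ['happiness']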

Let’s see how it works in practice.


Example

Now we get to the most exciting part: a real working example. We will work with the Microsoft Azure Face API and its Python SDK and examine several images with different emotions. Let's go step by step.


1. Microsoft Azure account registration

First off, we have to register a new Microsoft Azure account. Go to the Microsoft Azure page and click the “Start Free” button.

Screenshot Azure No 1

In the next window, choose your Microsoft account, or press “Create one!” if you don’t have one yet, and complete the registration.

After that, you will be redirected to create an Azure account, which requires phone number and credit card verification. Now you can open the Azure portal, where you should see the following page.

Screenshot Azure No 2

2. Prices and limitations of Emotion APIs

As we already mentioned, numerous companies have stepped into developing emotion recognition technologies. At this point, we present a short review of the industry leaders, the most popular emotion detection APIs: Azure, Google, and Amazon. We will focus on the prices and limitations of each API.


Microsoft Azure Face API

The API is part of the Microsoft Cognitive Services platform. It detects, verifies, and identifies faces in images and can also recognize the emotions on those faces. The response is JSON with a confidence score for each face across the seven core emotions (anger, contempt, disgust, fear, happiness, sadness, and surprise) plus a neutral state.

There are some things you should consider while using Azure. The higher the quality of the image, the more precise the recognition. The photos you upload can be up to 6 MB in size, and faces are detectable when they measure from 36x36 to 4096x4096 pixels. The API can return up to 64 faces per image. More details here.
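
If you want to catch obvious violations of these limits before uploading, a minimal local check with Pillow might look like this (the file name is hypothetical):

import os
from PIL import Image

MAX_BYTES = 6 * 1024 * 1024  # Azure's 6 MB upload limit
MIN_FACE_SIDE = 36           # faces smaller than 36x36 px are not detectable

def check_image(path):
    # Reject files over the 6 MB limit.
    if os.path.getsize(path) > MAX_BYTES:
        return False
    # An image smaller than the minimum detectable face size is certainly useless.
    width, height = Image.open(path).size
    return width >= MIN_FACE_SIDE and height >= MIN_FACE_SIDE

print(check_image("my_photo.png"))  # hypothetical local file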


There is a free tier, which supports up to 20 transactions per minute and up to 30,000 per month.

For more transactions, you should use the standard Face API tier. It allows 10 transactions per second, and its price varies with the number of transactions, starting at $1.00 per 1,000 transactions and decreasing to $0.40 per 1,000 transactions. For example, 50,000 transactions in a month at the $1.00 rate would cost $50.

More detailed information is available here.


Google Vision API

This API is part of Google’s Cloud Platform. For image sentiment analysis from faces, it detects the likelihood of joy, sorrow, anger, and surprise.
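
For a feel of the call, here is a minimal sketch with the official google-cloud-vision Python client (the client API has changed across versions; credentials are assumed to be configured via GOOGLE_APPLICATION_CREDENTIALS, and the image URL is a placeholder):

from google.cloud import vision

# Assumes GOOGLE_APPLICATION_CREDENTIALS points to a valid service account key.
client = vision.ImageAnnotatorClient()

# The image URL is hypothetical.
image = vision.Image()
image.source.image_uri = "https://example.com/face.png"

response = client.face_detection(image=image)
for face in response.face_annotations:
    # Each likelihood is an enum such as VERY_UNLIKELY ... VERY_LIKELY.
    print(face.joy_likelihood, face.sorrow_likelihood,
          face.anger_likelihood, face.surprise_likelihood)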

The limitations of this API are an image file size of up to 20 MB, a JSON request size of up to 16 MB, and up to 16 images per request. There are also quotas of 600 requests per minute and 20,000,000 images per feature per month.

For additional information on quotas and limitations, visit the following page.


The Google API gives you the first 1,000 units per month for free. The next bracket, 1,001 to 5,000,000 units per month, costs $1.50 per 1,000 units. Finally, if you use from 5,000,001 to 20,000,000 units per month, facial detection along with emotional state detection will cost you $0.60 per 1,000 units.

More information here.


Amazon Rekognition

In the context of face sentiment analysis, Amazon Rekognition, which is part of Amazon’s AWS ecosystem, is close to the Microsoft Azure Face API. It offers deep-learning-based image recognition.
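
As a taste of the API, a minimal sketch with boto3 might look like this (AWS credentials are assumed to be configured; the bucket and file names are hypothetical):

import boto3

# Assumes AWS credentials are configured; bucket and file names are hypothetical.
client = boto3.client("rekognition", region_name="us-east-1")

response = client.detect_faces(
    Image={"S3Object": {"Bucket": "my-bucket", "Name": "face.png"}},
    Attributes=["ALL"],  # "ALL" includes the Emotions attribute
)
for detail in response["FaceDetails"]:
    for emotion in detail["Emotions"]:
        print(emotion["Type"], emotion["Confidence"])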


The AWS API limits the maximum image size to 15 MB. A face can be detected if it is at least 40x40 pixels in a 1920x1080 image; for bigger images, the minimum face size is proportionally higher. The minimum resolution for the image itself is 80 pixels in both height and width. All images should be in JPEG or PNG format. You can store up to 20 million faces in one face collection, and the API returns up to 4,096 matching faces.

More about AWS limits here.


There is a free tier that lets you analyze 5,000 images per month and store up to 1,000 pieces of face metadata each month for the first year.

The prices are similar to Azure’s and range from $1.00 per 1,000 images for the first 1,000,000 images processed per month down to $0.40 per 1,000 images once you process over 100,000,000 images per month in the US East region.

You can find your region and prices here.


3. Sending images with different emotions through the Python SDK

Azure configuration

Click the “Create a resource” button and find “Face” in the search.

Screenshot Azure No 3

Click “Face”, then in the following window click “Create” and configure the service.

Screenshot Azure No 4


Wait a little while the service is being created, and it will appear on your dashboard.

Screenshot Azure No 5 - Dashboard


Click on the service to open the next window. Then go to “Quick start” and choose “Keys”.

Screenshot Azure No 6


Copy “Key 1” or “Key 2”; you will need one of them later. You can also open “Keys” at any time and copy them.

Screenshot Azure No 7


Installing SDK

System requirements:

  • Ubuntu 16.04.3 LTS (GNU/Linux 4.4.0-87-generic x86_64)
  • Python 3.5.2 (default, Nov 23 2017, 16:37:01)


Install the Microsoft Face API SDK using pip:

pip3 install cognitive_face==1.4.1

Verifying that the SDK works correctly

Create the emotion_detect.py file on your PC.


# Import the Face API SDK.
import cognitive_face as Face_API

# Set the key you generated in the Azure service in the previous steps.
Face_API.Key.set("INSERT_YOUR_KEY_HERE")

# Set the service URL defined by your Azure service.
# Because we created the service in the "West Central US" region, the URL is
# "https://westcentralus.api.cognitive.microsoft.com/face/v1.0".
# You can find your correct URL in section two: http://joxi.ru/ZrJNPlkSwngkJr.png
Face_API.BaseUrl.set("https://westcentralus.api.cognitive.microsoft.com/face/v1.0")

# Send the image to the Face API.
# The image can be a URL, a file path, or a file-like object representing an image.
img_url = "http://joxi.ru/MAjGL4BhjkGoKr.png"
response = Face_API.face.detect(image=img_url)

# Print the response.
print(response)


Now run python3 emotion_detect.py. You should get a response like the following:

[{'faceRectangle': {'top': 96, 'left': 212, 'width': 145, 'height': 145}, 'faceId': '1e8728cf-3dae-40b9-a18b-4db9973a7506'}]


Note that in your case the “faceId” will be different.


Emotion Detection

As we already mentioned, the Face API can recognize the following emotions: neutral, anger, contempt, disgust, fear, happiness, sadness, and surprise. Let’s modify our detect request.

The attributes must be passed as a comma-separated string, like "age,emotion". Supported attributes include age, gender, headPose, smile, facialHair, glasses, emotion, makeup, accessories, occlusion, blur, exposure, and noise. You can read more about attributes here.
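
For example, requesting several attributes in one call looks like this (the extra attributes beyond emotion are just for illustration):

response = Face_API.face.detect(image=img_url, attributes="age,gender,emotion")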


Let’s begin with the following image...

Image of a man's face

Request:

img_url = "http://joxi.ru/MAjGL4BhjkGoKr.png"
response = Face_API.face.detect(image=img_url, attributes="emotion")


The response should look like this:

[{'faceRectangle': {'left': 212, 'top': 96, 'height': 145, 'width': 145}, 'faceAttributes': {'emotion': {'happiness': 1.0, 'contempt': 0.0, 'surprise': 0.0, 'disgust': 0.0, 'fear': 0.0, 'anger': 0.0, 'neutral': 0.0, 'sadness': 0.0}}, 'faceId': '8a7bf136-6d74-4ea7-b84e-78b6b6c75131'}]


As we can see, the emotion is recognized as happiness, which is correct for this image. Now we can try different emotions.
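
Since the parsed response is ordinary Python data (a list of dictionaries), individual scores are easy to read:

# Read the happiness score of the first detected face.
happiness = response[0]["faceAttributes"]["emotion"]["happiness"]
print(happiness)  # 1.0 for the image above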


Surprise emotion

Image of a surprised face

Request:

img_url = "http://joxi.ru/l2Z1POxUz73YWm.png"
response = Face_API.face.detect(image=img_url, attributes="emotion")


Response:

[{'faceId': '6c13d54c-6daf-4ac9-add5-3f3f253e72c4', 'faceAttributes': {'emotion': {'neutral': 0.0, 'happiness': 0.0, 'anger': 0.0, 'fear': 0.017, 'surprise': 0.983, 'disgust': 0.0, 'sadness': 0.0, 'contempt': 0.0}}, 'faceRectangle': {'height': 254, 'top': 135, 'width': 254, 'left': 159}}]


Neutral emotion

This time we will examine an image with several faces.

Test image of a couple with neutral emotions

Request:

img_url = "http://joxi.ru/BA0MXPGSMpzY1r.png"
response = Face_API.face.detect(image=img_url, attributes="emotion")


Response:

[{'faceId': '139170f0-83c5-4dc1-b1b0-37a69c102f4b', 'faceAttributes': {'emotion': {'fear': 0.0, 'anger': 0.0, 'disgust': 0.0, 'contempt': 0.0, 'happiness': 0.001, 'surprise': 0.0, 'neutral': 0.989, 'sadness': 0.01}}, 'faceRectangle': {'top': 139, 'left': 273, 'height': 110, 'width': 110}}, {'faceId': '99c22f4f-dde9-4979-bf31-2ad093b2828d', 'faceAttributes': {'emotion': {'fear': 0.0, 'anger': 0.0, 'disgust': 0.0, 'contempt': 0.001, 'happiness': 0.0, 'surprise': 0.0, 'neutral': 0.955, 'sadness': 0.044}}, 'faceRectangle': {'top': 180, 'left': 170, 'height': 103, 'width': 103}}]


Anger emotion

Test image of an angry person

Request:

img_url = "http://joxi.ru/Vm6v8P4F4vk9nm.png"
response = Face_API.face.detect(image=img_url, attributes="emotion")


Response:

[{'faceId': '01ee5315-4dbb-4e13-97c7-2f80cb971c09', 'faceAttributes': {'emotion': {'contempt': 0.0, 'happiness': 0.0, 'neutral': 0.0, 'anger': 0.762, 'surprise': 0.125, 'sadness': 0.0, 'fear': 0.113, 'disgust': 0.0}}, 'faceRectangle': {'left': 72, 'width': 192, 'top': 119, 'height': 192}}]


Now you can try it yourself with different emotions, attributes, and images.
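
For example, continuing the same script, a small loop can send the request for several images at once (the URLs below are the ones used in this article):

# Run the same emotion request over several images and print the raw scores.
image_urls = [
    "http://joxi.ru/MAjGL4BhjkGoKr.png",
    "http://joxi.ru/l2Z1POxUz73YWm.png",
    "http://joxi.ru/BA0MXPGSMpzY1r.png",
]
for url in image_urls:
    for face in Face_API.face.detect(image=url, attributes="emotion"):
        print(url, face["faceAttributes"]["emotion"])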


4. Drawing Boxes

As you can see, the Face API also returns a “faceRectangle”, which corresponds to the area of the face in the image. Let’s try to draw this area.

Install the necessary packages:

pip3 install Pillow==5.2.0 requests==2.18.4


Replace the img_url defined in emotion_detect.py with http://joxi.ru/BA0MXPGSMpzY1r.png and add the following code after the previous part of the code.


import requests
from io import BytesIO
from PIL import Image, ImageDraw, ImageFont

# Get the best-matching emotion and convert it
# to a string like "emotion_name: score".
def get_best_emotion(resp):
    emotions = resp["faceAttributes"]["emotion"]
    b_key, b_value = "", 0
    for key, value in emotions.items():
        if value > b_value:
            b_key, b_value = key, value
    return b_key + ": " + str(b_value)

# Get the face area from the response "faceRectangle".
def get_face_box(resp):
    values = resp["faceRectangle"]
    left = values["left"]
    top = values["top"]
    right = left + values["width"]
    bottom = top + values["height"]
    return ((left, top), (right, bottom))

# Download the image by its URL.
image_data = requests.get(img_url)

# Open and identify the downloaded image file.
image = Image.open(BytesIO(image_data.content))

# Create an object that can be used to draw on the given image.
draw = ImageDraw.Draw(image)

# Load a TrueType or OpenType font from a file or file-like object
# and create a font object (the .ttf file must be available locally).
fnt = ImageFont.truetype("Ubuntu-B.ttf", size=13)

# Iterate through all responses and draw the box and the text.
for resp in response:
    # Get the best emotion.
    emotion = get_best_emotion(resp)
    # Get the face box.
    face_box = get_face_box(resp)
    # Draw the text just above the box.
    draw.text((face_box[0][0], face_box[0][1] - 12), emotion, fill="purple", font=fnt)
    # Draw the box on the image.
    draw.rectangle(face_box, outline="purple")

# Now you can display the final image.
image.show()

# Or save it.
image.save("output.png", "PNG")
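
As a side note, the get_best_emotion helper above can be written more compactly with the built-in max(); this variant is equivalent:

# Equivalent, more compact version of get_best_emotion.
def get_best_emotion(resp):
    emotions = resp["faceAttributes"]["emotion"]
    best = max(emotions, key=emotions.get)
    return best + ": " + str(emotions[best])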


Run the full code and you should get the following output:

Test image of a couple with neutral emotions

The full code can be found here.


Conclusion

Emotion detection is a thriving and actively evolving area. In recent years, many companies have been working on emotion recognition technologies, and, most importantly, many of them have achieved significant results. However, there are still many aspects left to improve.

In this tutorial, we saw in practice how to use Microsoft Azure for emotion detection. We experimented with images showing various emotions and different numbers of people, and the results certainly look promising. Now, it’s your turn!

Any comments or questions? Just log in or register and leave a comment here.
