I recently blogged about creating a Mood Detector using Lobe, and it got me wondering what other options were available for face analysis. That curiosity led me to ramp up on Azure Cognitive Services, more specifically the Face Service, which has some really cool capabilities.
I used my trusty Raspberry Pi (with attached camera) and developed a Flask application using Python. Rather than using the Face client library for Python, I opted for the REST API so that the code is a little more portable.
I created a Flask app that does the following:
- Takes a picture
- Submits this picture to the REST API endpoint for the Face Service
- Returns the detected age, gender, hair colour and a list of potential emotions (with a confidence score for each) – the Face Service can detect and analyse multiple faces, so I hardcoded the app to return the results for the first face detected
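The emotion scores come back as a dictionary mapping emotion names to confidence values, so picking the most likely one is a one-liner. A quick sketch, using illustrative values rather than real API output:

```python
# Illustrative emotion scores in the shape the Face API returns them
# (these numbers are made up for the example).
emotions = {
    "anger": 0.0, "contempt": 0.001, "disgust": 0.0, "fear": 0.0,
    "happiness": 0.953, "neutral": 0.044, "sadness": 0.001, "surprise": 0.001,
}

# The key with the highest confidence score is the detected emotion.
top_emotion = max(emotions, key=emotions.get)
```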
An example of the app in action can be found below – the screenshot is of the results page, and as you can tell, there is a reason that I’m not a front-end dev! I was most impressed that the Face Service thinks I’m 8 years younger than I actually am 😊. It also correctly detected my emotion (smiling).

The code for this app can be found at – Blog-Samples/Face Analysis at main · brendankarl/Blog-Samples (github.com).
To run this you’ll need:
- A Raspberry Pi with attached camera (I used a Pi 4, but older models should work too)
- An Azure subscription with an Azure Cognitive Services resource provisioned

Simply copy the code from the GitHub repo, update the url and key variables, and execute the command below in a terminal (from the directory where the code is):
sudo python3 FaceAnalysis.py
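Before pointing the app at Azure, it can be handy to sanity-check the request you'll be sending. Here's a small sketch that builds the same URL, headers and query params the app uses – `build_face_request` is a hypothetical helper of my own, and the region and key are placeholders you'd swap for your resource's values:

```python
def build_face_request(key, region="uksouth"):
    """Assemble the URL, headers and query params for a Face API detect call."""
    url = f"https://{region}.api.cognitive.microsoft.com/face/v1.0/detect"
    headers = {
        "Ocp-Apim-Subscription-Key": key,
        "Content-Type": "application/octet-stream",  # raw image bytes go in the body
    }
    params = {
        "returnFaceId": "false",
        "returnFaceLandmarks": "false",
        "returnFaceAttributes": "age,gender,emotion,hair",
    }
    return url, headers, params

url, headers, params = build_face_request("YOUR-KEY")
```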
Below is the FaceAnalysis.py code for reference.
from flask import Flask, render_template
from picamera import PiCamera
from time import sleep
import os
import random
import requests

app = Flask(__name__)

@app.route('/')
def button():
    return render_template("button.html")  # Presents an HTML page with a button to take a picture

@app.route('/takepic')
def takepic():
    currentdir = os.getcwd()
    randomnumber = random.randint(1, 100)  # Random query-string value to stop the browser caching the image
    camera = PiCamera()
    camera.start_preview()
    sleep(2)  # Give the sensor a moment to adjust before capturing
    camera.capture(currentdir + "/static/image.jpg")  # Take a pic and store it in the static directory used by Flask
    camera.close()
    url = "https://uksouth.api.cognitive.microsoft.com/face/v1.0/detect"  # Replace with the Azure Cognitive Services endpoint for the Face API (depends on the region deployed to)
    key = ""  # Azure Cognitive Services key
    image_path = currentdir + "/static/image.jpg"
    with open(image_path, "rb") as f:
        image_data = f.read()
    headers = {"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/octet-stream"}
    params = {
        "returnFaceId": "false",
        "returnFaceLandmarks": "false",
        "returnFaceAttributes": "age,gender,headPose,smile,facialHair,glasses,emotion,hair,makeup,occlusion,accessories,blur,exposure,noise",
    }
    r = requests.post(url, headers=headers, params=params, data=image_data)  # Submit to Azure Cognitive Services Face API
    face = r.json()[0]["faceAttributes"]  # Attributes of the first face detected
    age = face["age"]
    gender = face["gender"]
    haircolor = face["hair"]["hairColor"][0]["color"]  # Most confident hair colour
    emotions = face["emotion"]  # Dict of emotions with a confidence score for each
    return render_template("FaceAnalysis.html", age=age, gender=gender, haircolor=haircolor, emotions=emotions, number=randomnumber)  # Pass the results to FaceAnalysis.html, which presents the output and the pic taken to the user

if __name__ == "__main__":
    app.run(port=80, host="0.0.0.0")
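One thing the code above doesn't handle is a photo with no detectable face: the Face API returns an empty list, and the `[0]` lookup raises an IndexError. A small guard along these lines would make the app more robust – `first_face` is a hypothetical helper, and the sample response is illustrative rather than real API output:

```python
def first_face(faces):
    """Return the first detected face's attributes, or None if no face was found.

    `faces` is the parsed JSON list returned by the Face API detect endpoint.
    """
    if not faces:
        return None
    return faces[0]["faceAttributes"]

# Illustrative (not real) detect response with one face.
sample = [{"faceAttributes": {"age": 30.0, "gender": "male"}}]
attrs = first_face(sample)
```

In the Flask route you'd call `first_face(r.json())` and render an apologetic "no face found" page when it returns None, instead of crashing with a 500.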