Face Detection and Analysis using Azure Cognitive Services and a Raspberry Pi

I recently blogged about creating a Mood Detector using Lobe, and it got me wondering what other options were available for face analysis. That led me to embark on a journey of ramping up on Azure Cognitive Services – more specifically the Face Service, which has some really cool capabilities.

I used my trusty Raspberry Pi (with attached camera) and developed a Flask application in Python. Rather than using the Face client library for Python, however, I opted for the REST API so that the code is a little more portable.

I created a Flask app that does the following:

  • Takes a picture
  • Submits this picture to the REST API endpoint for the Face Service
  • Returns the detected age, gender, hair colour and a list of potential emotions (with a score for each) – the Face Service can detect/analyse multiple faces, so I hardcoded it to return the results from the first face detected

An example of the app in action can be found below – the screenshot is of the results page and, as you can see, there is a reason that I’m not a front-end dev! I was most impressed by the fact that the Face Service thinks I’m 8 years younger than I actually am 😊. It also correctly detected my emotion (smiling).

The code for this app can be found at – Blog-Samples/Face Analysis at main · brendankarl/Blog-Samples (github.com).

To run this you’ll need:

  • A Raspberry Pi with attached camera (I used a Pi 4, but older models should work too)
  • An Azure subscription with an Azure Cognitive Services resource provisioned
  • Simply copy the code from the GitHub repo, update the url and key variables, and execute the command below in a terminal (from the directory where the code is)
sudo python3 FaceAnalysis.py

Below is the FaceAnalysis.py code for reference.

from flask import Flask, render_template
from picamera import PiCamera
from time import sleep
import os
import random
import requests
import json
app = Flask(__name__)
@app.route('/')
def button():
    return render_template("button.html") # Presents a HTML page with a button to take a picture
@app.route('/takepic')
def takepic():
    currentdir = os.getcwd()
    randomnumber = random.randint(1,100) # A random number is created for a query string used when presenting the picture taken, this is to avoid web browser caching of the image.
    camera = PiCamera()
    camera.start_preview()
    sleep(2)
    camera.capture(str(currentdir) + "/static/image.jpg") # Take a pic and store in the static directory used by Flask
    camera.close()
    url = "https://uksouth.api.cognitive.microsoft.com/face/v1.0/detect" # Replace with the Azure Cognitive Services endpoint for the Face API (depends on the region deployed to)
    key = "" # Azure Cogntivie Services key
    image_path = str(currentdir) + "/static/image.jpg"
    image_data = open(image_path, "rb").read()
    headers = {"Ocp-Apim-Subscription-Key" : key,'Content-Type': 'application/octet-stream'}
    params = {
    'returnFaceId': 'false',
    'returnFaceLandmarks': 'false',
    'returnFaceAttributes': 'age,gender,headPose,smile,facialHair,glasses,emotion,hair,makeup,occlusion,accessories,blur,exposure,noise',
    }
    r = requests.post(url,headers = headers,params = params, data=image_data) # Submit to Azure Cognitive Services Face API
    attributes = r.json()[0]["faceAttributes"] # Attributes of the first face detected
    age = attributes["age"] # Age of the first face
    gender = attributes["gender"] # Gender of the first face
    haircolor = attributes["hair"]["hairColor"][0]["color"] # Dominant hair color of the first face
    emotions = attributes["emotion"] # Emotions of the first face
    return render_template("FaceAnalysis.html",age=age,gender=gender,haircolor=haircolor,emotions=emotions,number=randomnumber) # Pass the results above to FaceAnalysis.html which presents the output and the pic taken to the user
if __name__ == "__main__":
    app.run(port=80,host='0.0.0.0')
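One thing worth noting: the emotions value returned above is a dictionary of emotion names to confidence scores, and the Face API response is a list with one entry per detected face. If you wanted the single most likely emotion, or results for every face rather than just the first, a rough sketch along these lines would do it (this isn't in the repo – it assumes r is the requests response from the detect call above):

faces = r.json() # One entry per face detected
for i, face in enumerate(faces):
    attributes = face["faceAttributes"]
    emotions = attributes["emotion"] # e.g. {"happiness": 0.98, "neutral": 0.01, ...}
    top_emotion = max(emotions, key=emotions.get) # Emotion with the highest confidence score
    print("Face {}: age {}, {}, most likely emotion {} ({:.2f})".format(
        i, attributes["age"], attributes["gender"], top_emotion, emotions[top_emotion]))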

Creating a Mood Detector using Lobe and a Raspberry Pi

I’ve recently been experimenting with Lobe, a remarkable app that democratizes AI by letting you build a Machine Learning (ML) model in less than ten minutes – the beauty is that it requires no ML or coding experience. You can find out more about it at Lobe Tour | Machine Learning Made Easy.

I’ve always been really interested in self-improvement and understanding more about myself. One aspect that really intrigues me is my mood throughout the workday – it can swing from elation to despair, and I’ve never quite figured out what the key drivers are (although I do have some ideas).

My love of overcomplicating the simple led to me developing an application to record my mood throughout the day using my Raspberry Pi 4 and its camera. The plan was for Lobe to analyse my mood from pictures captured by the Pi camera.

The Pi and its camera were already sat on my desk staring at me, so perfectly placed.

I wanted to be able to take a picture of myself using the Pi, have Lobe recognise my mood and log it along with the date/time; later I could analyse this data for patterns, correlating it with my work calendar for additional insight. I wanted to know – is it just me having a case of the Mondays, or are there specific times of the day, activities or projects that drive my mood?

To get started I headed over to www.lobe.ai, downloaded the Windows app (it’s also available for Mac) and used it to take some pictures of me in two moods (positive = thumb up / negative = thumb down). I took the pictures using the webcam attached to my Windows 10 device, then tagged the images and let Lobe work its magic on training an ML model.

I then selected Use and was able to evaluate the model in real time (with a surprising level of accuracy!). Once I was happy with everything, I exported the model as TensorFlow Lite – the best option for a Raspberry Pi.

I then copied the TensorFlow Lite model (which is basically a folder with a bunch of files within) to my Raspberry Pi. The next step was to install Lobe for Python on the Pi by running the following.

wget https://raw.githubusercontent.com/lobe/lobe-python/master/scripts/lobe-rpi-install.sh
sudo bash lobe-rpi-install.sh

Now everything was up and running, I used the sample Python script available here to test Lobe with the model I had just created, using some sample images I had. This worked, so I moved on to creating a Python-based web application using the Flask framework.
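For reference, that standalone test boils down to just a few lines – a minimal sketch, assuming the exported model folder is named “Mood TFLite” and a sample image called test.jpg sits in the same directory:

from lobe import ImageModel
model = ImageModel.load("Mood TFLite") # Load the TensorFlow Lite model exported from Lobe
result = model.predict_from_file("test.jpg") # Run a sample image through the model
print(result.prediction) # e.g. "Positive" or "Negative"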

Here is the finished app in all its glory! All I have to do is launch the site and click Capture Mood; the Pi camera then takes a pic, runs it through the ML model created using Lobe and confirms the mood detected (along with a button to capture the mood again). In the background it also writes the detected mood, date and time to a CSV file for later analysis.

Below is an example of the CSV output – that ten minutes sure was a real rollercoaster of emotions 😂.
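When I do get around to analysing it, something as simple as the sketch below should be enough to count moods by hour of day – this assumes Mood.csv has no header row and contains date, time and prediction columns, which is what MoodDetector.py (shown further down) writes:

import csv
from collections import Counter

mood_by_hour = Counter()
with open("Mood.csv") as f:
    for date, time, mood in csv.reader(f):
        hour = time.split(":")[0] # The time column is written by strftime("%X"), typically HH:MM:SS
        mood_by_hour[(hour, mood)] += 1

for (hour, mood), count in sorted(mood_by_hour.items()):
    print(hour + ":00 " + mood + ": " + str(count))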

This is obviously quite rudimentary; I need to extend the model to detect additional moods. However, it was a useful exercise in getting to grips with Lobe and Flask.

The full solution can be found here (minus the ML model) – to save you a click, below is the Python code (MoodDetector.py):

from time import sleep
from picamera import PiCamera
from lobe import ImageModel
from flask import Flask, redirect, url_for, request, render_template
import csv
import datetime
app = Flask(__name__)
@app.route('/')
def button():
    return render_template('button.html') # Display the capture mood button, when clicked redirect to /capturemood
@app.route('/capturemood') # Takes a pic, analyses it and writes the output to HTML and CSV
def capturemood():
    camera = PiCamera()
    camera.start_preview()
    sleep(2)
    camera.capture('mood.jpg') # Take picture using Raspberry Pi camera
    camera.close()
    model = ImageModel.load("Mood TFLite") # Load the ML model created using Lobe
    result = model.predict_from_file("mood.jpg") # Predict the mood of the mood.jpg pic just taken 
    now = datetime.datetime.now()
    date = now.strftime("%x")
    time = now.strftime("%X")
    moodCSV = open("Mood.csv", "a")
    moodCSVWriter = csv.writer(moodCSV) 
    moodCSVWriter.writerow([date,time,str(result.prediction)]) # Write the date, time and mood prediction to the Mood.csv file
    moodCSV.close()
    #Vary the HTML output depending on whether the prediction is positive or negative.
    if str(result.prediction) == "Negative": 
        return """<div class="buttons"><p>"Mood is Negative"</p>
        <a href='/capturemood'><input type='button' value='Capture Mood'></a></div>"""
    elif str(result.prediction) == "Positive":
        return """<div class="buttons"><p>"Mood is Positive"</p>
        <a href='/capturemood'><input type='button' value='Capture Mood'></a></div>"""
if __name__ == "__main__":
    app.run(port=80,host='0.0.0.0')

…and here is the supporting render template that I created (button.html).

<html>
<body>
<div style='text-align:center'>
    <a href='/capturemood'><input type='button' value='Capture Mood' align='Center'></a>
</div>
</body>
</html>

Having Fun with Azure Cognitive Services

It’s been a while since I’ve looked at Azure Cognitive Services. Whilst racking my brains for my next experiment, I wondered how easy it would be to use the Computer Vision API to analyze the output of a display attached to my Raspberry Pi – more on that here.

You may be thinking… why do this? You already know what is being written to the display, so why over-complicate things? I was simply looking for a semi-practical use case that would help me learn Azure Cognitive Services and practice Python. I’m more of a hands-on learner – reading endless documentation and running samples isn’t the best way for me to learn; I need to play, experiment, and fail (a lot!) to really get to grips with things.

I have a camera attached to my Raspberry Pi so my idea was to:

  • Take a picture of the display using the camera attached to the Raspberry Pi.
  • Use the Computer Vision API in Azure Cognitive Services to analyse the picture taken of the display.
  • Return the analyzed text from the picture.

My first step was to position the camera and take a picture, which is straightforward using the picamera package. The example below takes a picture named “PicOfDisplay.jpg” and saves it to the desktop of the Pi.

import os
from picamera import PiCamera
os.chdir("/home/pi/Desktop")
camera = PiCamera()
camera.capture('PicOfDisplay.jpg')

Here is an example picture of the display.

I then adapted the sample code found here which uses Python to call the REST API for the Computer Vision Service.

My final solution can be found below, this does the following:

  • Submits the picture taken with the Raspberry Pi camera (PicOfDisplay.jpg) to the Computer Vision Service (using Python and the REST API).
  • Polls the service until the analysis has completed (the process runs asynchronously) and stores the results in a list named “lines”.
  • Outputs the text that the Computer Vision Service identified in the picture that relates to the display. As you can see below, the raw output of the “lines” list also includes text found on the board of the Pi itself, which I’m not so interested in 😀. If I ever change the position of the camera or the Pi, I’d need to tweak this, as the order of the identified text in the “lines” list could vary.

Actual script output:

The full script in all of its glory.

import requests
import time
import json
#Extract text from image
url = "https://uksouth.api.cognitive.microsoft.com/vision/v3.0/read/analyze" # Replace with the appropriate endpoint
key = "Azure Cognitive Services Key" # Enter the key
image_path = "/home/pi/Desktop/PicOfDisplay.jpg"
image_data = open(image_path, "rb").read()
headers = {"Ocp-Apim-Subscription-Key" : key,'Content-Type': 'application/octet-stream'}
r = requests.post(url,headers = headers, data=image_data)
operation_url = r.headers["Operation-Location"]
# The recognized text isn't immediately available, so poll to wait for completion.
analysis = {}
poll = True
while (poll):
    response_final = requests.get(operation_url, headers=headers)
    analysis = response_final.json()
    
    print(json.dumps(analysis, indent=4))
    time.sleep(1)
    if ("analyzeResult" in analysis):
        poll = False
    if ("status" in analysis and analysis['status'] == 'failed'):
        poll = False
lines = []
for line in analysis["analyzeResult"]["readResults"][0]["lines"]:
    lines.append(line["text"])
print(lines[0] + " " + lines[1] + " " + lines[2]) # The data that I'm interested in (from the display) is found within the first three entries of the list.

I could of course have taken this a step further and incorporated logic to take a pic, submit it to the Computer Vision Service automatically and run this in a continuous loop; however, as this was more of a proof of concept I didn’t see the need.
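If I did ever want to automate it, a loop along the lines below would probably do the job – this is just a sketch reusing the same url, key and image path as above, rather than something I’ve actually run long-term:

import time
import requests
from picamera import PiCamera

url = "https://uksouth.api.cognitive.microsoft.com/vision/v3.0/read/analyze" # Same endpoint as above
key = "Azure Cognitive Services Key" # Enter the key
headers = {"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/octet-stream"}
image_path = "/home/pi/Desktop/PicOfDisplay.jpg"
camera = PiCamera()

def read_display():
    camera.capture(image_path) # Take a fresh picture of the display
    with open(image_path, "rb") as f:
        r = requests.post(url, headers=headers, data=f.read())
    operation_url = r.headers["Operation-Location"]
    while True: # Poll until the asynchronous analysis completes (or fails)
        analysis = requests.get(operation_url, headers=headers).json()
        if "analyzeResult" in analysis:
            return [line["text"] for line in analysis["analyzeResult"]["readResults"][0]["lines"]]
        if analysis.get("status") == "failed":
            return []
        time.sleep(1)

while True:
    lines = read_display()
    if lines:
        print(" ".join(lines[:3])) # The display text sits in the first three entries
    time.sleep(300) # Wait five minutes before the next capture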

Getting to Grips with the Adafruit Mini PiTFT

I bought an Adafruit Mini PiTFT a year or so ago; I’d played around with the sample Python scripts provided by Adafruit but had never really found a practical use for it… until now!

I recently created a script for my Raspberry Pi that checks the latency and speed of my Internet connection every 5 minutes and writes the results to a CSV file – which at some point I’ll actually analyse to produce some pretty charts! I wrote about it here. This got me thinking – could I write the output of these tests to my shiny new-ish display? The idea being that I could quickly glance at the display to see the current-ish performance of my Internet connection whenever I have the urge.

I attached the display to my Raspberry Pi, cracked open Visual Studio Code and made a start on this.

My plan was to display the last line of the CSV output file that my Internet speed test script creates (which will contain the outcome of the previous test run), draw this to the display and then refresh every 5 minutes.

I knew that I could use the Linux tail command to read the last line of the CSV file, such as:

tail -n 1 Output.csv

I wasn’t sure, however, how to call this from a Python script and store the output as a variable. It turns out that this can be done using the following, which I found after doing some Bing-Fu here (Solution 4).

from subprocess import PIPE, run
output = run("tail -n 1 Output.csv", stdout=PIPE, stderr=PIPE, universal_newlines=True, shell=True)

Inspecting the output variable yielded this:

CompletedProcess(args='tail -n 1 Output.csv', returncode=0, stdout='01/07/2021 17:12:47,3.441,371.70,34.53\n', stderr='')

The next thing I needed to do was extract the stdout and split it into separate variables that I could then draw to the display. I used a mix of the split and replace Python functions to do this (it’s not pretty but it works).

For reference, the stdout contains, in order: the execution time, ping time, download speed and upload speed.

Ping = "Ping:" + str(output).split(",")[3]
Download = "DL:" + str(output).split(",")[4]
Upload = "UL:" + str(output).split(",")[5].replace("\\n'", "")

I then checked the newly-created variables to make sure they displayed correctly.
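As an aside, a slightly tidier approach would be to split output.stdout directly rather than the whole CompletedProcess repr – just a sketch, assuming the column order shown above:

# output.stdout is just the last CSV line, e.g. "01/07/2021 17:12:47,3.441,371.70,34.53\n"
timestamp, ping, download, upload = output.stdout.strip().split(",")
Ping = "Ping:" + ping
Download = "DL:" + download
Upload = "UL:" + upload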

The final thing for me to do was to adapt the Python Stats Example and incorporate what I’d done above. It uses a while loop that runs continuously (with a 5-minute pause between executions); my modified while loop can be found below and does the following:

  • Clears the current image on the display
  • Runs the tail command on the Output.csv file
  • Creates variables from the output of the tail command (Ping, Download and Upload)
  • Draws these variables to the display
  • Sleeps for 300 seconds
while True:
    # Draw a black filled box to clear the image.
    draw.rectangle((0, 0, width, height), outline=0, fill=0)
    # Obtain output from last line of CSV file and create variables for each value
    output = run("tail -n 1 Output.csv", stdout=PIPE, stderr=PIPE, universal_newlines=True, shell=True)
    Ping = "Ping:" + str(output).split(",")[3]
    Download = "DL:" + str(output).split(",")[4]
    Upload = "UL:" + str(output).split(",")[5].replace("\\n'", "")
    # Draw the variables defined above to the screen.
    y = 0
    draw.text((x, y), Ping, font=font, fill="#FFFFFF")
    y += font.getsize(Ping)[1]
    draw.text((x, y), Download, font=font, fill="#FFFF00")
    y += font.getsize(Download)[1]
    draw.text((x, y), Upload, font=font, fill="#00FF00")
    y += font.getsize(Upload)[1]
    # Display the image featuring the text.
    disp.image(image, rotation)
    time.sleep(300)

I ran the script and checked the display on my Raspberry Pi – success!

The full script in all its glory can be found below:

import time
import digitalio
import board
import adafruit_rgb_display.st7789 as st7789
import os
from subprocess import PIPE, run
from PIL import Image, ImageDraw, ImageFont
# Configuration for CS and DC pins (these are FeatherWing defaults on M0/M4):
cs_pin = digitalio.DigitalInOut(board.CE0)
dc_pin = digitalio.DigitalInOut(board.D25)
reset_pin = None
# Config for display baudrate (default max is 24mhz):
BAUDRATE = 64000000
# Setup SPI bus using hardware SPI:
spi = board.SPI()
# Create the ST7789 display:
disp = st7789.ST7789(
    spi,
    cs=cs_pin,
    dc=dc_pin,
    rst=reset_pin,
    baudrate=BAUDRATE,
    width=240,
    height=240,
    x_offset=0,
    y_offset=80,
)
# Create blank image for drawing.
height = disp.width 
width = disp.height
image = Image.new("RGB", (width, height))
rotation = 180
# Get drawing object to draw on image.
draw = ImageDraw.Draw(image)
# Draw a black filled box to clear the image.
draw.rectangle((0, 0, width, height), outline=0, fill=(0, 0, 0))
disp.image(image, rotation)
x = 0
# Load a TTF font.
font = ImageFont.truetype("/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf", 22)
# Turn on the backlight
backlight = digitalio.DigitalInOut(board.D22)
backlight.switch_to_output()
backlight.value = True
while True:
    # Draw a black filled box to clear the image.
    draw.rectangle((0, 0, width, height), outline=0, fill=0)
    # Obtain output from last line of CSV file and create variables for each value
    output = run("tail -n 1 Output.csv", stdout=PIPE, stderr=PIPE, universal_newlines=True, shell=True)
    Ping = "Ping:" + str(output).split(",")[3]
    Download = "DL:" + str(output).split(",")[4]
    Upload = "UL:" + str(output).split(",")[5].replace("\\n'", "")
    # Draw the variables defined above to the screen.
    y = 0
    draw.text((x, y), Ping, font=font, fill="#FFFFFF")
    y += font.getsize(Ping)[1]
    draw.text((x, y), Download, font=font, fill="#FFFF00")
    y += font.getsize(Download)[1]
    draw.text((x, y), Upload, font=font, fill="#00FF00")
 
    # Display the image featuring the text.
    disp.image(image, rotation)
    time.sleep(300)

Using PowerShell to Write Data to Azure Table Storage

As part of my continued quest to over-engineer and complicate things, I decided to update a script I’d recently written that performs a regular speed test of my Internet connection to store the results in Azure Table Storage rather than a local CSV file.

This was the first time I’d used Azure Table Storage and I was pleasantly surprised by how easy it was to integrate into my script. After creating the Storage Account via the Azure Portal and creating a table (named “perf”), it was simply a case of doing the following.

Step 1 – Install the PowerShell Module for Azure Table Storage

Install-Module Az
Install-Module AzTable

Step 2 – Connect to the Azure Storage Table

Before attempting to connect to the table, I needed to obtain the Access Key. To do this via the Azure Portal, I selected the Storage Account and then Access Keys, and then hit Show Keys and took a copy of the key.

I then needed to connect to the table using PowerShell (with the key obtained above); to do this I ran the following:

$StorageAccountName = "Storage Account Name" # Enter the name of the storage account e.g. "BrendgStorage"
$Key = "Access Key" # Use the Access Key obtained via the Azure Portal
$StorageContext = New-AzStorageContext -StorageAccountName $StorageAccountName -StorageAccountKey $Key # Connect to the Storage Account
$Table = (Get-AzStorageTable -Context $StorageContext | where {$_.name -eq "perf"}).CloudTable # Connect to the Perf table

Once this completed (without errors), I verified the $Table variable. This confirmed that I had successfully connected to the “perf” table.

Step 3 – Update Speed Test Script

I then needed to incorporate the above into the Internet test script and add logic to add the output of the Speedtest CLI tool to the Azure Table (Perf) rather than a CSV file.

The updated script can be found below. In lines 1-4 it connects to the table named perf, then it runs a continual loop that executes speedtest-cli and adds the output to the perf table using Add-AzTableRow (repeating every 5 minutes). As I like to live dangerously, there’s no logic to handle failures 😀.

As this is a super-simple table, I’m using a single partition key (“1”) and Ticks as the row key. I also manually specify the data type for each of the properties, as the default behaviour of Add-AzTableRow is to add everything as a String. I used Double as the data type for the Ping, Download and Upload properties to enable me to query the data – for example, to show all entries where Ping was greater than 50 (ms).

I do some string manipulation to pull out the values from the output of speedtest-cli ($SpeedTest), as it simply returns plain text containing all the test results (Ping, Download and Upload).

$StorageAccountName = "Storage Account Name"
$Key = "Key"
$StorageContext = New-AzStorageContext -StorageAccountName $StorageAccountName -StorageAccountKey $Key
$Table = (Get-AzStorageTable -Context $StorageContext | where {$_.name -eq "perf"}).CloudTable

$i = 0
while ($i -eq 0)
{
    $PartitionKey = "1"
    $Time = Get-Date
    $SpeedTest = /usr/local/bin/speedtest-cli --simple
    Add-AzTableRow -table $Table -PartitionKey $PartitionKey -RowKey (Get-Date).Ticks -property @{"DateTime"=$Time;"Ping@odata.type"="Edm.Double";"Ping"=$SpeedTest[0].split(" ")[1];"Download@odata.type"="Edm.Double";"Download"=$SpeedTest[1].split(" ")[1];"Upload@odata.type"="Edm.Double";"Upload"=$SpeedTest[2].split(" ")[1]}
    Start-Sleep -Seconds 300
}
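To illustrate the kind of query that storing Ping, Download and Upload as Double enables – all rows where Ping exceeded 50 ms, say – here’s a minimal Python sketch using the azure-data-tables package (the connection string placeholder is an assumption and this isn’t part of the script above):

from azure.data.tables import TableClient

connection_string = "<storage account connection string>" # From the Access Keys blade in the Azure Portal
table = TableClient.from_connection_string(connection_string, table_name="perf")

# The OData filter only works as a numeric comparison because Ping is stored as Edm.Double
for entity in table.query_entities("Ping gt 50.0"):
    print(entity["DateTime"], entity["Ping"], entity["Download"], entity["Upload"])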

The script has been running for a few days now; I used Storage Explorer within the Azure Portal to view the table and confirm that data is being successfully collected. This made me realise that the DateTime property I add is redundant, as the Timestamp property stores this automatically on insert.

Understanding the Table service data model (REST API) – Azure Storage | Microsoft Docs was a useful reference document as I got to grips with Azure Table Storage.

The Joys of Unreliable Internet

I’ve had a strange issue with my Internet for the last few months: it’s rock solid during the day and I have no issues at all, however from around 8pm onwards it becomes unreliable – ping times go through the roof or I lose connectivity intermittently. This used to occur one night a week or so, but for the past couple of weeks it has been happening 2-3 times a week, which is seriously affecting my Netflix consumption 😀.

I have an FTTP connection and there doesn’t appear to be a fault with the fibre connection into my property, as the fibre connection light on the ONT stays green when the Internet grinds to a halt. I reported this to my ISP, who requested I contact them when the issue is active so that they can perform some diagnostics.

I decided to collect some data on the issue to help me identify any patterns and to act as evidence for my ISP. As I mentioned in a previous post, I have a lot of spare Raspberry Pis, so I decided to put one of them to good use!

I connected the Pi directly via Ethernet to my router and wrote a quick-and-dirty PowerShell script that uses the Speedtest CLI Python script written by Matt Martz to perform a speed test of my connection every 5 minutes. Yes, you read that correctly – you can run PowerShell on the Raspberry Pi; here is a guide on how to set it up. I used PowerShell to call the Python script for no other reason than I’d never done it before, so it seemed like a good experiment.

Below is the script that I ran, this uses the Speedtest CLI to perform a test every 5 minutes and writes the output to a CSV file.

$i = 0
while ($i -eq 0)
{
    $Time = Get-Date
    $SpeedTest = /usr/local/bin/speedtest-cli --simple
    $Time.ToString() + "," + $SpeedTest[0] + "," + $SpeedTest[1] + "," + $SpeedTest[2]  >> /home/pi/Share/SpeedTest.csv 
    Start-Sleep -Seconds 300
}

Here is what the output looks like in Excel. I’m going to collect data for a few days before I crack open Power BI and do some analysis of the data.

More Raspberry Pi and Container Goodness!

Next up in my quest to learn more about running containers on a Raspberry Pi was to test a container I created a while ago when I was ramping up on Flask and Python. It’s a basic container that generates a list of 8 random exercises (from a pool of 26); I was putting together a new workout regime at the time, so this seemed like a perfect way to build something with practical use in my life.
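The real implementation lives in the repo, but the core idea is only a few lines of Flask – here’s a rough, hypothetical sketch (the exercise names below aren’t the actual ones from the repo):

import random
from flask import Flask

app = Flask(__name__)

# Hypothetical pool - the real container defines 26 exercises
EXERCISES = ["Press-ups", "Squats", "Lunges", "Burpees", "Plank", "Sit-ups",
             "Star jumps", "Mountain climbers", "Tricep dips", "Calf raises"]

@app.route("/")
def workout():
    return "<br>".join(random.sample(EXERCISES, 8)) # Pick 8 distinct exercises at random

if __name__ == "__main__":
    app.run(port=80, host="0.0.0.0")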

The first thing I needed to do was download the repo to my Raspberry Pi. To do this I ran the following command (from within the directory where I wanted to temporarily store the downloaded repo).

git clone https://github.com/brendankarl/Containers

Once I had the repo downloaded locally on my Pi, I needed to change into the directory that housed the specific container I was interested in (WorkoutGenerator) as the repo has others.

cd Containers/WorkoutGenerator

I then needed to create the image using docker build; the command below references the Dockerfile within the repo and names the image “workoutgenerator”.

docker build -t workoutgenerator ./

Once this command completed, I could then run the image, exposing port 80 on the container to the Raspberry Pi so that I could access it from within my network. If you are interested in exposing things externally, ngrok is a great free tool that I’d recommend taking a look at.

docker run -d -p 80:80 workoutgenerator

A quick peek into Docker using the Visual Studio Code extension and I could see the container running and the images added to support this.

Finally, I launched a browser and hit the IP address of the Raspberry Pi to check that everything was running correctly – voila it was working!

I refreshed the page a couple of times to verify that the exercises were updated.

That’s enough containers for me today…..I need a workout 😂.

Running Docker on a Raspberry Pi

I’ve been playing around with Docker and containers for the last year or so, primarily by running Docker Desktop on my Windows 10 device and experimenting with Azure Container Instances. I even shared one of the containers that I created on GitHub – https://github.com/brendankarl/Containers, a super-advanced Workout Generator app 😀.

As I have more Raspberry Pis than I care to admit, I’m always looking for new ways to use them and reduce the guilt I feel when I see them abandoned on my desk.

I’d read that you could run Docker on a Raspberry Pi, however I’d never got round to playing around with this… and to be honest I expected it to be a bit of a palaver.

I was pleasantly surprised how easy it was to get Docker installed and my first container running on a Pi – it took a mere six commands!

sudo apt-get update && sudo apt-get upgrade
curl -sSL https://get.docker.com | sh
sudo usermod -aG docker ${USER}
sudo pip3 install docker-compose
sudo systemctl enable docker
sudo docker run -d -p 80:80 hypriot/rpi-busybox-httpd

This installs Docker and Docker Compose, enables Docker to start automatically on boot, and runs the https://github.com/hypriot/rpi-busybox-httpd image – a straightforward way to verify that Docker is working correctly (it runs a lightweight web server). Once these commands finished executing, I launched a browser, connected to the IP of my Pi and was greeted with this – success!

As a side note Visual Studio Code with the Remote Development and Docker extensions is a great way to do remote development and manage Docker on a Raspberry Pi from Windows or Mac.