Running an Amstrad CPC 6128 on Microsoft Edge on Linux on Windows 11!

I’ve recently upgraded my Surface Book 2 to Windows 11. One of the first things I took for a test spin was the Windows Subsystem for Linux GUI (WSLg), which provides the ability to run Linux GUI apps directly on Windows!

It took me around 5 minutes to get WSLg up and running (including a reboot) using this step-by-step guide; I opted for the Ubuntu Linux distro (which is the default). One of the first things I did was install Microsoft Edge for Linux – instructions on how to do this can be found here.

One of the cool things about WSLg is that Linux apps that have been installed appear in the Start Menu!

This got me thinking…..could I spin up an emulator for the greatest computer of all time – the Amstrad CPC 6128 (my very first computer) – within Edge? It turns out I could 😀. So here we have an Amstrad CPC 6128 running within Edge on Linux on Windows 11.

Check out my BASIC skills!

If you are interested in finding out how WSLg works under the hood, I’d recommend checking out this blog post.

Raspberry Pi Tips and Tricks

I’ve had a Raspberry Pi since it launched back in 2012 – I was so excited when mine arrived that I even Tweeted about it 😀.

Over the years I’ve used them for all kinds of things, from testing my Internet connection (which I blogged about here) to playing my favourite video games from the 90s using RetroPie – what better use of a Pi than winding back the years and playing Sonic the Hedgehog and Mario like it’s 1992 again!

I thought I’d share a few of my Tips and Tricks for using a Raspberry Pi.

Running a Headless Raspberry Pi

I run all my Pis headless (not attached to a monitor, keyboard, and mouse) and use SSH and VNC to access them over the network, which works well. One small annoyance was the need to manually configure the Wifi and SSH whenever I set up a new Pi (or re-imaged an existing one – I tend to break them!), which meant connecting the Pi to a monitor and keyboard to perform the initial configuration before going headless.

I recently became aware that the Raspberry Pi Imager (a tool that can be used to write OS images to an SD card for the Pi) has a hidden advanced options menu that you can use to configure elements of the OS. All you need to do after launching Raspberry Pi Imager is hit CTRL+SHIFT+X (on Windows) to launch the advanced options menu; whatever you configure here gets applied to Raspberry Pi OS when it’s written to the SD card – neat, eh!

In the example below, I did the following:

  • Set the hostname to mypi
  • Enabled SSH and set the password for the default pi user
  • Configured it to connect to a Wifi network (QWERTY in this example)

You can also do other things such as disabling overscan and setting the locale. Once you’ve finished editing the configuration, hit save; then when you write Raspberry Pi OS to the SD card it will pre-configure the OS with the settings specified. This has saved me a ton of time (and fiddling around with cables!). The only thing I have to do manually now is configure VNC, although I can do this over SSH using raspi-config.
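For reference, on older Raspberry Pi OS releases the same headless result can be achieved by hand: placing an empty file named ssh and a wpa_supplicant.conf on the SD card's boot partition before first boot. Below is a rough sketch – the mount point and helper name are my own inventions, and newer releases move user setup to the Imager's own mechanism:

```python
from pathlib import Path

def preconfigure_boot(boot_dir, ssid, psk, country="GB"):
    """Drop the two files Raspberry Pi OS (pre-Bookworm) checks on first
    boot: an empty 'ssh' file to enable SSH, and a wpa_supplicant.conf
    holding the Wifi credentials."""
    boot = Path(boot_dir)
    (boot / "ssh").touch()  # the file's presence alone enables SSH
    conf = (
        f'country={country}\n'
        'ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev\n'
        'update_config=1\n'
        '\n'
        'network={\n'
        f'    ssid="{ssid}"\n'
        f'    psk="{psk}"\n'
        '}\n'
    )
    (boot / "wpa_supplicant.conf").write_text(conf)

# Example – "/media/boot" is a placeholder for wherever the card's boot
# partition is mounted:
# preconfigure_boot("/media/boot", "QWERTY", "wifi-password")
```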

Exposing a Pi to the Internet

I built a rudimentary surveillance camera for my house using the Pi Camera and this script sample which creates a basic web server and streams footage from the Pi Camera.

I didn’t use this to monitor my house for burglars – its main purpose was to help me keep an eye on my cat 😸. The one problem was that it was only accessible from within my home network, which wasn’t much use when I was out and about. I did some research and came across ngrok, which makes it super simple to expose a Raspberry Pi to the Internet without doing anything too risky such as configuring port forwarding on your router. This enabled me to keep tabs on my cat wherever I was in the world (as long as I had an Internet connection).

ngrok supports macOS, Windows, Linux and FreeBSD, and it’s super simple to set up and free to use (with some restrictions). Here is a guide on how to expose a local web server to the Internet – it’s literally a single command!

ngrok http 80

Once this command is run, it provides the external URLs that the local port (80) has been exposed on (by default it creates both an HTTP and an HTTPS endpoint). It’s then as simple as connecting to one of the public URLs, which routes traffic to the exposed port on the Pi.

Below you can see this in action….I’ve obscured the publicly accessible URLs (“Forwarding”) as these contain my public IP address.

There is also a local web management interface, accessible from the device itself, which allows you to inspect requests and review the configuration and metrics.

Obviously, this is a great tool for testing and playing around with – it’s definitely not something I’d use in production 😀.

Using PowerShell on the Pi

Yes, you read that correctly – you can run PowerShell on the Pi! As somebody who comes from a Windows background and loves PowerShell, I was over the moon when PowerShell went cross-platform. I never imagined a day when PowerShell would be available on the Pi – kudos to whoever pushed to make it a cross-platform tool.

As much as I like Python, I have far more experience with PowerShell, and sometimes it’s simpler to run a PowerShell command from muscle memory than to spend time researching the Python equivalent.

PowerShell is super simple to install on Raspberry Pi OS – this guide steps through the process. I also created a symbolic link so that I don’t have to type the full path to the pwsh (PowerShell) binary when using it (this is also covered in the guide).

Once you’ve done that, you are good to go:

As a side note, I can also highly recommend Visual Studio Code – I write all my Python and PowerShell scripts on the Pi using it.

Querying the Microsoft Graph with Python

One of my colleagues mentioned to me that data from MyAnalytics (a feature of Viva Insights within Microsoft 365) is now accessible via the Beta endpoint of the Microsoft Graph. If you aren’t familiar with it, you can find out more about MyAnalytics here.

I was particularly excited as MyAnalytics has a wealth of Microsoft 365 usage data, which it analyzes to provide users with personalised insights based on their work patterns and behaviours, for example:

Clicking Learn more on each card provides additional guidance:

I was really interested to examine the data returned by the Beta Graph endpoint for MyAnalytics. Looking at the documentation, it provides two key pieces of functionality:

Activity statistics returns statistics on the following data points for the previous week (Monday to Sunday) for a user. It’s currently not possible to specify a particular week to query; it will simply return data for the previous week.

  • Calls (Teams)
  • Chats (Teams)
  • Emails (Exchange)
  • Meetings (Exchange)
  • Focus – this is explained here

If I take emails as an example, this returns the following properties:

…and here are the returned properties for meetings:

Productivity and self-improvement are two areas of immense interest to me. Using the MyAnalytics data returned from the Graph, I could envisage creating custom reports to track my work patterns over time and then act on them – for example, the data could highlight that I’ve been spending more time working outside of working hours recently, or that I’m starting to attend more recurring meetings.

As a side note: Outlook settings are used to determine a user’s working hours.

The next step for me was to create a Python Web app (using Flask) to retrieve a subset of this information from the Graph (I always love to overcomplicate things!).

I took the Flask-OAuthlib sample and tweaked it to my needs; my updated script can be found below and on GitHub.

This script could be tweaked to perform other Graph queries if needed.

import uuid
import json
import flask
from flask_oauthlib.client import OAuth

CLIENT_ID = ''      # Update with the Application (client) ID from Azure AD
CLIENT_SECRET = ''  # Update with the client secret from Azure AD
AUTHORITY_URL = ''
RESOURCE = ''
REDIRECT_URI = 'http://localhost:5000/login/authorized'
AUTH_ENDPOINT = '/oauth2/v2.0/authorize'
TOKEN_ENDPOINT = '/oauth2/v2.0/token'
API_VERSION = 'beta'
SCOPES = ['Analytics.Read', 'User.Read']

APP = flask.Flask(__name__)
APP.secret_key = 'development'
OAUTH = OAuth(APP)
MSGRAPH = OAUTH.remote_app(
    'microsoft', consumer_key=CLIENT_ID, consumer_secret=CLIENT_SECRET,
    request_token_params={'scope': SCOPES},
    base_url=RESOURCE + API_VERSION + '/',
    request_token_url=None, access_token_method='POST',
    access_token_url=AUTHORITY_URL + TOKEN_ENDPOINT,
    authorize_url=AUTHORITY_URL + AUTH_ENDPOINT)

@APP.route('/')
def login():
    """Prompt user to authenticate."""
    flask.session['state'] = str(uuid.uuid4())
    return MSGRAPH.authorize(callback=REDIRECT_URI, state=flask.session['state'])

@APP.route('/login/authorized')
def authorized():
    """Handler for the application's Redirect Uri."""
    if str(flask.session['state']) != str(flask.request.args['state']):
        raise Exception('state returned to redirect URL does not match!')
    response = MSGRAPH.authorized_response()
    flask.session['access_token'] = response['access_token']
    return flask.redirect('/graphcall')

@APP.route('/graphcall')
def graphcall():
    """Confirm user authentication by calling Graph and displaying some data."""
    endpoint = 'me/analytics/activityStatistics'
    headers = {'SdkVersion': 'sample-python-flask',
               'x-client-SKU': 'sample-python-flask',
               'client-request-id': str(uuid.uuid4()),
               'return-client-request-id': 'true'}
    graphdata = MSGRAPH.get(endpoint, headers=headers).data
    data = str(graphdata).replace("'", '"')
    datadict = json.loads(data)
    summary = []
    i = 0
    while i < 5:
        if datadict["value"][i]["activity"] == "Focus":
            i += 1
            continue  # Skip the Focus activity
        summary.append("Activity Type: " + datadict["value"][i]["activity"] + " / Date: " + datadict["value"][i]["startDate"] + " / After Hours " + datadict["value"][i]["afterHours"])
        i += 1
    return str(summary)

@MSGRAPH.tokengetter
def get_token():
    """Called by flask_oauthlib.client to retrieve current access token."""
    return (flask.session.get('access_token'), '')

if __name__ == '__main__':

This script (Flask Web app) does the following:

  • Prompts the user to authenticate to a M365 tenant (and requests access to the ‘Analytics.Read’ and ‘User.Read’ scopes in the Graph)
  • Queries the me/analytics/activityStatistics endpoint
  • Returns the following information for each activity type for the first day in the reporting period (excluding Focus):
    • Date (“startDate”)
    • Activity Type (“activity”)
    • Time spent after hours on the activity (“afterHours”)

If you take a closer look at the script, you’ll see it takes the raw JSON output from the Graph, converts it to a Python dictionary, iterates through the first day of the week’s data for each activity type (excluding Focus) and outputs this as a string – it’s certainly not pretty, but this is more of a proof of concept to get me started 😀.
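To illustrate that parsing step in isolation, here’s a sketch against a mocked response of the same shape (field names per the Graph beta docs; the values are invented):

```python
# A mocked response of the same shape as me/analytics/activityStatistics
# (field names per the Graph beta docs; the values here are invented).
graphdata = {
    "value": [
        {"activity": "Email", "startDate": "2021-07-05", "afterHours": "PT30M"},
        {"activity": "Meeting", "startDate": "2021-07-05", "afterHours": "PT1H"},
        {"activity": "Focus", "startDate": "2021-07-05", "afterHours": "PT0S"},
        {"activity": "Call", "startDate": "2021-07-05", "afterHours": "PT0S"},
        {"activity": "Chat", "startDate": "2021-07-05", "afterHours": "PT10M"},
    ]
}

# Build one summary line per activity type, skipping Focus.
summary = [
    f"Activity Type: {item['activity']} / Date: {item['startDate']}"
    f" / After Hours: {item['afterHours']}"
    for item in graphdata["value"]
    if item["activity"] != "Focus"
]

print(len(summary))  # 4 – the five activity types minus Focus
```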

Before running this script, you’ll need to do a few things:

  • Install the prerequisites (pip install -r requirements.txt)
  • Register an application in Azure AD, here is a walkthrough of how to do this
  • In addition to the above, add the Analytics.Read permission (example below) – this is required to get access to the MyAnalytics data
  • Update the CLIENT_ID and CLIENT_SECRET variables (using the values obtained when registering the app in Azure AD)
  • Run the script using “python”
  • Launch a browser and connect to http://localhost:5000

You should then (hopefully!) see the following:

A sign in page:

Once authenticated, you should see the following screen, which is the app requesting specific permissions to the user’s data.

Once you’ve clicked Accept, the delightful screen below should be displayed, which includes the raw output. The test tenant I used to create the script has no usage, hence the important data (After Hours) reports zero; in a real-world scenario this would be a cumulative value in seconds spent on each activity after hours.

I’ll likely write more about this as my experimentation continues…

Tinkering with Azure Anomaly Detector

I’ve fancied having a play around with Azure Anomaly Detector (part of Azure Cognitive Services) for some time, but I’d never really had a good use-case or excuse to do so…..until now!

I recently created a script for my Raspberry Pi that regularly checks the latency and speed of my Internet connection and writes this to a CSV file. I wrote about it here; my primary reason for doing this was to help diagnose some random connectivity issues I was experiencing – although, ironically, since creating the script my Internet connection has been super-reliable!

It got me thinking that I could take the data collected by this script and run it through Anomaly Detector to automate analysis and identify specific times of the day that my Internet connection speed deviated from the norm, which sounded far more appealing than cracking open Excel and doing this manually 😀.

I put together the PowerShell script below (also available on GitHub). It takes the CSV file created by my Internet speed test script, extracts the relevant data, formats it into JSON and submits it to Azure Anomaly Detector for analysis. I opted to perform Batch rather than Streaming detection as I didn’t need to analyse the data in real-time (now that would be overkill!) – the differences between Batch and Streaming detection are explained here. I’m also using the Univariate API as this doesn’t require any experience with ML.

I opted to call the REST endpoint directly, using this sample as inspiration. The script does the following:

  • Creates a JSON representation of the data from an input CSV file (“SpeedTestAnomaly.csv”) in the format required by Anomaly Detector – an example JSON for reference can be found here. I’ve also uploaded a sample input file to GitHub. I’m only using two values from the input file – the date/time and download speed.
  • Submits this to Anomaly Detector – I’m using the maxAnomalyRatio and sensitivity settings from the sample (0.25 and 95 respectively). I used hourly for granularity as I only test my Internet connection once per hour.
  • Returns the expected and actual results for each test and indicates if the results were flagged as an anomaly (Red = Anomaly, Green = OK)
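For comparison, the payload-building step looks like this in Python (sample rows invented, column positions as per my CSV – date/time in column 0, download speed in column 2):

```python
import json

# Sample rows in the same shape as SpeedTestAnomaly.csv:
# date/time, ping, download (Mbps), upload – only columns 0 and 2 are used.
csv_lines = [
    "01/07/2021 09:00:00,3.4,370.1,34.5",
    "01/07/2021 10:00:00,3.5,368.9,34.1",
    "01/07/2021 11:00:00,3.2,12.3,33.9",  # an obvious dip
]

payload = {
    "series": [
        {"timestamp": line.split(",")[0], "value": float(line.split(",")[2])}
        for line in csv_lines
    ],
    "maxAnomalyRatio": 0.25,
    "sensitivity": 95,
    "granularity": "hourly",
}

body = json.dumps(payload)  # this is what gets POSTed to the endpoint
print(payload["series"][2]["value"])  # 12.3
```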

If you do want to re-use this script, you’ll need to update the $AnomalyURI and $APIKey variables.

$JSON = @"
{
    "series": [],
    "maxAnomalyRatio": 0.25,
    "sensitivity": 95,
    "granularity": "hourly"
}
"@
$NonJSON = $JSON | ConvertFrom-Json

$Output = Get-Content ./SpeedTestAnomaly.csv
Foreach ($Line in $Output)
{
    $DL = $Line.split(",")[2]
    $Date = $Line.split(",")[0]
    $Add = New-Object -TypeName psobject -Property @{timestamp = $Date; value = $DL}
    $NonJSON.series += $Add
}

$JSON = $NonJSON | ConvertTo-Json

$AnomalyURI = ""
$APIKey = "KEY"

$Result = Invoke-RestMethod -Method Post -Uri $AnomalyURI -Header @{"Ocp-Apim-Subscription-Key" = $APIKey} -Body $JSON -ContentType "application/json" -ErrorAction Stop

$i = 0
Foreach ($Anomaly in $Result.isAnomaly)
{
    if ($Anomaly -eq "True")
    {
        Write-Host "Expected Value: " $Result.expectedValues[$i] "Actual Value: " $NonJSON.series[$i].value -ForegroundColor Red
    }
    else
    {
        Write-Host "Expected Value: " $Result.expectedValues[$i] "Actual Value: " $NonJSON.series[$i].value -ForegroundColor Green
    }
    $i++
}

Below is an extract from the input file (SpeedTestAnomaly.csv); I’m only using Column A (date/time) and Column C (download speed – Mbps).

Below is the output of the script. It details the expected and actual values for each hourly test and highlights those results identified as anomalies (in red) – you can see three examples where anomalies were detected in my Internet connection speed over the course of a couple of days.
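For anyone adapting this outside PowerShell, the response handling boils down to zipping the returned expectedValues/isAnomaly arrays against the values you submitted – a Python sketch with invented numbers:

```python
# Mocked fields from a batch Anomaly Detector response
# (the values here are invented for illustration).
result = {
    "expectedValues": [370.0, 369.5, 368.8],
    "isAnomaly": [False, True, False],
}
actual = [371.7, 12.3, 367.9]  # the download speeds that were submitted

report = [
    f"Expected: {expected} Actual: {value} -> {'ANOMALY' if flagged else 'OK'}"
    for expected, flagged, value in zip(result["expectedValues"], result["isAnomaly"], actual)
]
for line in report:
    print(line)
```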

Face Detection and Analysis using Azure Cognitive Services and a Raspberry Pi

I recently blogged about creating a Mood Detector using Lobe, and wondered what other options were available for face analysis. This led me to embark on a journey of ramping up on Azure Cognitive Services – more specifically the Face Service, which has some really cool capabilities.

I used my trusty Raspberry Pi (with attached camera) and developed a Flask application using Python; however, rather than using the Face client library for Python, I opted for the REST API so that the code is a little more portable.

I created a Flask app that does the following:

  • Takes a picture
  • Submits this picture to the REST API endpoint for the Face Service
  • Returns the detected age, gender, hair colour and a list of potential emotions (with a score for each) – the Face Service can detect/analyse multiple faces, so I hardcoded it to return the results from the first face detected

An example of the app in action can be found below – the screenshot is of the results page (as you can see, there is a reason I’m not a front-end dev!). I was most impressed that the Face Service thinks I’m 8 years younger than I actually am 😊. It also correctly detected my emotion (smiling).

The code for this app can be found at Blog-Samples/Face Analysis at main · brendankarl/Blog-Samples on GitHub.

To run this you’ll need:

  • A Raspberry Pi with attached camera (I used a Pi 4, but older models should work too)
  • An Azure subscription with an Azure Cognitive Services resource provisioned
  • Simply copy the code from the GitHub repo, update the url and key variables, and execute the command below in a terminal (from the directory containing the code)
sudo python3

Below is the code for reference.

from flask import Flask, render_template
from picamera import PiCamera
from time import sleep
import os
import random
import requests
import json

app = Flask(__name__)

@app.route("/")
def button():
    return render_template("button.html") # Presents a HTML page with a button to take a picture

@app.route("/takepic") # Route path assumed to match the link in button.html
def takepic():
    currentdir = os.getcwd()
    randomnumber = random.randint(1,100) # A random number used as a query string when presenting the picture taken, to avoid web browser caching of the image
    camera = PiCamera()
    camera.capture(str(currentdir) + "/static/image.jpg") # Take a pic and store in the static directory used by Flask
    camera.close() # Release the camera so the next request can use it
    url = "" # Replace with the Azure Cognitive Services endpoint for the Face API (depends on the region deployed to)
    key = "" # Azure Cognitive Services key
    image_path = str(currentdir) + "/static/image.jpg"
    image_data = open(image_path, "rb").read()
    headers = {"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/octet-stream"}
    params = {
        'returnFaceId': 'false',
        'returnFaceLandmarks': 'false',
        'returnFaceAttributes': 'age,gender,headPose,smile,facialHair,glasses,emotion,hair,makeup,occlusion,accessories,blur,exposure,noise',
    }
    r =, headers=headers, params=params, data=image_data) # Submit to Azure Cognitive Services Face API
    age = r.json()[0]["faceAttributes"]["age"] # Age of the first face
    gender = r.json()[0]["faceAttributes"]["gender"] # Gender of the first face
    haircolor = r.json()[0]["faceAttributes"]["hair"]["hairColor"][0]["color"] # Hair colour of the first face
    emotions = r.json()[0]["faceAttributes"]["emotion"] # Emotions of the first face
    return render_template("FaceAnalysis.html", age=age, gender=gender, haircolor=haircolor, emotions=emotions, number=randomnumber) # Pass the results to FaceAnalysis.html, which presents the output and the pic taken to the user

if __name__ == "__main__":"")

Creating a Mood Detector using Lobe and a Raspberry Pi

I’ve recently been experimenting with Lobe, a remarkable app that democratizes AI by providing the ability to build a Machine Learning (ML) model in less than ten minutes – the beauty is that this doesn’t require any ML or coding experience. You can find out more at Lobe Tour | Machine Learning Made Easy.

I’ve always been really interested in self-improvement and understanding more about myself. One aspect that really intrigues me is my mood throughout the workday – this can go from elation to despair, and I’ve never quite figured out what the key drivers are (although I do have some ideas).

My love of overcomplicating the simple led to me developing an application to record my mood throughout the day with my Raspberry Pi 4 and its camera. The plan was for Lobe to analyse my mood using pictures captured by the Pi camera.

The Pi and its camera were already sat on my desk staring at me, so they were perfectly placed.

I wanted to be able to take a picture of myself using the Pi, have Lobe recognise my mood and log the mood along with date/time, then later I could analyse this data for specific patterns, correlating with my work calendar for additional insight. I wanted to know – is it just me having a case of the Mondays or are there specific times of the day, activities or projects that drive my mood?

To get started I headed over to the Lobe website, downloaded the Windows app (it’s also available for Mac) and used it to take some pictures of me in two moods (positive = thumb up / negative = thumb down). I took the pictures using the webcam attached to my Windows 10 device, then tagged the images and let Lobe work its magic on training an ML model.

I then selected Use and was able to evaluate the model in real-time (with a surprising level of accuracy!). Once I was happy with everything, I exported the model as TensorFlow Lite – the best option for a Raspberry Pi.

I then copied the TensorFlow Lite model (which is basically a folder with a bunch of files in it) to my Raspberry Pi. The next step was to install Lobe for Python on the Pi by running the following.

sudo ./

Now everything was up and running, I used the sample Python script available here to test Lobe with the model I had just created, using some sample images I had. This worked, so I moved on to creating a Python-based web application using the Flask framework.

Here is the finished app in all its glory! All I have to do is launch the site and click Capture Mood; the Pi camera then takes a pic, runs it through the ML model created using Lobe and confirms the mood detected (along with a button to capture the mood again). In the background it also writes the detected mood, date and time to a CSV file for later analysis.

Below is an example of the CSV output – that ten minutes sure was a real rollercoaster of emotions 😂.
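When I do get around to the analysis, it can start as simply as counting predictions – a sketch over rows in the same format the app writes to Mood.csv (date, time, prediction; sample rows invented):

```python
import csv
import io
from collections import Counter

# Sample rows in the Mood.csv format the app writes: date, time, prediction.
sample = io.StringIO(
    "07/01/21,09:02:11,Positive\n"
    "07/01/21,09:05:43,Negative\n"
    "07/01/21,09:08:02,Positive\n"
)

# Tally the third column (the mood prediction).
moods = Counter(row[2] for row in csv.reader(sample))
print(moods["Positive"], moods["Negative"])  # 2 1
```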

This is obviously quite rudimentary; I need to extend the model to detect additional moods. However, it was a useful exercise in getting to grips with Lobe and Flask.

The full solution can be found here (minus the ML model) – to save you a click, below is the Python code (

from time import sleep
from picamera import PiCamera
from lobe import ImageModel
from flask import Flask, redirect, url_for, request, render_template
import csv
import datetime

app = Flask(__name__)

@app.route('/')
def button():
    return render_template('button.html') # Display the capture mood button, which links to /capturemood

@app.route('/capturemood') # Take a pic, analyse it and write the output to HTML and CSV
def capturemood():
    camera = PiCamera()
    camera.capture('mood.jpg') # Take picture using Raspberry Pi camera
    camera.close() # Release the camera for the next request
    model = ImageModel.load("Mood TFLite") # Load the ML model created using Lobe
    result = model.predict_from_file("mood.jpg") # Predict the mood of the mood.jpg pic just taken
    now =
    date = now.strftime("%x")
    time = now.strftime("%X")
    moodCSV = open("Mood.csv", "a")
    moodCSVWriter = csv.writer(moodCSV)
    moodCSVWriter.writerow([date,time,str(result.prediction)]) # Write the date, time and mood prediction to the Mood.csv file
    moodCSV.close()
    # Vary the HTML output depending on whether the prediction is positive or negative.
    if str(result.prediction) == "Negative":
        return """<div class="buttons"><p>"Mood is Negative"</p>
        <a href='/capturemood'><input type='button' value='Capture Mood'></a></div>"""
    elif str(result.prediction) == "Positive":
        return """<div class="buttons"><p>"Mood is Positive"</p>
        <a href='/capturemood'><input type='button' value='Capture Mood'></a></div>"""

if __name__ == "__main__":'')

…and here is the supporting render template that I created (button.html).

<div style='text-align:center'>
    <a href='/capturemood'><input type='button' value='Capture Mood' align='Center'></a>
</div>

Having Fun with Azure Cognitive Services

It’s been a while since I’ve looked at Azure Cognitive Services. Whilst wracking my brains for my next experiment, I wondered how easy it would be to use the Computer Vision API to analyze the output of a display attached to my Raspberry Pi – more on that here.

You may be thinking…..why do this? You already know what is being written to the display, so why over-complicate things? I was simply looking for a semi-practical use-case to help me learn Azure Cognitive Services and practice Python. I’m more of a hands-on learner – reading endless documentation and running samples isn’t the best way for me to learn; I need to play, experiment, and fail (a lot!) to really get to grips with things.

I have a camera attached to my Raspberry Pi so my idea was to:

  • Take a picture of the display using the camera attached to the Raspberry Pi.
  • Use the Computer Vision API in Azure Cognitive Services to analyse the picture taken of the display.
  • Return the analyzed text from the picture.

My first step was to position the camera and take a picture, which is straightforward using the picamera package. The example below takes a picture named “PicOfDisplay” and saves it to the desktop of the Pi.

from picamera import PiCamera

camera = PiCamera()
camera.capture("/home/pi/Desktop/PicOfDisplay.jpg") # Take the pic and save it to the Pi's desktop
Here is an example picture of the display.

I then adapted the sample code found here which uses Python to call the REST API for the Computer Vision Service.

My final solution can be found below; it does the following:

  • Submits the picture taken with the Raspberry Pi camera (PicofDisplay.jpg) to the Computer Vision Service (using Python and the REST API).
  • Polls the service until the analysis has completed (this process runs asynchronously) and stores the results of the analysis in a list named “lines”.
  • Outputs the text the Computer Vision Service identified from the picture that relates to the display. As you can see below, the raw output of the “lines” list includes text found on the board of the Pi, which I’m not so interested in 😀. If I ever change the position of the camera or the Pi, I’d need to tweak this, as the order of identified text could vary in the “lines” list.

Actual script output:

The full script in all of its glory:

import requests
import time
from io import BytesIO
import json

#Extract text from image
url = "" # Replace with the appropriate endpoint
key = "Azure Cognitive Services Key" # Enter the key
image_path = "/home/pi/Desktop/PicOfDisplay.jpg"
image_data = open(image_path, "rb").read()

headers = {"Ocp-Apim-Subscription-Key" : key,'Content-Type': 'application/octet-stream'}
r =,headers = headers, data=image_data)

operation_url = r.headers["Operation-Location"]

# The recognized text isn't immediately available, so poll to wait for completion.
analysis = {}
poll = True
while (poll):
    response_final = requests.get(r.headers["Operation-Location"], headers=headers)
    analysis = response_final.json()
    print(json.dumps(analysis, indent=4))
    time.sleep(1) # Brief pause between polls

    if ("analyzeResult" in analysis):
        poll = False
    if ("status" in analysis and analysis['status'] == 'failed'):
        poll = False

lines = []
for line in analysis["analyzeResult"]["readResults"][0]["lines"]:
    lines.append(line["text"]) # Store just the recognised text for each line

print(lines[0] + " " + lines[1] + " " + lines[2]) # The data that I'm interested in (from the display) is found within the first three entries of the list.

I could of course have taken this a step further and incorporated logic to take a pic, submit it to the Computer Vision Service automatically and run this in a continuous loop; however, as this was more of a proof of concept, I didn’t see the need.

Getting to Grips with the Adafruit Mini PiTFT

I bought an Adafruit Mini PiTFT a year or so ago; I’d played around with the sample Python scripts provided by Adafruit but had never really found a practical use for it…until now!

I recently created a script for my Raspberry Pi that checks the latency and speed of my Internet connection every 5 minutes and writes this to a CSV file – which at some point I’ll actually analyse to produce some pretty charts! I wrote about it here. This got me thinking – could I write the output of these tests to my shiny new-ish display? My idea being I could quickly glance at the display to see the current-ish performance of my Internet connection whenever I have the urge.

I attached the display to my Raspberry Pi, cracked open Visual Studio Code and made a start on this.

My plan was to display the last line of the CSV output file that my Internet speed test script creates (which will contain the outcome of the previous test run), draw this to the display and then refresh every 5 minutes.

I knew that I could use the Linux tail command to read the last line of the CSV file, such as:

tail -n 1 Output.csv

I wasn’t sure, however, how to call this from a Python script and store the output in a variable. It turns out this can be done with the following, which I found after some Bing-Fu here (Solution 4).

from subprocess import PIPE, run
output = run("tail -n 1 Output.csv", stdout=PIPE, stderr=PIPE, universal_newlines=True, shell=True)

Inspecting the output variable yielded this:

CompletedProcess(args='tail -n 1 Output.csv', returncode=0, stdout='01/07/2021 17:12:47,3.441,371.70,34.53\n', stderr='')

The next thing I needed to do was extract the stdout and split it into separate variables to draw to the display. I used a mix of the split and replace Python functions to do this (it’s not pretty, but it works).

For reference, the stdout contains (in order) the execution time, ping time, download speed, and upload speed.

Ping = "Ping:" + str(output).split(",")[3]
Download = "DL:" + str(output).split(",")[4]
Upload = "UL:" + str(output).split(",")[5].replace("\\n'", "")

I then checked the newly-created variables to make sure they displayed correctly.
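As an aside, splitting output.stdout directly (rather than the repr of the whole CompletedProcess object) avoids the trailing-newline cleanup – a sketch using a stubbed result in place of the real run() call:

```python
from subprocess import CompletedProcess

# Stub of what run(...) returned for the last CSV line.
output = CompletedProcess(
    args="tail -n 1 Output.csv",
    returncode=0,
    stdout="01/07/2021 17:12:47,3.441,371.70,34.53\n",
    stderr="",
)

# Split just the stdout string: no repr noise, no replace() needed.
fields = output.stdout.strip().split(",")
Ping = "Ping:" + fields[1]
Download = "DL:" + fields[2]
Upload = "UL:" + fields[3]
print(Ping, Download, Upload)  # Ping:3.441 DL:371.70 UL:34.53
```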

The final thing to do was adapt the Python Stats Example and incorporate what I’d done above. This uses a while loop that runs continuously (with a 5-minute pause between executions); my modified while loop can be found below, and does the following:

  • Clears the current image on the display
  • Runs the tail command on the Output.csv file
  • Creates variables from the output of the tail command (Ping, Download and Upload)
  • Draws these variables to the display
  • Sleeps for 300 seconds
while True:
    # Draw a black filled box to clear the image.
    draw.rectangle((0, 0, width, height), outline=0, fill=0)
    # Obtain output from last line of CSV file and create variables for each value
    output = run("tail -n 1 Output.csv", stdout=PIPE, stderr=PIPE, universal_newlines=True, shell=True)
    Ping = "Ping:" + str(output).split(",")[3]
    Download = "DL:" + str(output).split(",")[4]
    Upload = "UL:" + str(output).split(",")[5].replace("\\n'", "")

    # Draw the variables defined above to the screen.
    y = 0
    draw.text((x, y), Ping, font=font, fill="#FFFFFF")
    y += font.getsize(Ping)[1]
    draw.text((x, y), Download, font=font, fill="#FFFF00")
    y += font.getsize(Download)[1]
    draw.text((x, y), Upload, font=font, fill="#00FF00")
    y += font.getsize(Upload)[1]

    # Display the image featuring the text.
    disp.image(image, rotation)

    # Sleep for 5 minutes before the next refresh.
    time.sleep(300)

I ran the script and checked the display on my Raspberry Pi – success!

The full script in all its glory can be found below:

import time
import digitalio
import board
import adafruit_rgb_display.st7789 as st7789
import os
from subprocess import PIPE, run
from PIL import Image, ImageDraw, ImageFont

# Configuration for CS and DC pins (these are FeatherWing defaults on M0/M4):
cs_pin = digitalio.DigitalInOut(board.CE0)
dc_pin = digitalio.DigitalInOut(board.D25)
reset_pin = None

# Config for display baudrate (default max is 24MHz):
BAUDRATE = 64000000

# Setup SPI bus using hardware SPI:
spi = board.SPI()

# Create the ST7789 display (the width/height/offset values below are the
# defaults from the Adafruit stats example for the 1.14" display; adjust
# these for your particular panel):
disp = st7789.ST7789(
    spi,
    cs=cs_pin,
    dc=dc_pin,
    rst=reset_pin,
    baudrate=BAUDRATE,
    width=135,
    height=240,
    x_offset=53,
    y_offset=40,
)

# Create blank image for drawing.
height = disp.width  # swap height/width to rotate to landscape
width = disp.height
image = Image.new("RGB", (width, height))
rotation = 180

# Get drawing object to draw on image.
draw = ImageDraw.Draw(image)

# Draw a black filled box to clear the image.
draw.rectangle((0, 0, width, height), outline=0, fill=(0, 0, 0))
disp.image(image, rotation)
x = 0

# Load a TTF font.
font = ImageFont.truetype("/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf", 22)

# Turn on the backlight
backlight = digitalio.DigitalInOut(board.D22)
backlight.value = True

while True:
    # Draw a black filled box to clear the image.
    draw.rectangle((0, 0, width, height), outline=0, fill=0)
    # Obtain output from last line of CSV file and create variables for each value
    output = run("tail -n 1 Output.csv", stdout=PIPE, stderr=PIPE, universal_newlines=True, shell=True)
    Ping = "Ping:" + str(output).split(",")[3]
    Download = "DL:" + str(output).split(",")[4]
    Upload = "UL:" + str(output).split(",")[5].replace("\\n'", "")

    # Draw the variables defined above to the screen.
    y = 0
    draw.text((x, y), Ping, font=font, fill="#FFFFFF")
    y += font.getsize(Ping)[1]
    draw.text((x, y), Download, font=font, fill="#FFFF00")
    y += font.getsize(Download)[1]
    draw.text((x, y), Upload, font=font, fill="#00FF00")

    # Display the image featuring the text.
    disp.image(image, rotation)

    # Sleep for 5 minutes before the next refresh.
    time.sleep(300)

Using PowerShell to Write Data to Azure Table Storage

As part of my continued quest to over-engineer and complicate things, I decided to update a script I’d recently written that performs a regular speed test of my Internet connection, so that it stores the results in Azure Table Storage rather than a local CSV file.

This was the first time that I’d used Azure Table Storage, and I was pleasantly surprised by how easy it was to integrate into my script. After creating the Storage Account via the Azure Portal and creating a table (named “perf”), it was simply a case of doing the following.

Step 1 – Install the PowerShell Module for Azure Table Storage

Install-Module Az
Install-Module AzTable

Step 2 – Connect to the Azure Storage Table

Before attempting to connect to the table, I needed to obtain the Access Key. To do this via the Azure Portal, I selected the Storage Account, chose Access Keys, hit Show Keys, and took a copy of the key.

I then needed to connect to the table using PowerShell (using the key obtained above); to do this, I ran the following:

$StorageAccountName = "Storage Account Name" # Enter the name of the storage account e.g. "BrendgStorage"
$Key = "Access Key" # Use the Access Key obtained via the Azure Portal
$StorageContext = New-AzStorageContext -StorageAccountName $StorageAccountName -StorageAccountKey $Key # Connect to the Storage Account
$Table = (Get-AzStorageTable -Context $StorageContext | Where-Object {$_.Name -eq "perf"}).CloudTable # Connect to the Perf table

Once this completed (without errors), I verified the $Table variable. This confirmed that I had successfully connected to the “perf” table.

Step 3 – Update Speed Test Script

I then needed to incorporate the above into the Internet test script and add logic to add the output of the Speedtest CLI tool to the Azure Table (Perf) rather than a CSV file.

The updated script can be found below. Lines 1-4 connect to the table named perf; the script then runs a continual loop that executes speedtest-cli and adds the output to the perf table using Add-AzTableRow, repeating every 5 minutes. As I like to live dangerously, there’s no logic to handle failures 😀.

As this is a super-simple table, I’m using a single partition key (“1”) and Ticks as the row key. I also manually specify the data type for each of the properties, as the default behaviour of Add-AzTableRow is to add everything as a String. I used Double as the data type for the Ping, Download, and Upload properties to enable me to query the data – for example, to show all entries where Ping was greater than 50 ms.

I do some string manipulation to pull out the values from the output of speedtest-cli ($SpeedTest), as this returns the test results as an array of strings (one line each for Ping, Download, and Upload).
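For illustration, the same token-splitting can be sketched in Python (a hypothetical translation of the $SpeedTest[n].split(" ")[1] logic, not part of the actual script):

```python
# Hypothetical Python version of the string manipulation: each line of
# `speedtest-cli --simple` output looks like "Ping: 12.345 ms", and the
# value is the second space-separated token.
def parse_simple_output(lines):
    values = {}
    for line in lines:
        label, value, _unit = line.split(" ")
        values[label.rstrip(":")] = float(value)
    return values

sample = ["Ping: 12.345 ms", "Download: 98.76 Mbit/s", "Upload: 19.85 Mbit/s"]
print(parse_simple_output(sample))  # → {'Ping': 12.345, 'Download': 98.76, 'Upload': 19.85}
```

Converting the values to floats mirrors the Edm.Double typing used when inserting the rows, so the same numbers could be compared or filtered numerically.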

$StorageAccountName = "Storage Account Name"
$Key = "Key"
$StorageContext = New-AzStorageContext -StorageAccountName $StorageAccountName -StorageAccountKey $Key
$Table = (Get-AzStorageTable -Context $StorageContext | Where-Object {$_.Name -eq "perf"}).CloudTable

$i = 0
while ($i -eq 0) {
    $PartitionKey = "1"
    $Time = Get-Date
    $SpeedTest = /usr/local/bin/speedtest-cli --simple
    Add-AzTableRow -table $Table -PartitionKey $PartitionKey -RowKey (Get-Date).Ticks -property @{"DateTime"=$Time;"Ping@odata.type"="Edm.Double";"Ping"=$SpeedTest[0].split(" ")[1];"Download@odata.type"="Edm.Double";"Download"=$SpeedTest[1].split(" ")[1];"Upload@odata.type"="Edm.Double";"Upload"=$SpeedTest[2].split(" ")[1]}
    Start-Sleep -Seconds 300
}

The script has been running for a few days now; I used Storage Explorer within the Azure Portal to view the table and confirm that data is being successfully collected. This made me realise that the DateTime property I add is redundant, as the Timestamp property stores this automatically on insert.

Understanding the Table service data model (REST API) – Azure Storage | Microsoft Docs was a useful reference document as I got to grips with Azure Table Storage.

The Joys of Unreliable Internet

I’ve had a strange issue with my Internet for the last few months. It’s rock solid during the day and I have no issues at all; however, from around 8pm onwards it becomes unreliable – ping times go through the roof or I lose connectivity intermittently. This used to occur one night a week or so, but for the past couple of weeks it has been happening 2-3 times a week, which is seriously affecting my Netflix consumption 😀.

I have an FTTP connection, and there doesn’t appear to be a fault with the fibre connection into my property, as the fibre light on the ONT remains green when the Internet grinds to a halt. I reported this to my ISP, who requested I contact them when the issue is active so that they can perform some diagnostics.

I decided to collect some data on the issue to help me identify any patterns and to serve as evidence for my ISP. As I mentioned in a previous post, I have a lot of spare Raspberry Pi’s, so I decided to put one of them to good use!

I connected the Pi directly to my router via Ethernet and wrote a quick and dirty PowerShell script that uses the Speedtest CLI Python script written by Matt Martz to perform a speed test of my connection every 5 minutes. Yes, you read that correctly – you can run PowerShell on the Raspberry Pi; here is a guide on how to set this up. I used PowerShell to call the Python script for no other reason than that I’d never done it before, so it seemed like a good experiment.

Below is the script that I ran; it uses the Speedtest CLI to perform a test every 5 minutes and writes the output to a CSV file.

$i = 0
while ($i -eq 0) {
    $Time = Get-Date
    $SpeedTest = /usr/local/bin/speedtest-cli --simple
    $Time.ToString() + "," + $SpeedTest[0] + "," + $SpeedTest[1] + "," + $SpeedTest[2] >> /home/pi/Share/SpeedTest.csv
    Start-Sleep -Seconds 300
}

Here is what the output looks like in Excel. I’m going to collect data for a few days before I crack open Power BI and do some analysis of the data.
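Ahead of the Power BI analysis, the CSV can already be queried with a few lines of Python. This sketch (using made-up sample rows, and assuming the column layout produced by the script above) counts the tests where ping exceeded 50 ms:

```python
import csv
from io import StringIO

# Made-up sample rows in the format the script writes:
# timestamp, "Ping: x ms", "Download: y Mbit/s", "Upload: z Mbit/s"
sample_csv = (
    "10/06/2021 20:00:00,Ping: 12.3 ms,Download: 98.7 Mbit/s,Upload: 19.8 Mbit/s\n"
    "10/06/2021 20:05:00,Ping: 87.2 ms,Download: 10.1 Mbit/s,Upload: 4.2 Mbit/s\n"
)

# In real use, replace StringIO(sample_csv) with
# open("/home/pi/Share/SpeedTest.csv").
high_ping = [row for row in csv.reader(StringIO(sample_csv))
             if float(row[1].split(" ")[1]) > 50]
print(len(high_ping))  # → 1
```

Grouping the flagged rows by hour of day would quickly show whether the problem really clusters around the 8pm window.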