Installing and Updating PowerShell Core on Windows using winget

This is more of a note for my future self than anything that is earth shattering!

Windows Terminal (which I 💗) recently notified me that I needed to update PowerShell Core. I could have clicked the link and downloaded and installed the updated MSI; however, I’m lazy and wanted a quicker way to do this 🏃‍♂️.

It turns out that PowerShell Core can easily be installed and updated on Windows using winget – what is winget you may ask?!?

The winget command line tool enables you to discover, install, upgrade, remove and configure applications on Windows 10 and Windows 11 computers. This tool is the client interface to the Windows Package Manager service.

Installing PowerShell Core using winget

winget install --id Microsoft.Powershell --source winget

Upgrading PowerShell Core using winget

winget upgrade --id Microsoft.Powershell --source winget

I force the source to winget rather than msstore as there are some limitations with the version of PowerShell Core available from the Microsoft Store (msstore), which are documented here.
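
If you want to see which version of PowerShell Core you currently have installed, and which source it came from, something like this should do the trick:

winget list --id Microsoft.Powershell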

Dockerizing a PowerShell Script

As I mentioned in my previous post, I’m currently in the process of consolidating the array of Raspberry Pis I have running around my house by migrating the various workloads running on them to Docker containers running on a single Raspberry Pi 4 that I have.

After my exploits migrating Pi-hole to Docker (which was far simpler than I anticipated!), next up was migrating a PowerShell script that I run every 5 minutes, which checks the speed of my Internet connection using the Speedtest CLI (which is written in Python) and writes the results to a CSV file.

Why do I do this? Check out “The Joys of Unreliable Internet” which explains more!

To Dockerize the script, I needed to find a container image that runs PowerShell, supports ARM (the CPU architecture that the Pi uses) and that I could install Python on – it seemed easier doing this than using a Linux image running Python and installing PowerShell. Fortunately, I found this image, which was perfect for my needs; the GitHub repo for this image can be found here.

I also needed a way to store the CSV file that the script’s output is written to on the host machine, to ensure it persisted and I didn’t lose any of the logging data. For this reason, I needed to use Docker Compose to create the container, as this provides the ability to expose a directory on the host machine directly to the container.

Here is my end solution in all its glory!

First up is the Dockerfile, which pulls this image, installs Python, creates a new directory (“/speedtest”), copies the SpeedTest.ps1 PowerShell script (which you can find here) into this directory and sets the script to run on container startup.

You may be wondering why I’m changing the shell (using SHELL) – I needed to switch away from PowerShell so that I could install Python, then I flip back to PowerShell to run the script. I also needed to run “update-ca-certificates --fresh” as I was experiencing some certificate errors that were causing the SpeedTest.ps1 script to fail.

Dockerfile

FROM clowa/powershell-core:latest

# Switch to /bin/sh so apt-get and pip can be used to install Python and speedtest-cli
SHELL ["/bin/sh", "-c"]
RUN apt-get update -y
RUN apt-get install -y python3
RUN apt-get install -y python3-pip
RUN pip3 install speedtest-cli
# Refresh the CA certificates to work around the certificate errors mentioned above
RUN update-ca-certificates --fresh

# Switch back to PowerShell for the rest of the build and at runtime
SHELL ["pwsh", "-command"]
RUN mkdir speedtest
COPY ./SpeedTest.ps1 /speedtest/SpeedTest.ps1
WORKDIR /speedtest
# Run the script when the container starts
ENTRYPOINT ["pwsh"]
CMD ["SpeedTest.ps1"]

To map a directory on the host machine to the container, I needed to create the container using Docker Compose (rather than simply building and running it from the Dockerfile on its own); below is the docker-compose.yml file that I created.

This names the container “runner”, maps “/home/pi/speedtest/logs” on the host machine to “/etc/speedtest/logs” within the container and configures the container to restart should the SpeedTest.ps1 script exit, using the “restart: unless-stopped” restart policy.

docker-compose.yml

services:
  runner:
    build: ./
    volumes:
      - /home/pi/speedtest/logs:/etc/speedtest/logs
    restart: unless-stopped

Finally, here is the SpeedTest.ps1 script, which executes the speedtest-cli Python script and writes the output to “/etc/speedtest/logs/SpeedTest.csv” within the container, which is mapped to “/home/pi/speedtest/logs/SpeedTest.csv” on the host machine.

SpeedTest.ps1

# Loop forever, running a speed test every 5 minutes
$i = 0
while ($i -eq 0)
{
    $Time = Get-Date
    # speedtest-cli --simple returns three lines: Ping, Download and Upload
    $SpeedTest = speedtest-cli --simple
    # Take the numeric value from each line and append a CSV row of time, ping, download, upload
    $Time.ToString() + "," + $SpeedTest[0].split(" ")[1] + "," + $SpeedTest[1].split(" ")[1] + "," + $SpeedTest[2].split(" ")[1]  >> "/etc/speedtest/logs/SpeedTest.csv"
    Start-Sleep -Seconds 300
}

To get this container up and running I created a directory on the host machine “/home/pi/speedtest” and placed the three files within this directory:

  • SpeedTest.ps1
  • Dockerfile
  • docker-compose.yml

I then executed “docker-compose up -d” from within the “/home/pi/speedtest” directory to build, create and start the container; -d runs the container in detached (background) mode rather than interactively.
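
If you want to check that the container is up and watch its output before any results land in the CSV, the standard Docker Compose commands (run from the same directory) will do it:

docker-compose ps
docker-compose logs -f runner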

I then waited a while and checked the SpeedTest.csv log file within “/home/pi/speedtest/logs” to confirm that the script was running!

Result….now on to my next Dockerization project!

The three files used to create this container can be found on GitHub here.

Adventures in running Pi-hole within a Docker container on a Raspberry Pi

Pi-hole is a DNS sinkhole that protects devices from unwanted content, without installing any client-side software. I’ve run Pi-hole on a Raspberry Pi Zero for the last year or so and have found it easy to use (it’s literally set and forget) and super effective. I have a proliferation of Pis around my house and wanted to consolidate them by migrating the various workloads running on them to Docker containers running on a single Raspberry Pi 4 that I have.

I decided to start with migrating Pi-hole from my Pi Zero to a Docker container running on my Pi 4; I chose to do this first as there is a pre-built Pi-hole image for Docker and fantastic documentation.

The first step was to build the Pi 4. I used the Raspberry Pi Imager tool to prepare an SD card with Raspberry Pi OS (formerly known as Raspbian). As I’m running this headless, I used the advanced options within the tool to configure the hostname of the device, enable SSH, set the locale and configure a password – it saved the hassle of plugging in a keyboard and monitor and doing this manually post-install.

Once my Pi 4 was running I connected over SSH (which you can do in Windows by running ssh pi@hostname) and enabled VNC via raspi-config which also gives me GUI access to the Pi.

I then needed to install Docker and Docker Compose; I previously posted about how to do this here. Here are the commands I ran on the Pi to do this:

# Update the package lists and upgrade any existing packages
sudo apt-get update && sudo apt-get upgrade
# Install Docker using the convenience script
curl -sSL https://get.docker.com | sh
# Allow the current user to run docker without sudo
sudo usermod -aG docker ${USER}
# Install Docker Compose
sudo pip3 install docker-compose
# Start the Docker service automatically at boot
sudo systemctl enable docker

Once this had completed, which took less than 5 minutes, I rebooted the Pi (sudo reboot from a terminal).
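
After the reboot, a quick version check is an easy way to confirm that both Docker and Docker Compose installed correctly:

docker --version
docker-compose --version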

Now that I had the Pi up and running along with Docker, I could create the Pi-hole container. To do this I took the example Docker Compose YAML file and edited it to meet my requirements – saving this as docker-compose.yml:

  • Run in host mode – by specifying network_mode: “host”. This setting is described here; it means that the container will share an IP address with the host machine (in my case the Pi 4). I used this to keep things simple – I may regret this decision at a later date 🤦‍♂️.
  • Name the container – using container_name. I actually kept this as the default setting of pihole.
  • Set the timezone – setting this to Europe/London, using this article to determine the correct value 🌍.
  • Specify a password for the web admin interface – this is configured with WEBPASSWORD; I used a password slightly more complex than “password” 🔒.

A copy of the docker-compose.yml file I created can also be found here.

version: "3"

# More info at https://github.com/pi-hole/docker-pi-hole/ and https://docs.pi-hole.net/
services:
  pihole:
    container_name: pihole
    image: pihole/pihole:latest
    # For DHCP it is recommended to remove these ports and instead add: network_mode: "host"
    network_mode: "host"
    environment:
      TZ: 'Europe/London'
      WEBPASSWORD: 'password'
    # Volumes store your data between container upgrades
    volumes:
      - './etc-pihole:/etc/pihole'
      - './etc-dnsmasq.d:/etc/dnsmasq.d'    
    restart: unless-stopped

I then created a directory within /home/pi named “pihole”, copied the docker-compose.yml file into this and then ran the following command from within this directory to build and run the container:

docker-compose up -d

Within a few minutes I had a shiny new Pi-hole container up and running!

Next step was to update the DHCP settings on my router to use Pi-hole as the default DNS server it provides to devices on my network. I did this by specifying the IP address of the Pi 4 as the preferred DNS server for DHCP clients; I obtained the IP address of the Pi by running ifconfig from a terminal (I know I should really be using a static IP address on the Pi 😉). I won’t cover how I updated my router, due to the multitude of different routers out there. I then ran ipconfig /release and ipconfig /renew on my Windows machine to refresh the DNS settings; my other devices will pick up the new settings when they renew their DHCP lease, which is daily.
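
If you want to confirm that a client is actually resolving DNS through Pi-hole, you can point nslookup directly at the Pi (pi4.local is my Pi’s hostname – substitute your own hostname or IP address):

nslookup example.com pi4.local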

I then browsed to the web interface using http://hostname/admin – in my case http://pi4.local/admin – hit the login button and authenticated using the password I’d specified in the docker-compose.yml file.

The Pi-hole container had been running for around an hour, with minimal web activity (as I was writing this post) when I took this screenshot – it’s staggering the number of queries that it had blocked 😲.

Why is my Python code so slow?!?

I begrudgingly started using Python when I got my first Raspberry Pi (as PowerShell wasn’t available for the Pi at that point). As a non-developer my development experience was limited to writing PowerShell, with a sprinkling of C#.

The learning curve wasn’t too bad – I found Automate the Boring Stuff with Python an invaluable resource, along with the associated course on Udemy. I have grown to love Python over the last few years; it’s so flexible and there are a ton of resources out there to help when you get stuck (as I invariably do!) 😕.

The one thing I love about working in tech is the constant learning – every day is a school day! I learnt last week how to profile code in Python using a simple one-line command courtesy of @Lee_Holmes:

python -m cProfile -s cumtime 'C:\Profiling.py'

This command inspects a script (in the example above, Profiling.py) and outputs details of the cumulative execution time of each function within the script. This is super-useful when trying to pinpoint why a script is running so slowly 🐌.

I wrote the script below (Profiling.py), which contains three functions that are each called once; each function includes a sleep to simulate its execution time – with function 1 being the quickest and function 3 the slowest. I then ran the profiler on this to see what it reported 🔍.
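
The original script isn’t reproduced inline here, but a minimal sketch along these lines matches the description (the function names and sleep durations are illustrative assumptions):

import time

def function_1():
    time.sleep(1)   # quickest

def function_2():
    time.sleep(3)

def function_3():
    time.sleep(5)   # slowest

function_1()
function_2()
function_3()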

You can see in the output below that the profiler reports the cumulative execution time of each function – the script only calls each function once, so this time matches the sleep I included within the function. In the real world, where each function is likely to be called multiple times, you’d have a better view of where time is being spent in execution and be able to identify opportunities to tweak specific functions to reduce the overall execution time.

I’m sure I’ll be putting this newfound knowledge to good use! 😀

Continuous Integration for Nintendo Game Boy development using GitHub Actions

I previously posted about writing Hello World for the Nintendo Game Boy using the GBDK.

I’ve been meaning to spend some quality time with GitHub Actions and found the perfect excuse – doing Continuous Integration for Game Boy development 😀. I bet I can count the people that are interested in this topic in the entire world on one hand! In any case it was a great excuse to learn GitHub Actions 🤖.

After much trial and error (although admittedly a lot less than I thought!), I ended up with the GitHub Actions workflow below; this contains a single job with multiple steps that does the following:

  • Triggers when code is pushed to the main branch of the repo
  • Checks out the code in the repo
  • Downloads a copy of GBDK from https://github.com/gbdk-2020/gbdk-2020/releases/latest/download/gbdk-win.zip and extracts the contents to the Windows based runner for the Action (this is required to compile the code)
  • Uses lcc.exe (from the GBDK) to build the C source file (main.c) into a Game Boy ROM (helloworld.gb)
  • Creates a release using a random number (generated using PowerShell) for the tag and release name
  • Uploads the helloworld.gb ROM to the release

name: Game Boy CI
# Controls when the workflow will run
on:
  # Triggers the workflow on push
  push:
    branches: [ main ]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: windows-latest
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v2
      - name: Compile C source code into .GB ROM file
        run: |
          Invoke-WebRequest -Uri https://github.com/gbdk-2020/gbdk-2020/releases/latest/download/gbdk-win.zip -Outfile GBDK.zip
          Expand-Archive GBDK.zip -DestinationPath GBDK
          ./"GBDK\gbdk\bin\lcc.exe" -o helloworld.gb main.c
          echo "RANDOM=$(Get-Random)" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
      - name: Create Release
        id: create_release
        uses: actions/create-release@v1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # This token is provided by Actions, you do not need to create your own token
        with:
          tag_name: ${{env.RANDOM}}
          release_name: Release ${{env.RANDOM}}
          draft: false
          prerelease: false
      - name: Upload Release Asset
        id: upload-release-asset 
        uses: actions/upload-release-asset@v1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          upload_url: ${{ steps.create_release.outputs.upload_url }} # This pulls from the CREATE RELEASE step above, referencing its ID to get its outputs object, which includes an `upload_url`. See this blog post for more info: https://jasonet.co/posts/new-features-of-github-actions/#passing-data-to-future-steps
          asset_path: helloworld.gb
          asset_name: helloworld.gb
          asset_content_type: application/octet-stream

Here’s the Action in “action”

…and here is the result, a freshly compiled GB ROM.

Here is a direct link to the repo, if you’d like a closer look.

Writing “Hello World” for a Nintendo Game Boy!

I am a huge retro-gaming geek – I love reliving my childhood and playing classics such as Super Mario Land and Donkey Kong. The Nintendo Game Boy was (and still is!) one of my favourite systems. I’ve always stayed clear of doing any development on retro-gaming systems from the 80s/90s though, as this typically involves writing code in assembly language and the learning curve for a non-dev like me is far too high 🧠.

I recently discovered the Game Boy Development Kit (GBDK) on GitHub, which allows you to write software for the Nintendo Game Boy (and a few other systems too!) in C.

I’m certainly no expert in C, however I was fairly sure that I could knock up “Hello World” without too much of an issue.

I downloaded the GBDK and extracted this to a folder (no install is required). I then set about writing this masterpiece –

#include <stdio.h>

void main()
{
    int counter = 1;

    while (counter <= 16)
    {
        printf("\nHello World!");
        counter++;
    }
}

This is similar to the BASIC – 10 PRINT “Hello World”, 20 GOTO 10 – that fills the screen with “Hello World”, which I first wrote on my Amstrad CPC 6128 back in 1990 😀.

Once I’d saved the file (naming this “helloworld.c”), I then compiled the code using the following command. This creates a ROM for the Game Boy named helloworld.gb

./gbdk\bin\lcc.exe -o helloworld.gb helloworld.c

Once the code had compiled – which literally took a second ⏱️ – I headed over to https://virtualconsoles.com/online-emulators/gameboy/ which is an online emulator for the Game Boy and loaded up my newly created ROM.

Voila!

12-year-old me would have been amazed!

Inspecting Azure Function logs from the command line 🔎

I’ve been playing around with Azure Functions recently. One thing I like about them is how you can code, test and view the logs in real time from within the Azure Portal – below you can see me testing my Burger Tax function!

One thing I was interested in doing was getting access to the logs remotely (directly from my machine rather than going through the Azure Portal). It turns out that you can do this using the Azure Functions Core Tools.

I installed the Core Tools (the Azure CLI is a pre-requisite) and, after logging into Azure using az login, I could connect to the logs using this command:

func azure functionapp logstream <FunctionAppName>

In the example below, I connected to an Azure Function App named burgertax26343 (a very catchy name, I know!). To over-complicate things I ran this within Ubuntu running on WSL on my Windows 11 device – you can of course run this natively on Windows, macOS and Linux.

I then fired up PowerShell to send a test request to the Azure Function App using Invoke-RestMethod (this example is a Starling Bank webhook; read more about how I’m using this here).
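
The exact request isn’t shown here, but it looks something like this – the function route and payload file below are placeholders rather than the real values:

$body = Get-Content ./sample-webhook.json -Raw
Invoke-RestMethod -Method Post -Uri "https://burgertax26343.azurewebsites.net/api/BurgerTax" -ContentType "application/json" -Body $body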

After running this, I flipped back to the other terminal window, where I could see the output – the function uses Write-Host to confirm that it has triggered and outputs the name of the merchant (Next in this case).

How an Azure Function is keeping me healthy! 🏃‍♂️

You may have heard of the Sugar Tax……introducing the Burger Tax! 🍔💷

I’ve created a solution using the Starling Bank developer API and an Azure Function that “taxes” me whenever I buy junk food (specifically McDonalds!), moving 20% of the transaction into a Savings Space within my Starling Bank account.

I’ve put together a video that walks through the solution end-to-end.

The code for the Azure Function is available here.

Enjoy!

Automating Azure Function App creation with the Azure CLI

I’m currently in the process of writing an Azure Function that I’ll be using with a Starling Bank webhook to “tax” myself every time I purchase junk food…..more on that in a future post though!

I love automating things, and that, coupled with getting bored of using the Azure Portal, led me to take a closer look at the Azure CLI to automate the creation and configuration of the Function App.

The Azure CLI can be installed on Windows, macOS and Linux. I installed it on Ubuntu which runs on my Windows 11 device using Windows Subsystem for Linux (WSL). I wanted to experience it on a non-Windows platform, which is why I used the convoluted approach of running it on Linux on Windows 😀. Installation was straightforward and required a single command:

curl -L https://aka.ms/InstallAzureCli | bash

My aim was to use the Azure CLI to create an Azure Function App that runs on Windows with the PowerShell runtime based in the UK South region, I also wanted to add a key/value pair to the Application Settings which I will use to store my Personal Access Token from Starling. The PAT will be used to connect to my bank account and β€œtax” fast food purchases! I could/should have used Azure Key Vault for this but didn’t want to introduce extra complexity into a hobby project.

After logging into my Azure subscription using az login from Ubuntu, I ran the following to declare the appname and region variables. I’m lazy and use appname for the Resource Group, Storage Account and Function App name. I used the Bash $RANDOM variable to append a random number to the app name, which was useful during testing, so I didn’t have to update the app name manually after each run of the script (and there were many as I got to grips with the syntax of the Azure CLI!)

appname=burgertax$RANDOM
region=uksouth

I then created a Resource Group to house the Function App, located in the UK South region and named burgertax (appended with a random number).

az group create --name $appname --location $region

Once the Resource Group had been created, I created a Storage Account which is used to host the content of the Function App.

az storage account create \
  --name $appname \
  --location $region \
  --resource-group $appname \
  --sku Standard_LRS

Now that I had a Resource Group and Storage Account, I could create the Function App.

az functionapp create \
  --name $appname \
  --storage-account $appname \
  --consumption-plan-location $region \
  --resource-group $appname \
  --os-type Windows \
  --runtime powershell \
  --functions-version 3

Once this had completed, I ran the following to add a key/value pair to the Application Settings of the Function App:

az functionapp config appsettings set --name $appname --resource-group $appname --settings "MyKey=MyValue"

…and this to verify that it had created the Application Setting.

az functionapp config appsettings list --name $appname --resource-group $appname

Lastly, I fired up the Azure Portal to admire my automation.

Now the fun part……writing the code for my “Burger Tax” solution 🍔.

Inspecting webhook calls with ngrok

I’ve been experimenting with the Starling Bank developer APIs, which I wrote about in Get access to my bank account using Python….why not?!? and Creating an Alexa skill to read my bank balance.

The next thing on my list to experiment with was creating a webhook and getting this to call an Azure Function to “do something”………which I’m yet to decide exactly what 😀. It’s possible to create a webhook to fire for three specific event types (or any combination of the three) with Starling Bank:

  • Feed Item (for example making a purchase)
  • Standing Order (creating/editing a standing order)
  • Standing Order Payment

Before I started jumping into creating an Azure Function to “do something” based upon a webhook call from my Starling Bank account, I needed to see what these calls look like, so that I could figure out the data I had to play with. This is where I had the bright idea of using ngrok. My theory was that I could expose an endpoint on my local machine, publish this to the Internet, configure the Starling Bank webhook to call this public URL and then inspect the request made. I’ve previously written about ngrok here.

This is how I did it:

Step 1 – Configure ngrok

I downloaded the ngrok agent from here and copied it to my Raspberry Pi (you can also run it on Windows, macOS and other variants of Linux).

I also created a free account with ngrok as this unlocks some additional features and provides an Authtoken to authenticate your ngrok agent.

I then ran the following, from the directory that ngrok had been copied to, to register my Authtoken (this isn’t my Authtoken BTW).
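
The command itself isn’t reproduced above; depending on the agent version it looks something like this, with a placeholder standing in for the real token (newer agents use “ngrok config add-authtoken” instead):

./ngrok authtoken <your-authtoken>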

Once that completed, I then created a tunnel:

ngrok http 80

Once this command had run, it provided the external URLs that the local port (80) has been exposed on (by default it will create both an HTTP and an HTTPS endpoint). It’s then as simple as configuring the webhook to call the HTTPS endpoint that has been exposed.

Step 2 – Configure the Starling Bank Webhook

I logged into my developer account and created the webhook; I called this ngrok and in the Payload URL entered the public HTTPS endpoint exposed by ngrok. I selected Feed Item as I’m only interested in data generated by purchases and then hit Create V2 Webhook.

Step 3 – Create a Transaction

To get the webhook to fire so that I could inspect it using ngrok I needed to spend some money 💷! I headed over to Amazon and bought something using my Starling Bank account.

Step 4 – Inspecting the Webhook Call

I then flipped back over to my Raspberry Pi and launched the Web Interface for ngrok – which by default is http://127.0.0.1:4040.

From here I could see the call made by the webhook. It had failed because I didn’t have anything listening on port 80, which is fine – all I wanted was to inspect the webhook call to see the JSON included. Which I could, in all its glory:

Now that I know what is included, I’m going to create an Azure Function that the webhook will call (instead of the current ngrok black hole) to do something with the transaction data……I am tempted to “tax” myself when I buy fast food by transferring a percentage of the transaction to a savings “space” within my account. I should be able to use the counterPartyName and amount to do this.
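
As a rough illustration of the idea (this is only a sketch, not the eventual Azure Function – the exact shape of the webhook JSON, the field paths and the percentage are assumptions at this point):

# Assume the webhook body has already been converted from JSON into $request
$merchant = $request.content.counterPartyName
$amount = $request.content.amount.minorUnits        # assumption: transaction amount in pence
if ($merchant -eq "McDonalds") {
    $tax = [math]::Round($amount * 0.2)             # assumption: take 20% as the "tax"
    # ...call the Starling API here to move $tax into the savings space
}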