In the two months that I’ve had Pi-hole up and running, the Docker image has been updated twice. I put together the following script, which I run every time a new image is released; it deletes the container and image, then rebuilds using the latest available image.
Configuration and logs are preserved, as Pi-hole stores these on the host system rather than within the container itself, so there’s no need to worry about losing them between updates.
Just make sure you have a secondary DNS server set up within your network; otherwise, DNS resolution may fail while the Pi-hole container is stopped.
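For reference, the script boils down to a handful of Docker commands along the lines of the sketch below (the container name pihole and the /home/pi/pihole Compose directory are assumptions based on my setup, so adjust to suit):
cd /home/pi/pihole
# Stop and remove the existing Pi-hole container
docker stop pihole
docker rm pihole
# Delete the old image and pull the latest one
docker rmi pihole/pihole
docker pull pihole/pihole:latest
# Recreate the container from the latest image
docker-compose up -d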
This is more of a note for my future self than anything earth-shattering!
Windows Terminal (which I love) recently notified me that I needed to update PowerShell Core. I could have clicked the link, then downloaded and installed the updated MSI; however, I’m lazy and wanted a quicker way to do this.
It turns out that PowerShell Core can easily be installed and updated on Windows using winget – what is winget you may ask?!?
The winget command line tool enables you to discover, install, upgrade, remove and configure applications on Windows 10 and Windows 11 computers. This tool is the client interface to the Windows Package Manager service.
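The command I ran was along these lines (Microsoft.PowerShell is the package ID in the winget repository; the same command installs or updates to the latest release):
winget install --id Microsoft.PowerShell --source winget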
I force the source to winget rather than msstore as there are some limitations with the version of PowerShell Core available from the Microsoft Store (msstore), which are documented here (excerpt from the documentation below).
As I mentioned in my previous post, I’m currently in the process of consolidating the array of Raspberry Pis I have running around my house by migrating the various workloads running on them to Docker containers on a single Raspberry Pi 4.
After my exploits migrating Pi-hole to Docker (which was far simpler than I anticipated!), next up was a PowerShell script that I run every 5 minutes; it checks the speed of my Internet connection using the Speedtest CLI (which is written in Python) and writes the results to a CSV file.
To Dockerize the script, I needed a container image that runs PowerShell, supports ARM (the CPU architecture the Pi uses) and that I could install Python on – it seemed easier to do this than to take a Linux image running Python and install PowerShell on it. Fortunately, I found this image, which was perfect for my needs; the GitHub repo for it can be found here.
I also needed a way to store the CSV file that the script writes on the host machine (rather than in the container itself), to make sure it persisted and I didn’t lose any logging data. I decided to use Docker Compose to create the container, as it provides a straightforward way to expose a directory on the host machine directly to the container.
Here is my end solution in all its glory!
First up is the Dockerfile, which pulls this image, installs Python, creates a new directory “/speedtest”, copies the SpeedTest.ps1 PowerShell script (which you can find here) into this directory and sets it to run on container startup.
You may be wondering why I’m changing the shell (using SHELL): I needed to switch away from PowerShell so that I could install Python, and I then flip back to PowerShell to run the script. I also needed to run “update-ca-certificates --fresh”, as I was experiencing some certificate errors that were causing the SpeedTest.ps1 script to fail.
Dockerfile
FROM clowa/powershell-core:latest
SHELL ["/bin/sh", "-c"]
RUN apt-get update -y
RUN apt-get install -y python3
RUN apt-get install -y python3-pip
RUN pip3 install speedtest-cli
RUN update-ca-certificates --fresh
SHELL ["pwsh", "-command"]
RUN mkdir speedtest
COPY ./SpeedTest.ps1 /speedtest/SpeedTest.ps1
WORKDIR /speedtest
ENTRYPOINT ["pwsh"]
CMD ["SpeedTest.ps1"]
To map a directory on the host machine into the container, I used Docker Compose rather than plain docker run commands, as this approach was simpler. The docker-compose.yml file I created does the following.
This names the container runner and maps “/home/pi/speedtest/logs” on the host machine to “/etc/speedtest/logs” within the container. It also configures the container to restart should the SpeedTest.ps1 script exit, using the “restart: unless-stopped” restart policy.
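Reconstructed from that description, the file looks something like this (a sketch: the service name runner and the build: . line are assumptions, and the exact file is on GitHub, linked at the end of this post):
version: "3"
services:
  runner:
    container_name: runner
    # Build the image from the Dockerfile in this directory
    build: .
    # Map the logs directory on the host into the container so the CSV file persists
    volumes:
      - '/home/pi/speedtest/logs:/etc/speedtest/logs'
    # Restart the container if the SpeedTest.ps1 script exits
    restart: unless-stopped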
Finally, there is the SpeedTest.ps1 script itself, which executes the speedtest-cli Python script and writes the output to “/etc/speedtest/logs/SpeedTest.csv” within the container, which is mapped to “/home/pi/speedtest/logs/SpeedTest.csv” on the host machine.
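In essence it does something like the following (a sketch only; the loop, the five-minute interval and the exact CSV handling here are assumptions, and the real script is on GitHub, linked below):
# Run a speed test every 5 minutes and append the CSV output to the log file
while ($true) {
    $result = speedtest-cli --csv
    Add-Content -Path '/etc/speedtest/logs/SpeedTest.csv' -Value $result
    Start-Sleep -Seconds 300
}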
To get this container up and running I created a directory on the host machine “/home/pi/speedtest” and placed the three files within this directory:
SpeedTest.ps1
Dockerfile
docker-compose.yml
I then executed “docker-compose up -d” from within the “/home/pi/speedtest” directory to build, create and start the container; -d runs the container in detached (background) mode rather than interactively.
I then waited a while and checked the SpeedTest.csv log file within “/home/pi/speedtest/logs” to confirm that the script was running!
Result….now on to my next Dockerization project!
The three files used to create this container can be found on GitHub here.
Pi-hole is a DNS sinkhole that protects devices from unwanted content, without installing any client-side software. I’ve run Pi-hole on a Raspberry Pi Zero for the last year or so and have found it easy to use (it’s literally set and forget) and super effective. I have a proliferation of Pis around my house and wanted to consolidate them by migrating the various workloads running on them to Docker containers on a single Raspberry Pi 4.
I decided to start by migrating Pi-hole from my Pi Zero to a Docker container running on my Pi 4; I chose to do this first as there is a pre-built Pi-hole image for Docker and fantastic documentation.
The first step was to build the Pi 4. I used the Raspberry Pi Imager tool to prepare an SD card with Raspberry Pi OS (formerly known as Raspbian). As I’m running this headless, I used the advanced options within the tool to configure the hostname of the device, enable SSH, set the locale and configure a password – it saved the hassle of plugging in a keyboard and monitor and doing this manually post-install.
Once my Pi 4 was running, I connected over SSH (which you can do in Windows by running ssh pi@hostname) and enabled VNC via raspi-config, which also gives me GUI access to the Pi.
I then needed to install Docker and Docker Compose; I previously posted about how to do this here. Here are the commands I ran on the Pi to do this:
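Roughly, they were as follows (a sketch based on the usual approach on the Pi at the time: Docker’s convenience script, plus Docker Compose installed via pip; the exact commands are in the linked post):
# Install Docker using the convenience script
curl -sSL https://get.docker.com | sh
# Allow the pi user to run Docker without sudo
sudo usermod -aG docker pi
# Install pip, then use it to install Docker Compose
sudo apt-get install -y python3-pip
sudo pip3 install docker-compose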
Once this had completed, which took less than 5 minutes, I rebooted the Pi (sudo reboot from a terminal).
Now that I had the Pi up and running along with Docker, I could create the Pi-hole container. To do this I took the example Docker Compose YAML file and edited it to meet my requirements, saving this as docker-compose.yml. Specifically, I made the following changes:
Run in host mode – by specifying network_mode: "host". This setting is described here; it means the container shares an IP address with the host machine (in my case the Pi 4). I used this to keep things simple, and I may regret this decision at a later date.
Set the container name – using container_name. I actually kept this as the default setting of pihole.
Set the timezone – setting TZ to Europe/London, using this article to determine the correct value.
Specify a password for the web admin interface – this is configured with WEBPASSWORD; I used a password slightly more complex than "password".
A copy of the docker-compose.yml file I created can also be found here.
version: "3"
# More info at https://github.com/pi-hole/docker-pi-hole/ and https://docs.pi-hole.net/
services:
  pihole:
    container_name: pihole
    image: pihole/pihole:latest
    # For DHCP it is recommended to remove these ports and instead add: network_mode: "host"
    network_mode: "host"
    environment:
      TZ: 'Europe/London'
      WEBPASSWORD: 'password'
    # Volumes store your data between container upgrades
    volumes:
      - './etc-pihole:/etc/pihole'
      - './etc-dnsmasq.d:/etc/dnsmasq.d'
    restart: unless-stopped
I then created a directory within /home/pi named “pihole”, copied the docker-compose.yml file into this and then ran the following command from within this directory to build and run the container:
docker-compose up -d
Within a few minutes I had a shiny new Pi-hole container up and running!
The next step was to update the DHCP settings on my router so that Pi-hole is the default DNS server it hands out to devices on my network. I did this by specifying the IP address of the Pi 4 as the preferred DNS server for DHCP clients; I obtained the IP address of the Pi by running ifconfig from a terminal (I know I should really be using a static IP address on the Pi). I won’t cover how I updated my router, due to the multitude of different routers out there. I then ran ipconfig /release and ipconfig /renew on my Windows machine to refresh the DNS settings; my other devices will pick up the new settings when they renew their DHCP lease, which happens daily.
I then browsed to the web interface using http://hostname/admin (in my case http://pi4.local/admin), hit the login button and authenticated using the password I’d specified in the docker-compose.yml file.
The Pi-hole container had been running for around an hour, with minimal web activity (as I was writing this post), when I took this screenshot – the number of queries it had already blocked is staggering.
I begrudgingly started using Python when I got my first Raspberry Pi (as PowerShell wasn’t available for the Pi at that point). As a non-developer my development experience was limited to writing PowerShell, with a sprinkling of C#.
The learning curve wasn’t too bad; I found Automate the Boring Stuff with Python an invaluable resource, along with the associated course on Udemy. I have grown to love Python over the last few years: it’s so flexible, and there are a ton of resources out there to help when you get stuck (as I invariably do!).
The one thing I love about working in tech is the constant learning – every day is a school day! I learnt last week how to profile code in Python using a simple one-line command courtesy of @Lee_Holmes:
python -m cProfile -s cumtime 'C:\Profiling.py'
This command runs a script (in the example above, Profiling.py) under the profiler and outputs the cumulative execution time of each function within the script. This is super useful when trying to pinpoint why a script is running slowly.
I wrote the script below (Profiling.py), which contains three functions that are each called once; each function includes a sleep to simulate its execution time, with function 1 being the quickest and function 3 the slowest. I then ran the profiler on it to see what it reported.
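The script looks something like this (the exact sleep durations are assumptions; the point is simply that each function takes progressively longer):
import time

def function_1():
    time.sleep(1)  # quickest

def function_2():
    time.sleep(2)

def function_3():
    time.sleep(3)  # slowest

function_1()
function_2()
function_3()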
The profiler reports the cumulative execution time of each function – the script only calls each function once, so this time matches the sleep included within each function. In the real world, where each function is likely to be called multiple times, this gives a much better view of where time is being spent and helps identify opportunities to tweak specific functions to reduce the overall execution time.
I’m sure I’ll be putting this newfound knowledge to good use!
I previously posted about writing Hello World for the Nintendo Game Boy using the GBDK.
I’ve been meaning to spend some quality time with GitHub Actions and found the perfect excuse: Continuous Integration for Game Boy development. I bet I could count the people in the entire world who are interested in this topic on one hand! In any case, it was a great excuse to learn GitHub Actions.
After much trial and error (although admittedly a lot less than I thought!), I ended up with the GitHub Actions workflow below. It contains a single job with multiple steps that does the following:
Triggers when code is pushed to the main branch of the repo
Uses lcc.exe (from the GBDK) to build the C source file (main.c) into a Game Boy ROM (helloworld.gb)
Creates a release using a random number (generated using PowerShell) for the tag and release name
Uploads the helloworld.gb ROM to the release
name: Game Boy CI
# Controls when the workflow will run
on:
  # Triggers the workflow on push
  push:
    branches: [ main ]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: windows-latest
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v2
      - name: Compile C source code into .GB ROM file
        run: |
          Invoke-WebRequest -Uri https://github.com/gbdk-2020/gbdk-2020/releases/latest/download/gbdk-win.zip -Outfile GBDK.zip
          Expand-Archive GBDK.zip -DestinationPath GBDK
          ./"GBDK\gbdk\bin\lcc.exe" -o helloworld.gb main.c
          echo "RANDOM=$(Get-Random)" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
      - name: Create Release
        id: create_release
        uses: actions/create-release@v1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # This token is provided by Actions, you do not need to create your own token
        with:
          tag_name: ${{env.RANDOM}}
          release_name: Release ${{env.RANDOM}}
          draft: false
          prerelease: false
      - name: Upload Release Asset
        id: upload-release-asset
        uses: actions/upload-release-asset@v1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          upload_url: ${{ steps.create_release.outputs.upload_url }} # This pulls from the CREATE RELEASE step above, referencing its ID to get its outputs object, which includes an `upload_url`. See this blog post for more info: https://jasonet.co/posts/new-features-of-github-actions/#passing-data-to-future-steps
          asset_path: helloworld.gb
          asset_name: helloworld.gb
          asset_content_type: application/octet-stream
Here’s the Action in “action”
…and here is the result, a freshly compiled GB ROM.
Here is a direct link to the repo, if you’d like a closer look.
I am a huge retro-gaming geek; I love reliving my childhood and playing classics such as Super Mario Land and Donkey Kong. The Nintendo Game Boy was (and still is!) one of my favourite systems. I’ve always steered clear of doing any development on retro-gaming systems from the 80s/90s though, as this typically involves writing code in assembly language and the learning curve for a non-dev like me is far too high.
I recently discovered the Game Boy Development Kit (GBDK) on GitHub, which allows you to write software for the Nintendo Game Boy (and a few other systems too!) in C.
I’m certainly no expert in C; however, I was fairly sure that I could knock up “Hello World” without too much of an issue.
I downloaded the GBDK and extracted it to a folder (no install is required), then set about writing this masterpiece:
#include <stdio.h>

void main()
{
    int counter = 1;
    while (counter <= 16)
    {
        printf("\nHello World!");
        counter++;
    }
}
This is similar to the BASIC program – 10 PRINT “Hello World”, 20 GOTO 10 – that fills the screen with “Hello World”, which I first wrote on my Amstrad CPC 6128 back in 1990.
Once I’d saved the file (naming it “helloworld.c”), I compiled the code using the following command, which creates a Game Boy ROM named helloworld.gb:
./gbdk\bin\lcc.exe -o helloworld.gb helloworld.c
Once the code had compiled, which literally took a second, I headed over to https://virtualconsoles.com/online-emulators/gameboy/, an online emulator for the Game Boy, and loaded up my newly created ROM.
I’ve been playing around with Azure Functions recently. One thing I like about them is how you can code, test and view the logs in real time from within the Azure Portal – below you can see me testing my Burger Tax function!
One thing I was interested in doing was getting access to the logs remotely (directly from my machine rather than going through the Azure Portal). It turns out that you can do this using the Azure Functions Core Tools.
I installed the Core Tools (the Azure CLI is a prerequisite) and, after logging into Azure using az login, I could connect to the logs with a single command.
In the example below, I connected to an Azure Function App named burgertax26343 (a very catchy name, I know!). To over-complicate things, I ran this within Ubuntu running on WSL on my Windows 11 device – you can of course run this natively on Windows, macOS and Linux.
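The Core Tools command takes the name of the Function App; using the app name above, it looks like this:
func azure functionapp logstream burgertax26343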
I then fired up PowerShell to send a test request to the Azure Function App using Invoke-RestMethod (this example is a Starling Bank webhook; you can read more about how I’m using this here).
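As an illustration (the function URL, route and payload file below are placeholders rather than my real values), the request looked something like this:
# Hypothetical example: POST a sample Starling webhook payload to the HTTP-triggered function
$body = Get-Content -Path './sample-webhook.json' -Raw
Invoke-RestMethod -Uri 'https://burgertax26343.azurewebsites.net/api/BurgerTax' -Method Post -Body $body -ContentType 'application/json'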
After running this, I flipped back to the other terminal window, where I could see the output – the function used Write-Host to confirm that it had triggered and to output the name of the merchant (Next, in this case).
You may have heard of the Sugar Tax… introducing the Burger Tax!
I’ve created a solution using the Starling Bank developer API and an Azure Function that “taxes” me whenever I buy junk food (specifically McDonalds!), moving 20% of the transaction into a Savings Space within my Starling Bank account.
I’ve put together a video that walks through the solution end-to-end.
The code for the Azure Function is available here.
I’m currently in the process of writing an Azure Function that I’ll be using with a Starling Bank webhook to “tax” myself every time I purchase junk food… more on that in a future post though!
I love automating things, and that, coupled with getting bored of using the Azure Portal, led me to take a closer look at the Azure CLI to automate the creation and configuration of the Function App.
The Azure CLI can be installed on Windows, macOS and Linux. I installed it on Ubuntu, which runs on my Windows 11 device using Windows Subsystem for Linux (WSL). I wanted to experience it on a non-Windows platform, which is why I used the convoluted approach of running it on Linux on Windows. Installation was straightforward and required a single command:
curl -L https://aka.ms/InstallAzureCli | bash
My aim was to use the Azure CLI to create an Azure Function App running on Windows with the PowerShell runtime, based in the UK South region. I also wanted to add a key/value pair to the Application Settings, which I will use to store my Personal Access Token (PAT) from Starling. The PAT will be used to connect to my bank account and “tax” fast food purchases! I could/should have used Azure Key Vault for this but didn’t want to introduce extra complexity into a hobby project.
After logging into my Azure subscription using az login from Ubuntu, I ran the following to declare the appname and region variables. I’m lazy and use appname for the Resource Group, Storage Account and Function App names. I used the Bash variable $RANDOM to append a random number to the app name, which was useful during testing so I didn’t have to update the app name manually after each run of the script (and there were many runs as I got to grips with the syntax of the Azure CLI!).
appname=burgertax$RANDOM
region=uksouth
I then created a Resource Group to house the Function App, located in the UK South region and named burgertax (appended with a random number).
az group create --name $appname --location $region
Once the Resource Group had been created, I created a Storage Account which is used to host the content of the Function App.
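That was a command along these lines (the SKU here is an assumption; the appname and region variables are reused from above):
az storage account create --name $appname --resource-group $appname --location $region --sku Standard_LRS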