• Creating a license plate detector using Azure Cognitive Services and a Raspberry Pi 🚘

    During the recent internal Microsoft Hackathon, I was part of a team that developed a prototype solution to manage EV charging stations within an office, to enable employees to book a timeslot for charging their vehicle and be assigned an available charging station at the selected time.

    With the rise of EVs, it’s likely that managing EV charging within an office will soon become a problem (if it isn’t already!), so this was the perfect challenge for us to tackle!

    My contribution to this solution was license plate detection. We needed to be sure that employees pulling into an EV charging bay had a valid booking, so I needed to create something that would detect a car in the charging bay, read its license plate and then pass this to the back-end to confirm that the vehicle had a valid booking. The plan was then to enable the EV charger if the booking was confirmed (we still need to build that part!).

    I put together a prototype solution using a Raspberry Pi Model 3B+, Camera Module, a PIR sensor, Azure Cognitive Services (Computer Vision) and Python.

    For my “state of the art” prototype I also created some EV bays from a piece of paper and borrowed one of my son’s toy cars (to which I stuck a homemade license plate).

    The solution does the following:

    1. Uses the PIR to detect a car entering the charging bay
    2. Uses the Raspberry Pi Camera to take a photo of the license plate
    3. Submits the photo to Azure Cognitive Services Computer Vision to detect the text on the license plate
    4. Returns the detected text

    In the full solution, the detected text is then passed to a back end to confirm that the booking is valid; however, this is out of scope for this post (although I may cover it in a future post).

    Here’s a short video of it in action (ignore the mention of the Logic App, this is what we are using to connect to the back-end to validate the booking):

    Here’s the Python script that I created to do this, which can also be found on GitHub

    import requests
    import json
    import time
    from io import BytesIO
    from picamera import PiCamera
    from gpiozero import MotionSensor
    
    pir = MotionSensor(4)
              
    def take_image():
        print("Taking photo of reg plate...")
        camera = PiCamera()
        camera.rotation = 180 # depending on how the camera is placed, this line may need to be removed
        camera.start_preview()
        time.sleep(3)
        camera.capture("regplate.jpg")
        camera.stop_preview()
        camera.close() 
        print("Photo taken successfully!")
    
    def analyze_image(image):
        print("Analyzing photo...")
        url = "https://RESOURCENAME.cognitiveservices.azure.com/vision/v3.0/read/analyze" # Endpoint URL for Azure Cognitive Services
        key = "KEY" # Key for Azure Cognitive Services
    
        with open(image, "rb") as image_file:
            image_data = image_file.read()
    
        headers = {"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/octet-stream"}
        r = requests.post(url, headers=headers, data=image_data)
    
        # The Read API is asynchronous, so poll the Operation-Location URL until the analysis completes
        operation_url = r.headers["Operation-Location"]
    
        analysis = {}
        poll = True
        while poll:
            response_final = requests.get(operation_url, headers=headers)
            analysis = response_final.json()
    
            time.sleep(1)
            if "analyzeResult" in analysis:
                poll = False
            if "status" in analysis and analysis["status"] == "failed":
                poll = False
    
        if "analyzeResult" not in analysis:
            print("-Analysis failed")
            return None
    
        lines = []
        for line in analysis["analyzeResult"]["readResults"][0]["lines"]:
            lines.append(line["text"])
    
        plate = lines[0].replace(" ", "") # Report the first string detected in the analysis - this may need to be tweaked
        print("-Reg plate analyzed as " + plate)
        return plate
    
    while True:
        print("Waiting for car...")
        pir.wait_for_motion()
        print("Car detected!")
        time.sleep(2)
        take_image()
        reg = analyze_image("regplate.jpg")
        pir.wait_for_no_motion()
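
    The script simply takes the first detected line and strips the spaces. If the OCR output needs a little more cleaning up, a small normalisation helper along these lines could be used (a sketch of my own, not part of the original script – the exact rules will depend on your plates):

```python
def normalise_plate(lines):
    """Return the first non-empty OCR line, upper-cased with spaces removed."""
    for line in lines:
        cleaned = line.replace(" ", "").upper()
        if cleaned:
            return cleaned
    return None

# OCR often returns the plate with spaces and mixed case
print(normalise_plate(["ab12 cde"]))  # AB12CDE
```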
    

    Some points to note:

    • I used the legacy Buster version of Raspberry Pi OS as I had some issues with the camera when running Bullseye. If you’d like to use this script with Bullseye, you’ll need to either enable the legacy camera stack OR update the take_image() function to use libcamera-jpeg.
    • The PIR was attached to GPIO4 (pin 7), VCC connected to pin 2 and GND to pin 6 – a handy reference can be found here.
    • You will need to update the url (replace RESOURCENAME with the name of your resource) and key (with your key) within the analyze_image function with your values from Azure Cognitive Services. If you’ve never used it before, here is a guide on how to create a resource.
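
    Rather than hard-coding the key in the script, you could pull the endpoint and key from environment variables – a small sketch (the variable names VISION_ENDPOINT and VISION_KEY are my own choice):

```python
import os

def get_vision_config():
    # Read the endpoint and key from environment variables rather than hard-coding them
    # (VISION_ENDPOINT and VISION_KEY are names of my own choosing)
    endpoint = os.environ.get("VISION_ENDPOINT")
    key = os.environ.get("VISION_KEY")
    if not endpoint or not key:
        raise RuntimeError("Set VISION_ENDPOINT and VISION_KEY before running")
    return endpoint + "/vision/v3.0/read/analyze", key
```

    The url and key lines in analyze_image would then become a single call to this helper.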
  • Adding external content to Microsoft Viva Learning 📚

    Microsoft recently released (in beta form) the ability to add external content to Viva Learning using the Microsoft Graph API. I was really excited to see this, as I know that a lot of customers have been asking for this capability. In this post I’m going to walk through adding content to Viva Learning from my YouTube channel (the process I’m going to demonstrate could be adapted to pull in content from any source).

    For those of you who aren’t familiar with Viva Learning, here’s the sales pitch:

    Viva Learning is a centralized learning hub in Microsoft Teams that lets you seamlessly integrate learning and building skills into your day. In Viva Learning, your team can discover, share, recommend, and learn from content libraries provided by both your organization and partners. They can do all of this without leaving Microsoft Teams.

    Taken from: https://learn.microsoft.com/en-us/viva/learning/overview-viva-learning

    When Viva Learning launched, it included the ability to integrate with several Learning Management Systems (LMSs) and content providers out of the box, including Cornerstone OnDemand, SAP SuccessFactors and Skillsoft. For customers using an LMS/content provider that wasn’t supported OOTB, there wasn’t a way to integrate – until now!

    An employee learning API has been made available through the Microsoft Graph API (Beta endpoint), the documentation for this can be found here. Here’s the standard disclaimer for using the Beta endpoint (you have been warned 😀).

    In this walkthrough I will be using the Microsoft Graph PowerShell SDK to create a learning provider (which you can think of as a source) and learning content (the content surfaced from within this source).

    Step 1 – Register an App in Azure AD

    I followed the steps in the Register the app in the portal tutorial to register an app in Azure AD named Graph-VivaLearning, this app is required for the PowerShell script to authenticate to Azure AD and obtain the necessary permissions.

    Creating an app isn’t strictly necessary, as the Microsoft Graph PowerShell SDK can create one automatically; however, using a separate app provides greater control over permission scopes and avoids permission creep!

    I didn’t grant any permissions to this app as I will be using dynamic consent, which is easier for a demo like this.

    Step 2 – Connecting to the Microsoft Graph

    As I’m using the Microsoft Graph PowerShell SDK, the first thing I needed to do was install this using the following command (I’m using PowerShell Core):

    Install-Module Microsoft.Graph -Scope CurrentUser
    

    Once this has been installed, I could then connect to the Microsoft Graph:

    # Authenticate to the Microsoft Graph
    $ClientId = "Client ID" # This is obtained from the app registration (screenshot above)
    $AuthTenant = "Directory ID" # This is obtained from the app registration (screenshot above)
    $GraphScopes = "LearningProvider.ReadWrite","LearningContent.ReadWrite.All"
    Connect-MgGraph -ClientId $ClientId -TenantId $AuthTenant -Scopes $GraphScopes -UseDeviceAuthentication
    

    The script above specifies the Client ID and Tenant ID (which are obtained from the app registration – screenshot above). I also specify the scopes (the permissions that I require) in $GraphScopes. For full access to read/write Learning Providers and Learning Content, as per Create learningProvider and Update learningContent, I requested the LearningProvider.ReadWrite and LearningContent.ReadWrite.All permission scopes.

    The script then uses Connect-MgGraph to connect to the Microsoft Graph using $ClientId and $AuthTenant, requesting the scopes in $GraphScopes. It uses the device code flow for authentication, which means you need to fire up a browser and enter the code requested to authenticate:

    I then needed to accept the scopes (permissions) that I had requested (as a tenant admin I was able to do this directly):

    Once this had successfully completed, I saw the following:

    To verify I was authenticated, I ran Get-MgContext, which returned:
    I could see the scopes that I had requested and some other useful information.

    Step 3 – Adding a Learning Provider

    This is where the fun begins! I can now add my Learning Provider. For this walkthrough I’m going to add a provider called Brendan’s Tech Ramblings (the name of this blog and my YouTube channel).

    To do this I ran the following:

    # Add a Learning Provider
    $params = @{
            "displayName" = "Brendan's Tech Ramblings"
            "squareLogoWebUrlForDarkTheme" = "https://brendg.co.uk/wp-content/uploads/2021/05/cropped-myavatar.png"
            "longLogoWebUrlForDarkTheme" = "https://brendg.co.uk/wp-content/uploads/2021/05/cropped-myavatar.png"
            "squareLogoWebUrlForLightTheme" = "https://brendg.co.uk/wp-content/uploads/2021/05/cropped-myavatar.png"
            "longLogoWebUrlForLightTheme" = "https://brendg.co.uk/wp-content/uploads/2021/05/cropped-myavatar.png"
            "isEnabled" = $true
            "loginWebUrl" = ""
    }
    
    $uri = "https://graph.microsoft.com/beta/employeeExperience/learningProviders"
    Invoke-MgGraphRequest -Method POST -uri $uri -Body $params 
    

    This specifies the settings ($params) for the Learning Provider. I’ve included a display name and logos (which need to be publicly accessible) and set it to enabled. I haven’t specified a login URL, as login isn’t required to access the content I will be adding (it’s all hosted on YouTube).

    I then use Invoke-MgGraphRequest (as there isn’t a native Cmdlet for Viva Learning yet) to send a POST request to the endpoint for adding Learning Providers. Once this completes, I run the following to return all custom Learning Providers:

    $uri = "https://graph.microsoft.com/beta/employeeExperience/learningProviders"
    $lps = Invoke-MgGraphRequest -Method GET -uri $uri
    $lps.value
    

    Success – I can see my newly added Learning Provider!

    Step 4 – Adding Content

    Now that I have my Learning Provider registered with Viva Learning (Brendan’s Tech Ramblings), I need to add some actual content. For this, I decided to add a couple of YouTube videos from my channel. I used the following to do this:

    $params = @{
            "title" = "Burger Tax - using an Azure Function to Stay Healthy!"
            "description" = "Find out how I used the Starling Bank developer API and an Azure Function to tax myself whenever I buy junk food!"
            "contentWebUrl" = "https://youtu.be/z909tjuDKlY" # YouTube video URL
            "thumbnailWebUrl" = "https://brendg.co.uk/wp-content/uploads/2022/09/maxresdefault-1.jpg" # Publicly accessible URL of the content thumbnail
            "languageTag" = "en-us"
            "numberOfPages" = "1"
            "format" = "Video"
            "createdDateTime" = "2022-07-16"
    }
    

    Firstly, I specify the details of the content that I’d like to add ($params). There are additional values that can be included; I’ve used the bare minimum here.

    The one thing that was a little tricky was generating the thumbnail (thumbnailWebUrl) – I ended up downloading this from YouTube and uploading it to my blog. As you’ll see later, it looks so much nicer with a thumbnail included.

    I then needed to obtain the ID of the Learning Provider that I registered and store this within $lpid (this ID is required when we make the request to add content).

    $uri = "https://graph.microsoft.com/beta/employeeExperience/learningProviders"
    $lps = Invoke-MgGraphRequest -Method GET -uri $uri # Retrieve the custom Learning Provider
    $lpid = $lps.value.id # Get the ID of the custom Learning Provider 
    

    Finally, I can make the request to add the content:

    $uri = "https://graph.microsoft.com/beta/employeeExperience/learningProviders/" + $lpid + "/learningContents(externalId='BurgerTax')"
    Invoke-MgGraphRequest -Method PATCH -uri $uri -Body $params 
    

    The one thing I needed to manually specify in this request is an ID for the content (externalId); I gave it the ID “BurgerTax”. This ID is used for operations such as deletes.

    Now if I browse to Viva Learning (after waiting a few hours!), I can see my Learning Provider (Brendan’s Tech Ramblings) amongst the other providers. I can also see the content that I added to this provider – including an extra video that I added, which I omitted from this walkthrough. Notice you can see my avatar (which was specified as the logo when adding the content provider) and thumbnails for each video (which were specified as thumbnailWebUrl when adding the content).

    If I select the Burger Tax video, a new page opens where the video can be watched and details of other content from my Learning Provider are presented.

    In the real world, you’d likely have a scheduled job that runs in the background adding/updating content taken from the Learning Management System (LMS) or learning provider. If I wanted to take this a step further, I could have something that adds any video I upload to YouTube directly to Viva Learning – I think that’s one for another day though!
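
    As a rough illustration of what the core of such a sync job might look like, here’s a Python sketch that builds the same PATCH request the walkthrough makes (the endpoint and fields are taken from the steps above; token acquisition is out of scope here and would come from MSAL or similar):

```python
import json
import urllib.request

GRAPH_BASE = "https://graph.microsoft.com/beta/employeeExperience/learningProviders"

def learning_content_url(provider_id, external_id):
    # Build the beta endpoint URL for upserting a piece of learning content
    return f"{GRAPH_BASE}/{provider_id}/learningContents(externalId='{external_id}')"

def upsert_learning_content(token, provider_id, external_id, params):
    # PATCH the content to Viva Learning (mirrors Invoke-MgGraphRequest -Method PATCH)
    req = urllib.request.Request(
        learning_content_url(provider_id, external_id),
        data=json.dumps(params).encode(),
        method="PATCH",
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)
```

    A scheduled job would loop over new YouTube uploads, build a $params-style dictionary for each and call upsert_learning_content.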

    The code I used in this walkthrough (and some additional goodness) can be found here.

  • Creating a Mouse Jiggler using Python 🐭

    I stumbled upon a Mouse Jiggler on Amazon and was really interested in what this device did 🤔. It turns out that it’s all in the name – it literally jiggles the mouse around randomly to prevent a computer from going into sleep, and also keeps you “active” in apps such as Teams (how naughty!).

    Keeping a computer awake is one of the useful features included within the Awake utility in Microsoft PowerToys – if you run Windows, this is an essential app and I highly recommend installing it….anyway, back to the mouse jiggler! I thought, what’s the point in buying a device to do this? You could in theory replicate what it does in software using Python and the PyAutoGUI library, which lets Python scripts control the mouse and keyboard to automate interactions with other applications. I have used this previously to automate playing computer games – read more about my exploits here.

    I created the following masterpiece, which when running moves the mouse around the screen and presses the enter key every 5 seconds, replicating what a mouse jiggler device does.

    import pyautogui
    import time
    while True:
        pyautogui.click(x=100,y=100)
        pyautogui.press('enter')
        time.sleep(5)
        pyautogui.click(x=200,y=200)
        pyautogui.press('enter')
        time.sleep(5)
    

    The script can also be found on GitHub.

    I really hope my manager doesn’t see this post 🤣.

  • When will my bins (garbage) be collected? 🗑️

    This is probably the strangest title that I’ve ever given a post!

    I never seem to know when my bins (garbage, for any Americans reading this) will be collected. I have three separate bins that are all on slightly different collection cycles, so rather than manually checking my local council’s website I thought I’d write a script to scrape this data, to save me a few keystrokes and valuable seconds ⏲️. In all honesty, this was just an excuse to spend some quality time with Python 🐍.

    Fortunately for me, my local council’s website requires no login, and the page that returns the collection schedule for an address is static (in that the returned URL doesn’t appear to change).

    If I head over to https://www.hull.gov.uk/bins-and-recycling/bin-collections/bin-collection-day-checker, enter my postcode and select my house number it returns the following page, with the URL – https://www.hull.gov.uk/bins-and-recycling/bin-collections/bin-collection-day-checker/checker/view/10093952819.

    What I then needed to do was figure out an approach to pull the data from the returned page so that I could output it from a Python script. It turned out that Beautiful Soup (a Python library for pulling data out of HTML and XML files) could be used to do this.

    Before I could use Beautiful Soup to do the parsing, I needed to grab the page itself, to do this I used the Requests library.

    Firstly, I needed to install these libraries, by running “pip install requests” and then “pip install beautifulsoup4” from the command line.

    I then used the requests library to request the page and create a response object (“r“) that would hold the contents of the page.

    import requests
    r = requests.get("https://www.hull.gov.uk/bins-and-recycling/bin-collections/bin-collection-day-checker/checker/view/10093952819")
    

    Once I had the page, I could then use Beautiful Soup to analyze it. To do this, I began by importing the module and then creating a new Beautiful Soup object from the “r” object created by Requests (specifically r.text), which contained the raw HTML output of the page.

    import bs4
    soup = bs4.BeautifulSoup(r.text, features="html.parser")
    

    I then created a variable to hold the collection date extracted from the page (dates), which I will print to the screen at the end of the script.

    I also imported the os library (which I use for extracting the collection date from the data returned).

    import os
    dates = ""
    

    This is where the fun now began! I opened a web browser, navigated to the page and viewed the source (CTRL + U in Chrome/Edge on Windows), as I needed to figure out exactly where the data I needed resided within the page. After much scrolling, I found it!

    I could see that the data for the black bin was contained within the class “region region-content“. I used the following to extract the data I needed from this (the date of collection).

    # Use Beautiful Soup to find the class that the data is contained within "region region-content"
    blackbin = soup.find(class_="region region-content") 
    
    # Find the parent div for this class (which I need to find the div containing the black bin data)
    div = blackbin.find_parent('div') 
    
    # Find all the span tags within this div, the data is contained within a span tag
    span = div.find_all('span') 
    
    # The black bin date is within the second span tag, so retrieve the data from this (index 1) and split using the ">" delimiter
    spantext = str(span[1]).split(">") 
    
    # Split the span tag for index 1 using "<" as a delimiter to easily remove the other text we don't need, it's messy but it works!
    date = spantext[1].split("<") 
    
    # Retrieve index 0 which is the full date
    blackbindate = date[0] 
    
    # Add the data returned "blackbindate" to the "dates" variable, prefixing this with the colour of the bin
    dates += "Black Bin " + "- " + blackbindate + "," + "\n"
    

    For the blue bin, I took a slightly different approach. I searched for the style attribute that this was using, which was “color:blue;font-weight:800”.

    # Find all tags using the style color:blue;font-weight:800
    blue = soup.find_all(style="color:blue;font-weight:800")
    
    # Select the second tag returned, this one contains the actual date and then split using the ">" delimiter
    bluebin = str(blue[1]).split(">")
    
    # Split the data returned further using the delimiter "<", to easily remove the other text we don't need, it's messy but it works!
    bluebincollection = bluebin[1].split("<")
    
    # Return index 0 which is the full date
    bluebindate = bluebincollection[0]
    
    # Add the returned date "bluebindate" to the dates variable, prefixing this with the colour of the bin
    dates += "Blue Bin " + "- " + bluebindate + "," + "\n"
    

    Lastly, for my brown bin I used a slight variation of the approach I used for the blue bin, except this time I searched for the style attribute “color:#654321;font-weight:800”.

    brown = soup.find_all(style="color:#654321;font-weight:800")
    brownbin = str(brown[1]).split(">")
    brownbincollection = brownbin[1].split("<")
    brownbindate = brownbincollection[0]
    dates += "Brown Bin " + "- " + brownbindate + "," + "\n"
    

    Finally, I printed the dates variable, which contained the dates extracted from the web page.

    print(dates)
    

    Here is the output in all its glory!

    The script I wrote can be found on GitHub.
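
    The split(">")/split("<") gymnastics above work, but a small helper makes the intent clearer. Here’s a sketch using a stdlib regex to pull the text out of a tag’s HTML (my own addition – Beautiful Soup’s get_text() would do the same job if you’re happy staying within bs4):

```python
import re

def tag_text(tag_html):
    """Extract the text between the first '>' and the next '<' in a tag's HTML."""
    match = re.search(r">([^<]*)<", tag_html)
    return match.group(1) if match else ""

# Example with a span like the ones the council page returns (date is made up)
print(tag_text('<span style="color:blue;font-weight:800">Friday 3 March</span>'))  # Friday 3 March
```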

  • Configuring RetroPie Samba shares to require authentication 🔒

    As a HUGE retro gaming fan 🕹️, I absolutely adore RetroPie which turns my Raspberry Pi 4 into an emulation powerhouse 👾! Here’s some blurb from their official site that explains more:

    RetroPie allows you to turn your Raspberry Pi, ODroid C1/C2, or PC into a retro-gaming machine. It builds upon Raspbian, EmulationStation, RetroArch and many other projects to enable you to play your favourite Arcade, home-console, and classic PC games with the minimum set-up. For power users it also provides a large variety of configuration tools to customise the system as you want.

    RetroPie sits on top of a full OS, you can install it on an existing Raspbian, or start with the RetroPie image and add additional software later. It’s up to you.

    One of the methods to copy data to RetroPie (for example ROMs and BIOS files) is to connect using SMB, RetroPie comes pre-configured with Samba which is a Linux re-implementation of SMB.

    On Windows, it’s as simple as opening \\RETROPIE or \\IP Address of RetroPie to connect to RetroPie and copy files across.

    One issue and slight concern I have is that in its default configuration, the shares created by RetroPie are available without authentication. I first realised this when I saw the following error on my Windows PC when trying to connect to my RetroPie:

    “You can’t access this shared folder because your organization’s security policies block unauthenticated guest access”

    My company blocks devices from connecting to shares that don’t require authentication (which is a good thing!). Therefore, to allow my Windows PC to connect to the shares created by RetroPie, I needed to re-configure Samba on the RetroPie to require authentication. I did this using the following steps:

    1. SSH’d into my RetroPie, using the command ssh pi@192.168.1.206 (the IP address of my RetroPie)
    2. Took a backup of the Samba configuration (in case it all went horribly wrong!) – sudo cp /etc/samba/smb.conf /etc/samba/smb.conf-retropie
    3. Edited smb.conf using the Nano text editor – sudo nano /etc/samba/smb.conf – and made the following changes:

    • Changed map to guest from bad user to never
    • Changed guest ok from yes to no for each of the four shares created by RetroPie (roms, bios, configs and splashscreens)

    4. Saved the file by pressing CTRL + X, then selecting Y (to confirm changes) and pressing Enter to confirm the filename (which defaults to its current name)
    5. Ran sudo smbpasswd -a pi to create a password for the pi user account, which I will be using to connect to the share
    6. Restarted Samba using the command: sudo service smbd restart
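
    After these changes, the relevant parts of smb.conf looked roughly like this (an illustrative excerpt only – RetroPie’s generated share definitions contain more settings than shown here):

```ini
[global]
   # Never map unknown users to the guest account
   map to guest = never

[roms]
   comment = roms
   path = "/home/pi/RetroPie/roms"
   # Require authentication to access the share
   guest ok = no
```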

    I then attempted to connect to the RetroPie using its IP address (192.168.1.206)

    …and was presented with the following, where I selected Use a different account

    I then entered the credentials for the pi account (using the password I assigned in step 5 above) and hit OK.

    Success! I now have access to the RetroPie’s shares 😀.

    Now for the fun of copying 50GB of data to the RetroPie over WiFi 🤦‍♂️.

  • Getting my computer to play Super Mario Land for me!

    I’ve previously spoken about my love of retro gaming, in particular the Nintendo Gameboy. For a long time, I’ve wanted to try and automate playing a game using PyAutoGUI 🎮.

    Firstly……..what is PyAutoGUI?

    PyAutoGUI lets your Python scripts control the mouse and keyboard to automate interactions with other applications. The API is designed to be simple. PyAutoGUI works on Windows, macOS, and Linux, and runs on Python 2 and 3.

    Taken from https://pyautogui.readthedocs.io/en/latest

    When I was learning Python, the book Automate the Boring Stuff with Python was an invaluable resource; it devotes a whole chapter to PyAutoGUI (which the author created). I’ve previously automated time tracking for work and some other equally exciting tasks……now it was time to take this to the next level and attempt to use it to play a game 🕹️.

    Super Mario Land is one of my all-time favourite games and I’ve spent hours over the years playing this game. My aim was to attempt to write a Python script that uses PyAutoGUI to complete World 1-1 without losing a life. My plan was to run the game using an emulator on my PC and use PyAutoGUI to send key presses to the emulator to replicate me playing the game.

    Rather than building some fancy Artificial Intelligence solution such as this one, which was used to teach a computer how to play Atari 2600 games, I opted for the human touch……I would manually specify the keypresses, based on the countless hours that I’ve *invested* in this game!

    I used the Visual Boy Advance emulator. I have about 7 copies of Super Mario Land that I’ve acquired over the years 😆, so had no guilt in using it with a ROM I had acquired 🕵️.

    I configured Visual Boy Advance to use the keyboard for input, with the following configuration:

    I then spent far too much time using a trial-and-error approach to completing World 1-1. Below is a snippet of the Python script I created, to give you an idea – time.sleep() was my friend!
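
    In essence, each move boils down to holding a key for a while and then releasing it. Here’s an illustrative sketch of the approach (the keys, coordinates and timings below are placeholders of my own, not the actual playthrough):

```python
import time

# Each step is (key to hold, seconds to hold it). The values here are
# illustrative - the real script was a long hand-tuned sequence of these.
WORLD_1_1 = [
    ("right", 2.5),  # run towards the first Goomba
    ("z", 0.4),      # jump ('z' as the jump button is an assumption - check your emulator config)
    ("right", 1.0),  # keep running
]

def replay(steps):
    # Imported here so the step data above can be inspected without a display attached
    import pyautogui
    pyautogui.click(x=500, y=400)  # focus the Visual Boy Advance window first
    for key, seconds in steps:
        pyautogui.keyDown(key)
        time.sleep(seconds)
        pyautogui.keyUp(key)
```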

    Below is a video of my automated playthrough in action.

    Here is the final Python script (in all its un-commented glory).

    If you plan to use this, the only thing you’ll likely need to change is the values for pyautogui.click(); this selects the correct window running Visual Boy Advance using screen coordinates (it’s all covered in the PyAutoGUI documentation here).

  • Keeping a Pi-hole Docker container up to date

    I previously shared my experience of setting up Pi-hole within a Docker container running on a Raspberry Pi. One thing I didn’t think about was managing updates to the container image.

    In the two months that I’ve had Pi-hole up and running, the Docker image has been updated twice. I put together the following script that automates the process of deleting the container and image, and then rebuilding using the latest available image, which I run every time a new image is released 🤖.

    Configuration and logs are preserved, as Pi-hole stores these on the host system rather than within the container itself, so there’s no need to worry about losing them between updates.

    Just make sure you have a secondary DNS server set up within your network, otherwise DNS resolution may fail while the Pi-hole container is stopped.

    # Stop and remove the existing Pi-hole container
    docker container stop pihole
    docker container rm pihole
    # Delete the old image so the latest one is pulled on the next build
    docker rmi pihole/pihole
    # Recreate and start the container from the latest image
    docker-compose up -d
    
  • Installing and Updating PowerShell Core on Windows using winget

    This is more of a note for my future self than anything that is earth shattering!

    Windows Terminal (which I 💗), recently notified me that I needed to update PowerShell Core. I could have clicked the link and downloaded and installed the updated MSI, however I’m lazy and wanted a quicker way to do this 🏃‍♂️.

    It turns out that PowerShell Core can easily be installed and updated on Windows using winget – what is winget you may ask?!?

    The winget command line tool enables you to discover, install, upgrade, remove and configure applications on Windows 10 and Windows 11 computers. This tool is the client interface to the Windows Package Manager service.

    Installing PowerShell Core using winget

    winget install --id Microsoft.Powershell --source winget
    

    Upgrading PowerShell Core using winget

    winget upgrade --id Microsoft.Powershell --source winget
    

    I force the source to winget rather than msstore as there are some limitations with the version of PowerShell Core available from the Microsoft Store (msstore), which are documented here (excerpt from the documentation below).

  • Dockerizing a PowerShell Script

    As I mentioned in my previous post, I’m currently in the process of consolidating the array of Raspberry Pis I have running around my house by migrating the various workloads running on them to Docker containers running on a single Raspberry Pi 4 that I have.

    After my exploits migrating Pi-hole to Docker (which was far simpler than I anticipated!), next up was migrating a PowerShell script that I run every 5 minutes, which checks the speed of my Internet connection using the Speedtest CLI (which is written in Python) and writes the results to a CSV file.

    Why do I do this? Check out “The Joys of Unreliable Internet” which explains more!

    To Dockerize the script, I needed to find a container image that runs PowerShell, supports ARM (the CPU architecture the Pi uses) and that I could install Python on – it seemed easier doing this than taking a Linux image running Python and installing PowerShell. Fortunately, I found this image, which was perfect for my needs; the GitHub repo for this image can be found here.

    I also needed a way to store the CSV file that the script writes its output to on the host machine (rather than in the container itself); this was to ensure it persisted and I didn’t lose any logging data. I decided to use Docker Compose to create the container, as this provides a straightforward way to expose a directory on the host machine directly to the container.

    Here is my end solution in all its glory!

    First is the Dockerfile, which pulls this image, installs Python, creates a new directory “/speedtest”, copies the SpeedTest.ps1 PowerShell script (which you can find here) into this directory and sets it to run on container startup.

    You may be wondering why I’m changing the shell (using SHELL). I needed to switch away from PowerShell so that I could install Python, then flip back to PowerShell to run the script. I also needed to run “update-ca-certificates --fresh” as I was experiencing some certificate errors that were causing the SpeedTest.ps1 script to fail.

    Dockerfile

    FROM clowa/powershell-core:latest
    
    SHELL ["/bin/sh", "-c"]
    RUN apt-get update -y
    RUN apt-get install -y python3
    RUN apt-get install -y python3-pip
    RUN pip3 install speedtest-cli
    RUN update-ca-certificates --fresh
    
    SHELL ["pwsh", "-command"]
    RUN mkdir speedtest
    COPY ./SpeedTest.ps1 /speedtest/SpeedTest.ps1
    WORKDIR /speedtest
    ENTRYPOINT ["pwsh"]
    CMD ["SpeedTest.ps1"]
    

    To map a directory on the host machine to the container, I used Docker Compose (rather than using a Dockerfile as this approach was simpler). Below is the docker-compose.yml file that I created.

    This names the container runner and maps “/home/pi/speedtest/logs” on the host machine to “/etc/speedtest/logs” within the container. It also configures the container to restart should the SpeedTest.ps1 script exit, using the “restart: unless-stopped” restart policy.

    docker-compose.yml

    services:
      runner:
        build: ./
        volumes:
          - /home/pi/speedtest/logs:/etc/speedtest/logs
        restart: unless-stopped
    

    Finally, here is the SpeedTest.ps1 script, which executes the speedtest-cli Python script and writes the output to “/etc/speedtest/logs/SpeedTest.csv” within the container, which is mapped to “/home/pi/speedtest/logs/SpeedTest.csv” on the host machine.

    SpeedTest.ps1

    # Loop forever: run a speed test every 5 minutes and append the results to the CSV log
    while ($true)
    {
        $Time = Get-Date
        # speedtest-cli --simple outputs three lines: Ping, Download and Upload
        $SpeedTest = speedtest-cli --simple
        # Build a row of timestamp,ping,download,upload and append it to the log
        $Time.ToString() + "," + $SpeedTest[0].split(" ")[1] + "," + $SpeedTest[1].split(" ")[1] + "," + $SpeedTest[2].split(" ")[1] >> "/etc/speedtest/logs/SpeedTest.csv"
        Start-Sleep -Seconds 300
    }
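
    To see why the script indexes into $SpeedTest like that: speedtest-cli --simple prints three lines (Ping, Download, Upload), each in the form “Label: value unit”, so splitting each line on spaces and taking element [1] grabs just the number. Here’s a quick sketch of the same extraction in shell, run against made-up sample output:

    ```shell
    # Hypothetical output of `speedtest-cli --simple` (the numbers are made up):
    sample='Ping: 12.34 ms
    Download: 95.67 Mbit/s
    Upload: 10.21 Mbit/s'

    # Take the second field of each line and join with commas,
    # mirroring the .split(" ")[1] indexing in SpeedTest.ps1
    row=$(echo "$sample" | awk '{printf "%s%s", sep, $2; sep=","}')
    echo "$row"   # 12.34,95.67,10.21
    ```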
    

    To get this container up and running I created a directory on the host machine “/home/pi/speedtest” and placed the three files within this directory:

    • SpeedTest.ps1
    • Dockerfile
    • docker-compose.yml

    I then executed “docker-compose up -d” from within the “/home/pi/speedtest” directory to build, create and start the container; the -d flag runs the container in detached (background) mode rather than interactively.

    I then waited a while and checked the SpeedTest.csv log file within “/home/pi/speedtest/logs” to confirm that the script was running!
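
    Each row in the CSV is timestamp,ping,download,upload, which makes the log easy to consume later. For example, pulling the download speed out of a row (the row below is a made-up sample):

    ```shell
    # A sample row in the format SpeedTest.ps1 writes (values are made up)
    line='08/15/2022 10:00:00,12.34,95.67,10.21'

    # Field 3 is the download speed in Mbit/s
    download=$(echo "$line" | cut -d',' -f3)
    echo "$download"   # 95.67
    ```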

    Result… now on to my next Dockerization project!

    The three files used to create this container can be found on GitHub here.

  • Adventures in running Pi-hole within a Docker container on a Raspberry Pi

    Pi-hole is a DNS sinkhole that protects devices from unwanted content, without installing any client-side software. I’ve run Pi-hole on a Raspberry Pi Zero for the last year or so and have found it easy to use (it’s literally set and forget) and super effective. I have a proliferation of Pis around my house and wanted to consolidate them by migrating the various workloads to Docker containers running on a single Raspberry Pi 4 that I have.

    I decided to start with migrating Pi-hole from my Pi Zero to a Docker container running on my Pi 4, I chose to do this first as there is a pre-built Pi-hole image for Docker and fantastic documentation.

    The first step was to build the Pi 4. I used the Raspberry Pi Imager tool to prepare an SD card with Raspberry Pi OS (formerly known as Raspbian). As I’m running this headless, I used the advanced options within the tool to configure the hostname of the device, enable SSH, set the locale and configure a password – it saved the hassle of plugging in a keyboard and monitor and doing this manually post-install.

    Once my Pi 4 was running, I connected over SSH (which you can do in Windows by running ssh pi@hostname) and enabled VNC via raspi-config, which also gives me GUI access to the Pi.

    I then needed to install Docker and Docker Compose; I previously posted about how to do this here. Here are the commands I ran on the Pi:

    sudo apt-get update && sudo apt-get upgrade
    curl -sSL https://get.docker.com | sh
    sudo usermod -aG docker ${USER}
    sudo pip3 install docker-compose
    sudo systemctl enable docker
    

    Once this had completed, which took less than 5 minutes, I rebooted the Pi (sudo reboot from a terminal).
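
    The usermod -aG docker line above is what lets the pi user talk to the Docker daemon without sudo, and group changes only take effect after logging back in (or rebooting, as I did). A quick, illustrative way to check that the membership has taken effect:

    ```shell
    # Check whether the current user is in the docker group
    # (usermod -aG changes only apply after logging back in or rebooting)
    if id -nG "$(id -un)" | grep -qw docker; then
      echo "in docker group"
    else
      echo "not in docker group yet"
    fi
    ```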

    Now that I had the Pi up and running along with Docker, I could create the Pi-hole container. To do this, I took the example Docker Compose YAML file and edited it to meet my requirements, saving it as docker-compose.yml:

    • Run in host mode – by specifying network_mode: “host”. This setting is described here, it means that the container will share an IP address with the host machine (in my case the Pi 4). I used this to keep things simple, I may regret this decision at a later date 🤦‍♂️.
    • Configure the hostname – using container_name. I actually kept this as the default setting of pihole.
    • Set the timezone – setting this to Europe/London, using this article to determine the correct value 🌍.
    • Specify a password for the web admin interface – this is configured with WEBPASSWORD; I used a password slightly more complex than “password” 🔒.

    A copy of the docker-compose.yml file I created can also be found here.

    version: "3"
    
    # More info at https://github.com/pi-hole/docker-pi-hole/ and https://docs.pi-hole.net/
    services:
      pihole:
        container_name: pihole
        image: pihole/pihole:latest
        # For DHCP it is recommended to remove these ports and instead add: network_mode: "host"
        network_mode: "host"
        environment:
          TZ: 'Europe/London'
          WEBPASSWORD: 'password'
        # Volumes store your data between container upgrades
        volumes:
          - './etc-pihole:/etc/pihole'
          - './etc-dnsmasq.d:/etc/dnsmasq.d'    
        restart: unless-stopped
    
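
    One optional tweak, which I haven’t applied here: pinning the image to a specific Pi-hole release rather than the floating latest tag, so that recreating the container doesn’t silently pull a new version. The tag below is illustrative only – check the pihole/pihole tags on Docker Hub for a current one:

    ```yaml
      pihole:
        container_name: pihole
        # Pin to a known release instead of the floating "latest" tag
        # (the tag below is illustrative - check Docker Hub for current tags)
        image: pihole/pihole:2022.05
    ```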

    I then created a directory within /home/pi named “pihole”, copied the docker-compose.yml file into it and then ran the following command from within this directory to build and run the container:

    docker-compose up -d
    

    Within a few minutes I had a shiny new Pi-hole container up and running!

    Next step was to update the DHCP settings on my router so that it hands out Pi-hole as the default DNS server to devices on my network. I did this by specifying the IP address of the Pi 4 as the preferred DNS server for DHCP clients; I obtained the IP address of the Pi by running ifconfig from a terminal (I know, I should really be using a static IP address on the Pi 😉). I won’t cover how I updated my router, due to the multitude of different routers out there.

    I then ran ipconfig /release and ipconfig /renew on my Windows machine to refresh its DNS settings; my other devices will pick up the new settings when they renew their DHCP lease, which happens daily.

    I then browsed to the web interface using http://hostname/admin – in my case http://pi4.local/admin – hit the login button and authenticated using the password I’d specified in the docker-compose.yml file.

    The Pi-hole container had been running for around an hour, with minimal web activity (as I was writing this post), when I took this screenshot – the number of queries it had already blocked is staggering 😲.