Configuring RetroPie Samba shares to require authentication 🔒

As a HUGE retro gaming fan 🕹️, I absolutely adore RetroPie, which turns my Raspberry Pi 4 into an emulation powerhouse 👾! Here’s some blurb from their official site that explains more:

RetroPie allows you to turn your Raspberry Pi, ODroid C1/C2, or PC into a retro-gaming machine. It builds upon Raspbian, EmulationStation, RetroArch and many other projects to enable you to play your favourite Arcade, home-console, and classic PC games with the minimum set-up. For power users it also provides a large variety of configuration tools to customise the system as you want.

RetroPie sits on top of a full OS, you can install it on an existing Raspbian, or start with the RetroPie image and add additional software later. It’s up to you.

One of the ways to copy data to RetroPie (for example, ROMs and BIOS files) is to connect using SMB. RetroPie comes pre-configured with Samba, a free re-implementation of the SMB protocol for Linux.

On Windows, it’s as simple as opening \\RETROPIE (or \\<IP address of RetroPie>) to connect to RetroPie and copy files across.

One issue and slight concern I have is that in its default configuration, the shares created by RetroPie are available without authentication. I first realised this when I saw the following error on my Windows PC when trying to connect to my RetroPie:

“You can’t access this shared folder because your organization’s security policies block unauthenticated guest access”

My company blocks devices from connecting to shares that don’t require authentication (which is good!). Therefore, to allow my Windows PC to connect to the shares created by RetroPie, I needed to re-configure Samba on the RetroPie to require authentication. I did this using the following steps:

  1. SSH’d into my RetroPie using the command ssh pi@192.168.1.206 (the IP address of my RetroPie)
  2. Took a backup of the Samba configuration (in case it all went horribly wrong!) – sudo cp /etc/samba/smb.conf /etc/samba/smb.conf-retropie
  3. Edited smb.conf using the Nano text editor – sudo nano /etc/samba/smb.conf – and made the following changes:

  • Changed map to guest from bad user to never
  • Changed guest ok from yes to no for each of the four shares created by RetroPie (roms, bios, configs and splashscreens)

  4. Saved the file by pressing CTRL + X, then selecting Y (to confirm the changes) and pressing Enter to confirm the filename (which defaults to its current name)
  5. Ran sudo smbpasswd -a pi to create a password for the pi user account, which I will be using to connect to the shares
  6. Restarted Samba using the command: sudo service smbd restart
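For reference, after the edits the relevant parts of smb.conf look something like this (only the changed settings are shown; the roms share is one of the four, and the exact contents of your file may differ):

```ini
[global]
   # Reject unknown users outright rather than mapping them to the guest account
   map to guest = never

[roms]
   # Require a valid username/password instead of allowing anonymous access
   guest ok = no
```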

I then attempted to connect to the RetroPie using its IP address (192.168.1.206)

…and was presented with the following, where I selected Use a different account

I then entered the credentials for the pi account (using the password I assigned in step 5 above) and hit OK.

Success! I now have access to the RetroPie’s shares 😀.

Now for the fun of copying 50GB of data to the RetroPie over WiFi 🤦‍♂️.

Getting my computer to play Super Mario Land for me!

I’ve previously spoken about my love of retro gaming, in particular the Nintendo Game Boy. For a long time, I’ve wanted to try to automate playing a game using PyAutoGUI 🎮.

Firstly… what is PyAutoGUI?

PyAutoGUI lets your Python scripts control the mouse and keyboard to automate interactions with other applications. The API is designed to be simple. PyAutoGUI works on Windows, macOS, and Linux, and runs on Python 2 and 3.

Taken from https://pyautogui.readthedocs.io/en/latest

When I was learning Python, the book Automate the Boring Stuff with Python was an invaluable resource; it devotes a whole chapter to PyAutoGUI (which the author created). I’ve previously automated time tracking for work and some other equally exciting tasks… now it was time to take this to the next level and attempt to use it to play a game 🕹️.

Super Mario Land is one of my all-time favourite games and I’ve spent hours over the years playing this game. My aim was to attempt to write a Python script that uses PyAutoGUI to complete World 1-1 without losing a life. My plan was to run the game using an emulator on my PC and use PyAutoGUI to send key presses to the emulator to replicate me playing the game.

Rather than building some fancy Artificial Intelligence solution such as this one, which was used to teach a computer how to play Atari 2600 games, I opted for the human touch… I would manually specify the keypresses, based on the countless hours that I’ve *invested* in this game!

I used the emulator Visual Boy Advance. I have about 7 copies of Super Mario Land that I’ve acquired over the years 😆, so I had no guilt in using it with a ROM I had acquired 🕵️.

I configured Visual Boy Advance to use the keyboard for input, with the following configuration:

I then spent far too much time using my trial-and-error approach to completing World 1-1. Below is a snippet of the Python script I created to give you an idea – time.sleep() was my friend!
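To give a flavour of the approach, here is a small sketch of how such a script can be structured. The key names and hold durations below are purely illustrative (my guesses, not the original script), and the helper that replays the sequence takes the press/release functions as parameters so it can drive pyautogui or a test double:

```python
import time

# Illustrative (key, hold_seconds) pairs – the real script is essentially a long
# list of these, tuned by trial and error. The key names assume Visual Boy
# Advance is mapped to the arrow keys and 'x' for jump (an assumption, not the
# original configuration).
SEQUENCE = [
    ("right", 1.5),  # run right towards the first enemy
    ("x", 0.2),      # quick hop over it
    ("right", 2.0),  # keep running
    ("x", 0.5),      # longer jump onto the pipe
]

def replay(sequence, key_down, key_up, sleep=time.sleep):
    """Hold each key for its duration. key_down/key_up are injected so the
    same loop can drive pyautogui.keyDown/pyautogui.keyUp."""
    for key, hold in sequence:
        key_down(key)
        sleep(hold)
        key_up(key)

# To drive the emulator for real (assumes pyautogui is installed and the
# coordinates are adjusted for where the emulator window sits on your screen):
#   import pyautogui
#   pyautogui.click(400, 300)  # focus the Visual Boy Advance window first
#   replay(SEQUENCE, pyautogui.keyDown, pyautogui.keyUp)
```

The injected functions also make the timing logic easy to test without a display attached.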

Below is a video of my automated playthrough in action.

Here is the final Python script (in all its un-commented glory).

If you plan to use this, the only thing you’ll likely need to change is the values for pyautogui.click(); this selects the correct window running Visual Boy Advance using screen coordinates (it’s all covered in the PyAutoGUI documentation here).

Keeping a Pi-hole Docker container up to date

I previously shared my experience of setting up Pi-hole within a Docker container running on a Raspberry Pi. One thing I didn’t think about was managing updates to the container image.

In the two months that I’ve had Pi-hole up and running, the Docker image has been updated twice. I put together the following script, which automates the process of deleting the container and image and then rebuilding using the latest available image; I run it every time a new image is released 🤖.

Configuration and logs are preserved because Pi-hole stores these on the host system rather than within the container itself, so there’s no need to worry about losing them between updates.

Just make sure you have a secondary DNS server set up within your network, otherwise DNS resolution may fail while the Pi-hole container is stopped.

docker container stop pihole   # stop the running container
docker container rm pihole     # remove the stopped container
docker rmi pihole/pihole       # delete the local copy of the image
docker-compose up -d           # rebuild and start, pulling the latest image

Installing and Updating PowerShell Core on Windows using winget

This is more of a note for my future self than anything that is earth shattering!

Windows Terminal (which I 💗) recently notified me that I needed to update PowerShell Core. I could have clicked the link and downloaded and installed the updated MSI; however, I’m lazy and wanted a quicker way to do this 🏃‍♂️.

It turns out that PowerShell Core can easily be installed and updated on Windows using winget – what is winget, you may ask?!?

The winget command line tool enables you to discover, install, upgrade, remove and configure applications on Windows 10 and Windows 11 computers. This tool is the client interface to the Windows Package Manager service.

Installing PowerShell Core using winget

winget install --id Microsoft.Powershell --source winget

Upgrading PowerShell Core using winget

winget upgrade --id Microsoft.Powershell --source winget

I force the source to winget rather than msstore as there are some limitations with the version of PowerShell Core available from the Microsoft Store (msstore), which are documented here (excerpt from the documentation below).

Dockerizing a PowerShell Script

As I mentioned in my previous post, I’m currently in the process of consolidating the array of Raspberry Pis I have running around my house by migrating the various workloads running on them to Docker containers running on a single Raspberry Pi 4 that I have.

After my exploits migrating Pi-hole to Docker (which was far simpler than I anticipated!), next up was migrating a PowerShell script that I run every 5 minutes, which checks the speed of my Internet connection using the Speedtest CLI (which is written in Python) and writes the results to a CSV file.

Why do I do this? Check out “The Joys of Unreliable Internet” which explains more!

To Dockerize the script, I needed to find a container image that runs PowerShell, which I could install Python on, and that supported ARM (the CPU architecture that the Pi uses) – it seemed easier doing this than taking a Linux image running Python and installing PowerShell. Fortunately, I found this image, which was perfect for my needs; the GitHub repo for this image can be found here.

I also needed a way to store the CSV file that the script writes its output to on the host machine (rather than in the container itself); this was to ensure that it persisted and I didn’t lose any logging data. I decided to use Docker Compose to create the container, as this provides a straightforward way to expose a directory on the host machine directly to the container.

Here is my end solution in all its glory!

First is the Dockerfile, which pulls this image, installs Python, creates a new directory “/speedtest”, copies the SpeedTest.ps1 PowerShell script (which you can find here) into this directory and sets it to run on container startup.

You may be wondering why I’m changing the shell (using SHELL); I needed to do this to switch away from PowerShell so that I could install Python, and I then flip back to PowerShell to run the script. I also needed to run “update-ca-certificates --fresh” as I was experiencing some certificate errors that were causing the SpeedTest.ps1 script to fail.

Dockerfile

FROM clowa/powershell-core:latest

# Switch to /bin/sh so apt-get and pip can be used to install Python
SHELL ["/bin/sh", "-c"]
RUN apt-get update -y
RUN apt-get install -y python3
RUN apt-get install -y python3-pip
RUN pip3 install speedtest-cli
# Refresh CA certificates – without this the script hit certificate errors
RUN update-ca-certificates --fresh

# Flip back to PowerShell to set up and run the script
SHELL ["pwsh", "-command"]
RUN mkdir speedtest
COPY ./SpeedTest.ps1 /speedtest/SpeedTest.ps1
WORKDIR /speedtest
ENTRYPOINT ["pwsh"]
CMD ["SpeedTest.ps1"]

To map a directory on the host machine to the container, I used Docker Compose, as this was simpler than passing volume flags to docker run. Below is the docker-compose.yml file that I created.

This names the container runner and maps “/home/pi/speedtest/logs” on the host machine to “/etc/speedtest/logs” within the container. It also configures the container to restart should the SpeedTest.ps1 script exit, using the “restart: unless-stopped” restart policy.

docker-compose.yml

services:
  runner:
    build: ./
    volumes:
      - /home/pi/speedtest/logs:/etc/speedtest/logs
    restart: unless-stopped

Finally, here is the SpeedTest.ps1 script, which executes the speedtest-cli Python script and writes the output to “/etc/speedtest/logs/SpeedTest.csv” within the container, which is mapped to “/home/pi/speedtest/logs/SpeedTest.csv” on the host machine.

SpeedTest.ps1

# Loop forever: test the connection speed every 5 minutes and append to a CSV
while ($true)
{
    $Time = Get-Date
    # speedtest-cli --simple outputs three lines: Ping, Download, Upload
    $SpeedTest = speedtest-cli --simple
    # Write a CSV row of: timestamp, ping (ms), download (Mbit/s), upload (Mbit/s)
    $Time.ToString() + "," + $SpeedTest[0].split(" ")[1] + "," + $SpeedTest[1].split(" ")[1] + "," + $SpeedTest[2].split(" ")[1] >> "/etc/speedtest/logs/SpeedTest.csv"
    Start-Sleep -Seconds 300
}
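Because speedtest-cli --simple prints three lines (Ping, Download, Upload), each CSV row ends up as timestamp, ping, download, upload. As a quick sketch, here’s how the resulting file could be read back in Python, say for graphing – the sample row and field names below are made up for illustration:

```python
import csv
from io import StringIO

# A made-up sample row in the format the script appends:
# timestamp, ping (ms), download (Mbit/s), upload (Mbit/s)
sample = "01/01/2022 12:00:00,23.5,74.2,18.9\n"

def read_results(f):
    """Parse the headerless SpeedTest.csv into a list of dicts of strings."""
    fields = ["timestamp", "ping_ms", "download_mbit", "upload_mbit"]
    return [dict(zip(fields, row)) for row in csv.reader(f)]

print(read_results(StringIO(sample))[0]["download_mbit"])  # → 74.2
```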

To get this container up and running I created a directory on the host machine “/home/pi/speedtest” and placed the three files within this directory:

  • SpeedTest.ps1
  • Dockerfile
  • docker-compose.yml

I then executed “docker-compose up -d” from within the “/home/pi/speedtest” directory to build, create and start the container; -d runs the container in detached (background) mode rather than interactively.

I then waited a while and checked the SpeedTest.csv log file within “/home/pi/speedtest/logs” to confirm that the script was running!

Result… now on to my next Dockerization project!

The three files used to create this container can be found on GitHub here.

Adventures in running Pi-hole within a Docker container on a Raspberry Pi

Pi-hole is a DNS sinkhole that protects devices from unwanted content, without installing any client-side software. I’ve run Pi-hole on a Raspberry Pi Zero for the last year or so and have found it easy to use (it’s literally set and forget) and super effective. I have a proliferation of Pis around my house and wanted to consolidate them by migrating the various workloads running on them to Docker containers running on a single Raspberry Pi 4 that I have.

I decided to start with migrating Pi-hole from my Pi Zero to a Docker container running on my Pi 4, I chose to do this first as there is a pre-built Pi-hole image for Docker and fantastic documentation.

The first step was to build the Pi 4. I used the Raspberry Pi Imager tool to prepare an SD card with Raspberry Pi OS (formerly known as Raspbian). As I’m running this headless, I used the advanced options within the tool to configure the hostname of the device, enable SSH, set the locale and configure a password – it saved the hassle of plugging in a keyboard and monitor and doing this manually post-install.

Once my Pi 4 was running I connected over SSH (which you can do in Windows by running ssh pi@hostname) and enabled VNC via raspi-config which also gives me GUI access to the Pi.

I then needed to install Docker and Docker Compose; I previously posted about how to do this here. Here are the commands I ran on the Pi to do this:

sudo apt-get update && sudo apt-get upgrade   # bring the OS up to date
curl -sSL https://get.docker.com | sh         # install Docker via the convenience script
sudo usermod -aG docker ${USER}               # let the current user run docker without sudo
sudo pip3 install docker-compose              # install Docker Compose
sudo systemctl enable docker                  # start Docker automatically on boot

Once this had completed, which took less than 5 minutes, I rebooted the Pi (sudo reboot from a terminal).

Now that I had the Pi up and running along with Docker, I could create the Pi-hole container. To do this, I took the example Docker Compose YAML file and edited it to meet my requirements, saving this as docker-compose.yml:

  • Run in host mode – by specifying network_mode: “host”. This setting is described here; it means that the container will share an IP address with the host machine (in my case the Pi 4). I used this to keep things simple; I may regret this decision at a later date 🤦‍♂️.
  • Configure the hostname – using container_name. I actually kept this as the default setting of pihole.
  • Set the timezone – setting this to Europe/London, using this article to determine the correct value 🌍.
  • Specify a password for the web admin interface – this is configured with WEBPASSWORD; I used a password slightly more complex than “password” 🔒.

A copy of the docker-compose.yml file I created can also be found here.

version: "3"

# More info at https://github.com/pi-hole/docker-pi-hole/ and https://docs.pi-hole.net/
services:
  pihole:
    container_name: pihole
    image: pihole/pihole:latest
    # For DHCP it is recommended to remove these ports and instead add: network_mode: "host"
    network_mode: "host"
    environment:
      TZ: 'Europe/London'
      WEBPASSWORD: 'password'
    # Volumes store your data between container upgrades
    volumes:
      - './etc-pihole:/etc/pihole'
      - './etc-dnsmasq.d:/etc/dnsmasq.d'    
    restart: unless-stopped

I then created a directory within /home/pi named “pihole”, copied the docker-compose.yml file into it and then ran the following command from within this directory to build and run the container:

docker-compose up -d

Within a few minutes I had a shiny new Pi-hole container up and running!

Next step was to update the DHCP settings on my router to use Pi-hole as the default DNS server it provides to devices on my network. I did this by specifying the IP address of the Pi 4 as the preferred DNS server for DHCP clients; I obtained the IP address of the Pi by running ifconfig from a terminal (I know I should really be using a static IP address on the Pi 😉). I won’t cover how I updated my router, due to the multitude of different routers out there. I then ran ipconfig /release and ipconfig /renew on my Windows machine to refresh the DNS settings; my other devices will pick up the new settings when they renew their DHCP lease, which is daily.

I then browsed to the web interface using http://hostname/admin – in my case http://pi4.local/admin – hit the login button and authenticated using the password I’d specified in the docker-compose.yml file.

The Pi-hole container had been running for around an hour, with minimal web activity (as I was writing this post) when I took this screenshot – it’s staggering the number of queries that it had blocked 😲.

Why is my Python code so slow?!?

I begrudgingly started using Python when I got my first Raspberry Pi (as PowerShell wasn’t available for the Pi at that point). As a non-developer my development experience was limited to writing PowerShell, with a sprinkling of C#.

The learning curve wasn’t too bad, I found Automate the Boring Stuff with Python an invaluable resource, along with the associated course on Udemy. I have grown to love Python over the last few years, it’s so flexible and there are a ton of resources out there to help when you get stuck (as I invariably do!) 😕.

The one thing I love about working in tech is the constant learning – every day is a school day! I learnt last week how to profile code in Python using a simple one-line command courtesy of @Lee_Holmes:

python -m cProfile -s cumtime 'C:\Profiling.py'

This command profiles a script (in the example above, Profiling.py) and outputs details of the cumulative execution time of each function within the script. This is super-useful when trying to pinpoint why a script is running so slowly 🐌.

I wrote the script below (Profiling.py), which contains three functions that are each called once; each function includes a sleep to simulate its execution time – with function 1 being the quickest and function 3 the slowest. I then ran the profiler on it to see what it reported 🔍.
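The original script was shown as a screenshot; a minimal reconstruction matching that description looks something like this (the function names and exact sleep durations are my guesses):

```python
import time

def function_1():
    time.sleep(1)  # quickest

def function_2():
    time.sleep(2)

def function_3():
    time.sleep(3)  # slowest

if __name__ == "__main__":
    # Each function is called exactly once, so its cumulative time in the
    # profiler output should match its sleep duration
    function_1()
    function_2()
    function_3()
```

Running `python -m cProfile -s cumtime Profiling.py` against this sorts the output by cumulative time, so function_3 appears above function_2 and function_1.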

You can see in the output below that the profiler reports the cumulative execution time of each function – the script only calls each function once, so this time matches the sleep I included within the function. In the real world, where each function is likely to be called multiple times, you’d have a better view of where time is being spent in execution and could identify opportunities to tweak specific functions to reduce the overall execution time.

I’m sure I’ll be putting this newfound knowledge to good use! 😀

Continuous Integration for Nintendo Game Boy development using GitHub Actions

I previously posted about writing Hello World for the Nintendo Game Boy using the GBDK.

I’ve been meaning to spend some quality time with GitHub Actions and found the perfect excuse – doing Continuous Integration for Game Boy development 😀. I bet I can count the people that are interested in this topic in the entire world on one hand! In any case it was a great excuse to learn GitHub Actions 🤖.

After much trial and error (although admittedly a lot less than I expected!), I ended up with the GitHub Actions workflow below. It contains a single job with multiple steps that does the following:

  • Triggers when code is pushed to the main branch of the repo
  • Checks out the code in the repo
  • Downloads a copy of GBDK from https://github.com/gbdk-2020/gbdk-2020/releases/latest/download/gbdk-win.zip and extracts the contents to the Windows based runner for the Action (this is required to compile the code)
  • Uses lcc.exe (from the GBDK) to build the C source file (main.c) into a Game Boy ROM (helloworld.gb)
  • Creates a release using a random number (generated using PowerShell) for the tag and release name
  • Uploads the helloworld.gb ROM to the release

name: Game Boy CI
# Controls when the workflow will run
on:
  # Triggers the workflow on push
  push:
    branches: [ main ]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: windows-latest
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v2
      - name: Compile C source code into .GB ROM file
        run: |
          Invoke-WebRequest -Uri https://github.com/gbdk-2020/gbdk-2020/releases/latest/download/gbdk-win.zip -Outfile GBDK.zip
          Expand-Archive GBDK.zip -DestinationPath GBDK
          ./"GBDK\gbdk\bin\lcc.exe" -o helloworld.gb main.c
          echo "RANDOM=$(Get-Random)" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
      - name: Create Release
        id: create_release
        uses: actions/create-release@v1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # This token is provided by Actions, you do not need to create your own token
        with:
          tag_name: ${{env.RANDOM}}
          release_name: Release ${{env.RANDOM}}
          draft: false
          prerelease: false
      - name: Upload Release Asset
        id: upload-release-asset 
        uses: actions/upload-release-asset@v1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          upload_url: ${{ steps.create_release.outputs.upload_url }} # This pulls from the CREATE RELEASE step above, referencing its ID to get its outputs object, which includes an `upload_url`. See this blog post for more info: https://jasonet.co/posts/new-features-of-github-actions/#passing-data-to-future-steps 
          asset_path: helloworld.gb
          asset_name: helloworld.gb
          asset_content_type: application/octet-stream

Here’s the Action in “action”

…and here is the result, a freshly compiled GB ROM.

Here is a direct link to the repo, if you’d like a closer look.

Writing “Hello World” for a Nintendo Game Boy!

I am a huge retro-gaming geek, I love reliving my childhood and playing classics such as Super Mario Land and Donkey Kong. The Nintendo Game Boy was (and still is!) one of my favourite systems. I’ve always steered clear of doing any development on retro-gaming systems from the 80s/90s though, as this typically involves writing code in assembly language and the learning curve for a non-dev like me is far too high 🧠.

I recently discovered the Game Boy Development Kit (GBDK) on GitHub, which allows you to write software for the Nintendo Game Boy (and a few other systems too!) in C.

I’m certainly no expert in C, however I was fairly sure that I could knock up “Hello World” without too much of an issue.

I downloaded the GBDK and extracted it to a folder (no install is required), and then set about writing this masterpiece –

#include <stdio.h>

void main()
{
    int counter = 1;

    // Print "Hello World!" 16 times – enough to fill the Game Boy's screen
    while (counter <= 16)
    {
        printf("\nHello World!");
        counter++;
    }
}

This is similar to the BASIC – 10 Print “Hello World”, 20 GOTO 10 – that fills the screen with “Hello World”, which I first wrote on my Amstrad CPC 6128 back in 1990 😀.

Once I’d saved the file (naming it “helloworld.c”), I then compiled the code using the following command. This creates a ROM for the Game Boy named helloworld.gb

./gbdk\bin\lcc.exe -o helloworld.gb helloworld.c

Once the code had compiled (which literally took a second ⏱️), I headed over to https://virtualconsoles.com/online-emulators/gameboy/ – an online emulator for the Game Boy – and loaded up my newly created ROM.

Voila!

12-year-old me would have been amazed!

Inspecting Azure Function logs from the command line 🔎

I’ve been playing around with Azure Functions recently. One thing I like about them is how you can code, test and view the logs in real time from within the Azure Portal – below you can see me testing my Burger Tax function!

One thing I was interested in doing was getting access to the logs remotely (directly from my machine rather than going through the Azure Portal). It turns out that you can do this using the Azure Functions Core Tools.

I installed the Core Tools (the Azure CLI is a pre-requisite) and, after logging into Azure using az login, I could connect to the logs using this command:

func azure functionapp logstream <FunctionAppName>

In the example below, I connected to an Azure Function App named burgertax26343 (a very catchy name, I know!). To over-complicate things, I ran this within Ubuntu running on WSL on my Windows 11 device – you can of course run this natively on Windows, macOS and Linux.

I then fired up PowerShell to send a test request to the Azure Function App using Invoke-RestMethod (this example is a Starling Bank webhook; read more about how I’m using this here).

After running this, I flipped back to the other terminal window, where I could see the output – in this case the function used Write-Host to confirm that it had triggered and to output the name of the merchant (Next).