Creating an Alexa skill to read my bank balance

I recently posted about my experiences with the Starling Bank developer APIs and shared some Python that I’d written to retrieve my bank balance and most recent transaction.

I’ve played around with creating Alexa skills before… my most useful was a skill that told me when my rubbish (garbage) 🗑️ was next going to be collected and which colour bin I needed to put out. A real lifesaver!

I decided to have a go at creating an Alexa skill that leveraged the code I’d written to check my bank balance and retrieve my most recent transaction. It was a success and here it is in action!

Rather than write a step-by-step blog post that describes how to do it, I thought I’d put together a walkthrough video, here it is in all its glory.

The AWS Lambda function used by the Alexa Skill (that I demo’d in the video) can be found on GitHub.
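
If you’d rather skim than watch, below is a minimal sketch of the general shape of an Alexa skill’s Lambda handler. To be clear, this is illustrative rather than a copy of my actual function (that’s on GitHub) – the GetBalanceIntent name and the get_balance_text() helper are hypothetical stand-ins.

def lambda_handler(event, context):
    # Route the incoming Alexa request to a spoken response
    request_type = event["request"]["type"]
    if request_type == "LaunchRequest":
        speech = "Welcome. Ask me for your balance."
    elif request_type == "IntentRequest" and event["request"]["intent"]["name"] == "GetBalanceIntent":
        speech = "Your balance is " + get_balance_text()
    else:
        speech = "Sorry, I didn't catch that."
    # Alexa expects a JSON response in this shape
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }

def get_balance_text():
    # Hypothetical stand-in for the Starling API call covered in the next post
    return "one hundred pounds"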

Get access to my bank account using Python… why not?!? 🐍

I recently opened an account with Starling Bank, a UK-based challenger bank. I had read “Banking On It: How I Disrupted an Industry” by their founder Anne Boden (a fantastic book BTW!) and was really intrigued as to how they would stack up against my current “traditional” bank… anyway, this isn’t a blog about finance! I noticed that they had an API available, which was the perfect excuse for me to geek out 😀.

It was straightforward to get started: all I had to do was create a developer account and link it to my Starling bank account, which took me less than 5 minutes. Once I’d done this I headed over to the “Getting Started” page within their developer site and created a Personal Access Token (PAT) that could be used to access my account – as I wasn’t planning to create any apps that access anything other than my own personal account, I didn’t need to bother registering an app. When creating a PAT you can specify the scope – for example you may only want the PAT to be able to read your balance (wouldn’t it be nice if you could write to the balance 💵???).

As I wasn’t entirely sure what I was going to do with this newfound access to my bank account, I selected everything.

The API and documentation provided by Starling are superb – there’s even a Sandbox to play around in. After a couple of hours of experimenting I’d managed to create some Python code to print my balance and details of my most recent transaction.

I put together the Python code below, which does the following:

  • Imports the required modules – requests is used to make the calls to the API and datetime is used to calculate dates within the get_transactions() function.
  • Stores the Personal Access Token in the PAT variable (breaking every single security best practice in one fell swoop)
  • Defines four functions:
    • get_account() – Retrieves the unique identifier of a bank account, as a user can have more than one account it’s important that the correct account is selected. As I have a single account, I just return what’s at index 0. This function is used by the get_balance() and get_transactions() functions.
    • get_default_category() – This retrieves the default category for an account, this is required for the get_transactions() function. An account can have multiple categories defined; new categories are created when additional spaces are added to an account. A space is akin to a virtual account within an account, for example you could create a space specifically for holiday savings – a really cool feature and something that was new to me.
    • get_balance() – Returns the current balance of the account. I used the “effectiveBalance”, which takes pending transactions into account and is therefore more accurate than the alternative “clearedBalance”, which does not.
    • get_transactions() – Returns details of the most recent transaction within a specific date range. It takes a single argument (days), which is the number of days back from today to search.
import requests
import datetime

PAT = "insert PAT here"
url = "https://api.starlingbank.com/api/v2/"
headers = {"Authorization": "Bearer " + PAT}

def get_account():
    # Return the unique identifier (accountUid) of the first account
    r = requests.get(url + "accounts", headers=headers)
    return r.json()["accounts"][0]["accountUid"]

def get_default_category():
    # Return the default category of the first account (needed for the feed query)
    r = requests.get(url + "accounts", headers=headers)
    return r.json()["accounts"][0]["defaultCategory"]

def get_balance():
    # Print the effective balance, converted from minor units (pence) to pounds
    balance = requests.get(url + "accounts/" + get_account() + "/balance", headers=headers)
    print("£" + str(balance.json()["effectiveBalance"]["minorUnits"] / 100))

def get_transactions(days):
    # Print the most recent feed item from the last <days> days
    datefrom = (datetime.datetime.now() - datetime.timedelta(days=days)).strftime("%Y-%m-%d") + "T00:00:00Z"
    feeditems = requests.get(url + "feed/account/" + get_account() + "/category/" + get_default_category() + "?changesSince=" + datefrom, headers=headers)
    latest = feeditems.json()["feedItems"][0]
    print(latest["source"] + "\n" + latest["direction"] + "\n" + latest["amount"]["currency"] + ":" + str(latest["amount"]["minorUnits"] / 100))

I’ve uploaded this script to GitHub too.

Here are the functions in action! The call to get_balance() demonstrates how rich I am! For get_transactions() I passed the argument 30, which returns the most recent transaction from the previous 30 days, including who the transaction was with, the direction (money in/out) and the amount.
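
For reference, trying this yourself is just a case of calling the functions – the values in the comments below are made up, of course:

get_balance()         # prints e.g. £123.45
get_transactions(30)  # prints the source, direction and amount of the most recent transaction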

As I continue to experiment with the API, I’m tempted to write an Alexa skill that I can use to query my account 🤔.

Going back to the 1980s with a Raspberry Pi Pico

In my continued mission to purchase every device that the Raspberry Pi Foundation releases, I acquired a Raspberry Pi Pico shortly before the holiday season 😀.

Pi Pico

The Pico is a super-cheap (£3.60) microcontroller that is programmable with MicroPython (a scaled-down version of Python) and C – you can find out more about it here.

I splashed the cash and spent £11 on a Maker Pi Pico, which is a Pico pre-soldered onto a maker board that has an LED indicator for each GPIO pin, 3 programmable pushbuttons, an RGB LED, a buzzer, a stereo 3.5mm audio jack, a micro-SD card slot, an ESP-01 socket and 6 Grove ports.

Maker Pi Pico

To program the device using MicroPython you need to install MicroPython on the Pico (full instructions here) and then install the Thonny Python IDE on the device you’d like to do development on (Windows, Mac or Linux). I should add that if you want to develop on a Raspberry Pi running Raspberry Pi OS, this step isn’t required as Thonny comes pre-installed. A step-by-step guide on how to use Thonny with a Pico is available here.

The biggest mistake I made was to use a cheap USB cable that didn’t support data. I was scratching my head for a while trying to figure out why my Windows 11 machine couldn’t see the Pico (it presents itself as a mass storage device), until I tried another cable, which worked like a charm.

After playing around with some of the sample Python scripts for the Maker Pi Pico, I thought I’d go all 1980s and try to re-create a super-basic graphic equalizer. If you don’t have a clue what I’m talking about check out this YouTube video for a demo – https://www.youtube.com/watch?v=GlgVoYH6bPo.

I put together the script below (also available on GitHub here), which does the following:

  1. Turns off the LEDs attached to the GPIO pins 0-15
  2. Generates a random number between 0 and 15
  3. Lights up each LED in order, e.g. if the random number generated is 4 it will turn on the LEDs for GPIO 0, 1, 2, 3 and then 4
  4. Turns off each LED in reverse order, therefore 4, 3, 2, 1 and then 0
  5. Repeats from step 2

I’m sure that there are far more practical applications for a Pico, but this kept me amused for a short while.

import machine
import utime
import random

# Configure GPIO 0-15 as outputs and start with all LEDs off
pins = [machine.Pin(i, machine.Pin.OUT) for i in range(16)]
for pin in pins:
    pin.value(0)

def lightup(id):
    # Light the LEDs in order from GPIO 0 up to (and including) the given pin...
    for i in range(id + 1):
        pins[i].value(1)
        utime.sleep(0.01)
    # ...then turn them off again in reverse order
    for i in reversed(range(id + 1)):
        pins[i].value(0)
        utime.sleep(0.01)

while True:
    randnum = random.randint(0, 15)  # Pick a random GPIO pin between 0 and 15
    lightup(randnum)
    utime.sleep(0.15)

Here it is in all its glory:

Sharing my Home Office Setup (November 2021)

This is something that I’ve been meaning to do for a while – a few folks in my team recently shared their home office setups on LinkedIn, which motivated me to actually do it.

I’ll update this any time I make major changes – it will be interesting for me to look back as my setup continues to evolve. I really wish that I’d documented how it has evolved over the years; my first home office setup was back in 2005, in the heady days of Windows XP!

Running an Amstrad CPC 6128 on Microsoft Edge on Linux on Windows 11!

I’ve recently upgraded my Surface Book 2 to Windows 11, and one of the first things I took for a test spin was Windows Subsystem for Linux GUI (WSLg), which provides the ability to run Linux GUI apps directly on Windows!

It took me around 5 minutes to get WSLg up and running (including a reboot) using this step-by-step guide; I opted to use the Ubuntu Linux distro (which is the default). One of the first things that I did was to install Microsoft Edge for Linux – instructions on how to do this can be found here.
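
For what it’s worth, on Windows 11 the setup now boils down to a single command from an elevated prompt (followed by a reboot) – Ubuntu is the distro it installs by default:

wsl --install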

One of the cool things about WSLg is that Linux apps that have been installed appear in the Start Menu!

This got me thinking… could I spin up an emulator for the greatest computer of all time – the Amstrad CPC 6128 (my very first computer) – within Edge? It turns out I could, using http://crocods.org/web 😀. So here we have an Amstrad CPC 6128 running within Edge on Linux on Windows 11.

Check out my BASIC skills!

If you are interested in finding out how WSLg works under the hood, I’d recommend checking out this blog post.

Raspberry Pi Tips and Tricks

I’ve had a Raspberry Pi since it launched back in 2012; I was that excited when mine arrived that I even tweeted about it 😀.

Over the years I’ve used them for all kinds of things, ranging from testing my Internet connection (which I blogged about here) to playing my favourite video games from the 90s using RetroPie – what better use of a Pi than winding back the years and playing Sonic the Hedgehog and Mario like it’s 1992 again!

I thought I’d share a few of my Tips and Tricks for using a Raspberry Pi.

Running a Headless Raspberry Pi

I run all my Pis headless (not attached to a monitor, keyboard, and mouse) and use SSH and VNC to access them over the network, which works well. One small annoyance was the need to manually configure Wi-Fi and SSH whenever I set up a new Pi (or re-imaged an existing one – I tend to break them!), which meant I had to connect the Pi to a monitor and keyboard to perform the initial configuration prior to going headless.

I recently became aware that the Raspberry Pi Imager (a tool that can be used to write OS images to an SD card for the Pi) has a hidden advanced options menu that you can use to configure elements of the OS. All you need to do after launching Raspberry Pi Imager is hit CTRL+SHIFT+X (on Windows) to launch the advanced options menu; whatever you configure here gets applied to Raspberry Pi OS when it’s written to the SD card – neat eh!

In the example below, I did the following:

  • Set the hostname to mypi
  • Enabled SSH and set the password for the default pi user
  • Configured it to connect to a Wi-Fi network (QWERTY in this example)

You can also do other things such as disabling overscan and setting the locale. Once you’ve finished editing the configuration, hit save; when you then write Raspberry Pi OS to the SD card it will pre-configure the OS with the settings specified. This has saved me a ton of time (and fiddling around with cables!). The only thing I have to do manually now is configure VNC, although I can do this via SSH using raspi-config.
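
As an aside, raspi-config can also be driven non-interactively, so the VNC step can be scripted over SSH too – something like the command below (worth double-checking against your version of Raspberry Pi OS):

sudo raspi-config nonint do_vnc 0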

Exposing a Pi to the Internet

I built a rudimentary surveillance camera for my house using the Pi Camera and this script sample which creates a basic web server and streams footage from the Pi Camera.

I didn’t use this to monitor my house for burglars… its main purpose was to help me keep an eye on my cat 😸. The one problem was that it was only accessible from within my home network, which wasn’t really that useful when I was out and about. I did some research and came across ngrok, which makes it super simple to expose a Raspberry Pi to the Internet without doing anything too risky such as configuring port forwarding on your router. This enabled me to keep tabs on my cat wherever I was in the world (as long as I had an Internet connection).

ngrok supports macOS, Windows, Linux and FreeBSD, it’s super simple to set up, and it’s free to use (with some restrictions). Here is a guide on how to expose a local web server to the Internet – it’s literally a single command!

ngrok http 80

Once this command is run, it will provide the external URLs that the local port (80) has been exposed on (by default it creates both an HTTP and an HTTPS endpoint). It’s then as simple as connecting to one of the public URLs, which will route traffic through to the port exposed on the Pi.

Below you can see this in action… I’ve obscured the publicly accessible URLs (“Forwarding”) as these contain my public IP address.

There is also a web management interface that can be accessed locally from the device, which allows you to inspect requests, review the configuration and view metrics.
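
The same information is exposed programmatically too – the management interface listens on localhost port 4040 and offers a small JSON API. As a rough sketch (assuming the default port), you could grab the public URLs from a script on the Pi itself:

import requests

# Query ngrok's local agent API for the active tunnels
tunnels = requests.get("http://localhost:4040/api/tunnels").json()["tunnels"]
for tunnel in tunnels:
    print(tunnel["public_url"], "->", tunnel["config"]["addr"])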

Obviously, this is a great tool for testing and playing around with, it’s definitely not something I’d use in production 😀.

Using PowerShell on the Pi

Yes, you read that correctly – you can run PowerShell on the Pi! As somebody who comes from a Windows background who loves PowerShell I was over the moon when PowerShell went cross-platform. I couldn’t ever imagine a day that PowerShell would be available on the Pi – kudos to whoever pushed for making it a cross-platform tool.

As much as I like Python, I have far more experience with PowerShell and sometimes it’s simpler to run a PowerShell command using my muscle memory than spending time researching how to do the equivalent using Python.

PowerShell is super simple to install on Raspberry Pi OS – this guide steps through the process. I also created a symbolic link so that I don’t have to type the full path to the pwsh (PowerShell) binary when using it (this is also covered in the guide).
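
For reference, the symbolic link itself is a one-liner – something like the below, assuming PowerShell was unpacked to ~/powershell as per the guide:

sudo ln -s ~/powershell/pwsh /usr/bin/pwsh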

Once you’ve done that, you are good to go:

As a side note, I can also highly recommend Visual Studio Code – I write all my Python and PowerShell scripts on the Pi using it.

Querying the Microsoft Graph with Python

One of my colleagues mentioned to me that data from MyAnalytics (which is a feature of Viva Insights within Microsoft 365) is now accessible via the Beta endpoint of the Microsoft Graph. If you aren’t familiar with it, you can find out more about MyAnalytics here.

I was particularly excited as MyAnalytics has a wealth of Microsoft 365 usage data, which it analyzes to provide users with personalised insights based on their work patterns and behaviours, for example:

Clicking Learn more on each card provides additional guidance:

I was really interested to examine the data returned by the Beta Graph endpoint for MyAnalytics. Looking at the documentation, it provides two key pieces of functionality:

Activity statistics returns statistics on the following data points for the previous week (Monday to Sunday) for a user. It’s currently not possible to specify a particular week to query; it will simply return data from the previous week.

  • Calls (Teams)
  • Chats (Teams)
  • Emails (Exchange)
  • Meetings (Exchange)
  • Focus – this is explained here

If I take emails as an example, this returns the following properties:

…and here are the returned properties for meetings:

Productivity and self-improvement are two areas of immense interest to me. Using the MyAnalytics data returned from the Graph, I could envisage creating some custom reports to track my work patterns over time and then acting on them – for example, the data could highlight that I’ve spent more time working outside of working hours recently, or that I’m starting to attend more recurring meetings.

As a side note: Outlook settings are used to determine a user’s working hours.

The next step for me was to create a Python Web app (using Flask) to retrieve a subset of this information from the Graph (I always love to overcomplicate things!).

I took the Flask-OAuthlib sample from https://github.com/microsoftgraph/python-sample-auth and tweaked it to my needs; my updated script can be found below and on GitHub.

This script could be tweaked to perform other Graph queries if needed – see the example after the walkthrough below.

import uuid
import json
import flask
from flask_oauthlib.client import OAuth

CLIENT_ID = ''
CLIENT_SECRET = ''
REDIRECT_URI = 'http://localhost:5000/login/authorized'
AUTHORITY_URL = 'https://login.microsoftonline.com/organizations'
AUTH_ENDPOINT = '/oauth2/v2.0/authorize'
TOKEN_ENDPOINT = '/oauth2/v2.0/token'
RESOURCE = 'https://graph.microsoft.com/'
API_VERSION = 'beta'
SCOPES = ['Analytics.Read']

APP = flask.Flask(__name__)
APP.secret_key = 'development'
OAUTH = OAuth(APP)
MSGRAPH = OAUTH.remote_app(
    'microsoft', consumer_key=CLIENT_ID, consumer_secret=CLIENT_SECRET,
    request_token_params={'scope': SCOPES},
    base_url=RESOURCE + API_VERSION + '/',
    request_token_url=None, access_token_method='POST',
    access_token_url=AUTHORITY_URL + TOKEN_ENDPOINT,
    authorize_url=AUTHORITY_URL + AUTH_ENDPOINT)

@APP.route('/')
def login():
    """Prompt user to authenticate."""
    flask.session['state'] = str(uuid.uuid4())
    return MSGRAPH.authorize(callback=REDIRECT_URI, state=flask.session['state'])

@APP.route('/login/authorized')
def authorized():
    """Handler for the application's Redirect Uri."""
    if str(flask.session['state']) != str(flask.request.args['state']):
        raise Exception('state returned to redirect URL does not match!')
    response = MSGRAPH.authorized_response()
    flask.session['access_token'] = response['access_token']
    return flask.redirect('/graphcall')

@APP.route('/graphcall')
def graphcall():
    """Query the MyAnalytics activityStatistics endpoint and display a summary."""
    endpoint = 'me/analytics/activityStatistics'
    headers = {'SdkVersion': 'sample-python-flask',
               'x-client-SKU': 'sample-python-flask',
               'client-request-id': str(uuid.uuid4()),
               'return-client-request-id': 'true'}
    graphdata = MSGRAPH.get(endpoint, headers=headers).data
    # Normalise the raw response into a Python dictionary
    data = str(graphdata).replace("'", '"')
    datadict = json.loads(data)
    summary = []
    # Iterate over the first five entries returned (one per activity type for the first day), skipping Focus
    i = 0
    while i < 5:
        if datadict["value"][i]["activity"] == "Focus":
            i += 1
        else:
            summary.append("Activity Type: " + datadict["value"][i]["activity"] + " / Date: " + datadict["value"][i]["startDate"] + " / After Hours " + datadict["value"][i]["afterHours"])
            i += 1
    return str(summary)

@MSGRAPH.tokengetter
def get_token():
    """Called by flask_oauthlib.client to retrieve current access token."""
    return (flask.session.get('access_token'), '')

if __name__ == '__main__':
    APP.run()

This script (Flask Web app) does the following:

  • Prompts the user to authenticate to an M365 tenant (requesting access to the ‘Analytics.Read’ scope in the Graph)
  • Queries the me/analytics/activityStatistics endpoint
  • Returns the following information for each activity type for the first day in the reporting period (excluding Focus)
    • Date (“startDate”)
    • Activity Type (“activity”)
    • Time spent after hours on the activity (“afterHours”)

If you take a closer look at the script, you’ll see that it takes the raw JSON output from the Graph, converts it to a Python dictionary, then iterates through the first day of the week’s data for each activity type (excluding Focus) and outputs this as a string. It’s certainly not pretty, but this is more of a proof of concept to get me started 😀.
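
As promised, here’s a flavour of how the script could be tweaked for other Graph queries – a hypothetical example that would list some recent mail instead of the analytics data (the parsing in graphcall() would of course need adjusting to match the shape of the new response):

SCOPES = ['Mail.Read']  # replaces Analytics.Read
endpoint = 'me/messages?$top=5'  # replaces me/analytics/activityStatistics in graphcall()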

Before running this script, you’ll need to do a few things:

  • Install the pre-requisites (pip install -r requirements.txt)
  • Register an application in Azure AD – here is a walkthrough of how to do this
  • In addition to the above, add the Analytics.Read permission (example below) – this is required to get access to the MyAnalytics data
  • Update the CLIENT_ID and CLIENT_SECRET variables (using the values obtained when registering the app in Azure AD)
  • Run the script using “python app.py”
  • Launch a browser and connect to http://localhost:5000

You should then (hopefully!) see the following:

A sign in page:

Once authenticated, you should see the following screen – which requests specific permission to the user’s data.

Once you’ve clicked Accept, the delightful screen below should be displayed, which includes the raw output. The test tenant I used to create the script has no usage, hence the important data (After Hours) reports zero; in a real-world scenario this would be a cumulative value representing the time spent on each activity after hours.

I’ll likely write more about this as my experimentation continues…

Tinkering with Azure Anomaly Detector

I’ve fancied having a play around with Azure Anomaly Detector (which is part of Azure Cognitive Services) for some time, but I’d never really had a good use case or excuse to do so… until now!

I recently created a script for my Raspberry Pi that regularly checks the latency and speed of my Internet connection and writes the results to a CSV file. I wrote about this here; my primary reason for doing so was to help me diagnose some random connectivity issues I was experiencing – although ironically, since creating the script my Internet connection has been super-reliable!

It got me thinking that I could take the data collected by this script and run it through Anomaly Detector to automate analysis and identify specific times of the day that my Internet connection speed deviated from the norm, which sounded far more appealing than cracking open Excel and doing this manually 😀.

I put together the PowerShell script below (which is also available on GitHub). It takes the CSV file created by my Internet speed test script, extracts the relevant data, formats it into JSON and submits it to Azure Anomaly Detector for analysis. I opted to perform Batch rather than Streaming detection as I didn’t need to analyse the data in real time (now that would be overkill!) – the differences between Batch and Streaming detection are explained here. I’m also using the Univariate API as this doesn’t require any experience with ML.

I opted to call the REST endpoint directly using this sample as inspiration, the script does the following:

  • Creates a JSON representation of the data from an input CSV file (“SpeedTestAnomaly.csv”) in the format required by Anomaly Detector – an example JSON for reference can be found here. I’ve also uploaded a sample input file to GitHub; I’m only using two values from this input file – the date/time and the download speed.
  • Submits this to Anomaly Detector – I’m using the maxAnomalyRatio and sensitivity settings from the sample (0.25 and 95 respectively). I used hourly granularity as I only test my Internet connection once per hour.
  • Returns the expected and actual results for each test and indicates if the results were flagged as an anomaly (Red = Anomaly, Green = OK)

If you do want to re-use this script, you’ll need to update the $AnomalyURI and $APIKey variables.

$JSON = @"
{ 
    "series": [
    ],
   "maxAnomalyRatio": 0.25,
   "sensitivity": 95,
   "granularity": "hourly"
  }
"@
$NonJSON = $JSON | ConvertFrom-Json

$Output = Get-Content ./SpeedTestAnomaly.csv
Foreach ($Line in $Output)
{
  $DL = $Line.split(",")[2]  
  $Date = $Line.split(",")[0]
  $Add = New-Object -TypeName psobject -Property @{timestamp = $Date;value = $DL}
  $NonJSON.series += $Add
}

$JSON = $NonJSON | ConvertTo-Json

$AnomalyURI = "https://PREFIX.cognitiveservices.azure.com/anomalydetector/v1.0/timeseries/entire/detect"
$APIKey = "KEY"

$Result = Invoke-RestMethod -Method Post -Uri $AnomalyURI -Header @{"Ocp-Apim-Subscription-Key" = $APIKey} -Body $JSON -ContentType "application/json" -ErrorAction Stop

$i = 0
Foreach ($Anomaly in $Result.isAnomaly)
{
  if ($Anomaly -eq "True") 
  {
    Write-Host "Expected Value: " $Result.expectedValues[$i] "Actual Value: " $NonJSON.series[$i] -ForegroundColor Red
  }
  else 
  {
    Write-Host "Expected Value: " $Result.expectedValues[$i] "Actual Value: " $NonJSON.series[$i] -ForegroundColor Green
  }
  
  $i ++
}

Below is an extract from the input file (SpeedTestAnomaly.csv) – I’m only using Column A (date/time) and Column C (download speed in Mbps).
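
For illustration, each row looks something like the below (made-up values – note that Anomaly Detector expects ISO 8601 timestamps):

2021-11-01T09:00:00Z,15.2,61.38,18.41
2021-11-01T10:00:00Z,14.9,60.95,18.22
2021-11-01T11:00:00Z,15.4,35.07,18.35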

Below is the output of the script. This details the expected and actual values for each hourly test and highlights in red those tests with a result identified as an anomaly – you can see there are three examples where anomalies were detected in my Internet connection speed over the course of a couple of days.

Face Detection and Analysis using Azure Cognitive Services and a Raspberry Pi

I recently blogged about creating a Mood Detector using Lobe and wondered what other options were available for face analysis, which led to me ramping up on Azure Cognitive Services – more specifically the Face Service, which has some really cool capabilities.

I used my trusty Raspberry Pi (with attached camera) and developed a Flask application using Python; however, rather than using the Face client library for Python, I opted for the REST API so that the code is a little more portable.

I created a Flask app that does the following:

  • Takes a picture
  • Submits this picture to the REST API endpoint for the Face Service
  • Returns the detected age, gender, hair colour and a list of potential emotions (with a score for each) – the Face Service can detect/analyse multiple faces, so I hardcoded it to return the results from the first face detected

An example of the app in action can be found below – the screenshot is of the results page, and as you can see there is a reason that I’m not a front-end dev! I was most impressed by the fact that the Face Service thinks that I’m 8 years younger than I actually am 😊. It also correctly detected my emotion (smiling).

The code for this app can be found at – Blog-Samples/Face Analysis at main · brendankarl/Blog-Samples (github.com).

To run this you’ll need:

  • A Raspberry Pi with attached camera (I used a Pi 4, but older models should work too)
  • An Azure subscription with an Azure Cognitive Services resource provisioned
  • Simply copy the code from the GitHub repo, update the url and key variables and execute the command below in a terminal (from the directory where the code is)
sudo python3 FaceAnalysis.py

Below is the FaceAnalysis.py code for reference.

from flask import Flask, render_template
from picamera import PiCamera
from time import sleep
import os
import random
import requests

app = Flask(__name__)

@app.route('/')
def button():
    return render_template("button.html") # Presents a HTML page with a button to take a picture

@app.route('/takepic')
def takepic():
    currentdir = os.getcwd()
    randomnumber = random.randint(1,100) # A random number is used in a query string when presenting the picture taken, to avoid web browser caching of the image
    camera = PiCamera()
    camera.start_preview()
    sleep(2)
    camera.capture(str(currentdir) + "/static/image.jpg") # Take a pic and store it in the static directory used by Flask
    camera.close()
    url = "https://uksouth.api.cognitive.microsoft.com/face/v1.0/detect" # Replace with the Azure Cognitive Services endpoint for the Face API (depends on the region deployed to)
    key = "" # Azure Cognitive Services key
    image_path = str(currentdir) + "/static/image.jpg"
    image_data = open(image_path, "rb").read()
    headers = {"Ocp-Apim-Subscription-Key" : key, 'Content-Type': 'application/octet-stream'}
    params = {
    'returnFaceId': 'false',
    'returnFaceLandmarks': 'false',
    'returnFaceAttributes': 'age,gender,headPose,smile,facialHair,glasses,emotion,hair,makeup,occlusion,accessories,blur,exposure,noise',
    }
    r = requests.post(url, headers=headers, params=params, data=image_data) # Submit to the Azure Cognitive Services Face API
    age = r.json()[0]["faceAttributes"]["age"] # The age of the first face
    gender = r.json()[0]["faceAttributes"]["gender"] # The gender of the first face
    haircolor = r.json()[0]["faceAttributes"]["hair"]["hairColor"][0]["color"] # The most likely hair colour of the first face
    emotions = r.json()[0]["faceAttributes"]["emotion"] # The emotions of the first face (a dictionary of emotion names to confidence scores)
    return render_template("FaceAnalysis.html", age=age, gender=gender, haircolor=haircolor, emotions=emotions, number=randomnumber) # Pass the results to FaceAnalysis.html, which presents the output and the pic taken to the user

if __name__ == "__main__":
    app.run(port=80, host='0.0.0.0')
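
One thing worth noting: the emotion attribute comes back as a dictionary of emotion names to confidence scores, so if you only wanted the single most likely emotion rather than the full list, a quick sketch:

# emotions as returned by the Face API, e.g.:
emotions = {"anger": 0.0, "neutral": 0.03, "happiness": 0.97}
top_emotion = max(emotions, key=emotions.get)  # "happiness"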

Creating a Mood Detector using Lobe and a Raspberry Pi

I’ve recently been experimenting with Lobe, a remarkable app that democratizes AI by providing the ability to build a Machine Learning (ML) model in less than ten minutes – the beauty is that this does not require any ML or coding experience. You can find out more about it at Lobe Tour | Machine Learning Made Easy.

I’ve always been really interested in self-improvement and understanding more about myself. One aspect that really intrigues me is my mood throughout the workday – this can go from elation to despair, and I’ve never quite figured out what the key drivers are (although I do have some ideas).

My love of overcomplicating the simple led me to develop an application to record my mood throughout the day with my Raspberry Pi 4 and its camera. The plan was for Lobe to analyse my mood using pictures captured by the Pi camera.

The Pi and its camera were already sat on my desk staring at me, so perfectly placed.

I wanted to be able to take a picture of myself using the Pi, have Lobe recognise my mood and log the mood along with date/time, then later I could analyse this data for specific patterns, correlating with my work calendar for additional insight. I wanted to know – is it just me having a case of the Mondays or are there specific times of the day, activities or projects that drive my mood?

To get started I headed over to www.lobe.ai, downloaded the Windows app (it’s also available for Mac) and used it to take some pictures of me in two moods (positive = thumb up / negative = thumb down). I took the pictures using the webcam attached to my Windows 10 device, then tagged the images and let Lobe work its magic on training an ML model.

I then selected Use and was able to evaluate the model in real time (with a surprising level of accuracy!). Once I was happy with everything, I exported the model as TensorFlow Lite – the best option for a Raspberry Pi.

I then copied the TensorFlow Lite model (which is basically a folder with a bunch of files within) to my Raspberry Pi. The next step was to install Lobe for Python on the Pi by running the following:

wget https://raw.githubusercontent.com/lobe/lobe-python/master/scripts/lobe-rpi-install.sh
sudo bash lobe-rpi-install.sh

With everything up and running, I used the sample Python script available here to test Lobe with the model I had just created, using some sample images I had. This worked, so I moved on to creating a Python-based web application using the Flask framework.
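
That test essentially boils down to a few lines like the below (the model folder and image names here are illustrative):

from lobe import ImageModel

model = ImageModel.load("Mood TFLite")        # folder exported from Lobe
result = model.predict_from_file("test.jpg")  # any sample image
print(result.prediction)                      # "Positive" or "Negative"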

Here is the finished app in all its glory! All I have to do is launch the site and click Capture Mood; the Pi camera then takes a pic, runs it through the ML model created using Lobe and confirms the mood detected (along with a button to capture the mood again). In the background it also writes the detected mood, date and time to a CSV file for later analysis.

Below is an example of the CSV output – that ten minutes sure was a real rollercoaster of emotions 😂.

This is obviously quite rudimentary; I need to extend the model to detect additional moods. However, it was a useful exercise in getting to grips with Lobe and Flask.

The full solution can be found here (minus the ML model) – to save you a click, below is the Python code (MoodDetector.py):

from time import sleep
from picamera import PiCamera
from lobe import ImageModel
from flask import Flask, redirect, url_for, request, render_template
import csv
import datetime
app = Flask(__name__)

@app.route('/')
def button():
    return render_template('button.html') # Display the capture mood button, when clicked redirect to /capturemood

@app.route('/capturemood') # Take a pic, analyse it and write the output to HTML and CSV
def capturemood():
    camera = PiCamera()
    camera.start_preview()
    sleep(2)
    camera.capture('mood.jpg') # Take picture using Raspberry Pi camera
    camera.close()
    model = ImageModel.load("Mood TFLite") # Load the ML model created using Lobe
    result = model.predict_from_file("mood.jpg") # Predict the mood of the mood.jpg pic just taken 
    now = datetime.datetime.now()
    date = now.strftime("%x")
    time = now.strftime("%X")
    moodCSV = open("Mood.csv", "a")
    moodCSVWriter = csv.writer(moodCSV) 
    moodCSVWriter.writerow([date,time,str(result.prediction)]) # Write the date, time and mood prediction to the Mood.csv file
    moodCSV.close()
    #Vary the HTML output depending on whether the prediction is positive or negative.
    if str(result.prediction) == "Negative": 
        return """<div class="buttons"><p>"Mood is Negative"</p>
        <a href='/capturemood'><input type='button' value='Capture Mood'></a></div>"""
    elif str(result.prediction) == "Positive":
        return """<div class="buttons"><p>"Mood is Positive"</p>
        <a href='/capturemood'><input type='button' value='Capture Mood'></a></div>"""
if __name__ == "__main__":
    app.run(port=80,host='0.0.0.0')

…and here is the supporting render template that I created (button.html).

<html>
<body>
<div style='text-align:center'>
    <a href='/capturemood'><input type='button' value='Capture Mood' align='Center'></a>
</div>
</body>
</html>