• Retrieving my electricity usage with the Octopus Energy API using Python 🐙

    I’ve recently switched my energy supplier to Octopus, the main reason being their super-cheap overnight electricity rates, which will save me lots of money charging my EV 💷.

    I noticed that they had a developer API and, ever the tinkerer, I thought I’d take a closer look. Their documentation is really extensive; however, their examples all use curl and I wanted to have a play with Python (using the Requests module). I ran into a couple of issues, so I thought I’d document them to help others (although most likely my future self when I’ve forgotten all of this 😂).

    Issue 1 – authenticating to the API using a key

    The API uses HTTP basic auth (with a key that is available on the API access page). After much searching, I found the equivalent of curl’s -u parameter, which enabled me to authenticate successfully using the key. The trick was to import HTTPBasicAuth with the following command:

    from requests.auth import HTTPBasicAuth
    

    Then, when making the request to the API, I used the following syntax, which passes the API key (the key variable) as the username, with a blank password (denoted by “”).

    request = requests.get(url,auth=HTTPBasicAuth(key,""))
    

    Issue 2 – formatting the date for period_from correctly

    The API allows you to pass it a period_from parameter, which is useful for retrieving your energy consumption from a specific date onwards. In my use-case, I wanted to see my consumption for the previous 7 days. I achieved this using the following:

    date7daysago = (datetime.datetime.now() - datetime.timedelta(days=7)).isoformat()
    
    

    Pulling all of this together, I created the script below, which connects to the developer API, retrieves my electricity consumption for the previous 7 days (grouped by day) and outputs it to the console. If you’d like to use this, you’ll need to update the key, MPAN and serial variables, all of which are listed on this page (if you are a customer, of course!).

    import requests
    from requests.auth import HTTPBasicAuth
    import datetime
    date7daysago = (datetime.datetime.now() - datetime.timedelta(days=7)).isoformat() # calculate the date 7 days ago
    key = "KEY"
    MPAN = "MPAN"
    serial = "SERIAL"
    baseurl = "https://api.octopus.energy/v1/"
    url = baseurl + "electricity-meter-points/" + MPAN + "/meters/" + serial + "/consumption" + "?period_from=" + date7daysago + "&group_by=" + "day"
    request = requests.get(url,auth=HTTPBasicAuth(key,""))
    print(request.json()) # print the raw JSON response to the console
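
    As an aside, the query string doesn’t have to be built by concatenation – the standard library can do it (and requests can do the same via its params argument). A small sketch using the same parameter names as above, with a made-up timestamp:

```python
from urllib.parse import urlencode

# Build the consumption query string; urlencode also escapes the
# colons in the ISO-8601 timestamp for us.
params = {"period_from": "2023-03-01T12:00:00", "group_by": "day"}
print("?" + urlencode(params))  # ?period_from=2023-03-01T12%3A00%3A00&group_by=day
```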
    

    Here is the output of the script – you may notice that it doesn’t include 7 days’ worth of data; that is because I haven’t been a customer for that long.

    To make it a little easier to read, I added the following to the script, which prints out the date and consumption:

    consumption = request.json()
    for result in consumption["results"]:
        print(str(result["interval_start"].split("T")[0]) + " : " + str(result["consumption"]))
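
    As a small extension, the same results list can be totalled up. A sketch assuming the JSON shape used above – the sample values below are made up:

```python
def total_consumption(results):
    """Sum the consumption values from the API's results list."""
    return round(sum(r["consumption"] for r in results), 3)

# Hypothetical sample in the shape returned by the consumption endpoint:
sample = [
    {"interval_start": "2023-03-01T00:00:00Z", "consumption": 8.5},
    {"interval_start": "2023-03-02T00:00:00Z", "consumption": 10.25},
]
print(total_consumption(sample))  # 18.75
```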
    

    Based on my previous escapades with developer APIs for consumer services, I’m sure that I’ll be writing an Alexa skill for this next 🤖.

  • Unable to set the display resolution on a Raspberry Pi running RetroPie 🎮

    I have a Raspberry Pi 400, which I run RetroPie on – if you are into retro gaming and have a Pi, I cannot recommend this enough! I have it attached to a 4K TV using HDMI, and I needed to reduce the resolution to 1920 x 1080 (as I had some issues with one of the emulators running at 4K). I’d usually change the resolution using raspi-config.

    I ran into an issue with raspi-config, whereby the option to change the resolution within Display Options > Resolution was acting weirdly – it was dropping me back to the main menu when I selected Resolution. As RetroPie doesn’t have the Raspberry Pi OS GUI, I wasn’t sure what other options I had to change the resolution.

    It turns out this wasn’t as difficult as I thought: I just needed to edit /boot/config.txt. I ran the following command in a terminal (via SSH):

    sudo nano /boot/config.txt
    

    Within this file there were two lines I needed to un-comment to override the current configuration: hdmi_group and hdmi_mode. I set hdmi_group to 1 (which means the device is connected to a TV) and hdmi_mode to 16, which equates to 1080p @ 60Hz – a full reference for the various modes can be found here.
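
    For reference, once un-commented and edited, the two lines in my /boot/config.txt looked like this:

```ini
# hdmi_group 1 = CEA (TV) timings; hdmi_mode 16 = 1080p @ 60Hz
hdmi_group=1
hdmi_mode=16
```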

    I gave the Pi a reboot and voila, I had glorious 1080p resolution 📺.

  • Creating a Python Web App in Azure using a single command 🤯

    Azure continues to amaze me, I’ve been playing around with Azure Web Apps recently and was astounded at the simplicity of creating a new Web App and deploying code to it 😲.

    Using the one-liner below, I was able to create a new Azure Web App (with all of the necessary pre-reqs, such as a resource group and App Service plan) AND deploy my code!

    I used the Azure CLI to do this, running on my Windows 11 machine. To install the Azure CLI, I used the following winget command within PowerShell:

    winget install -e --id Microsoft.AzureCLI
    

    Once this had been installed, I used az login to log in to my Azure subscription and then ran the command below to provision the Web App and deploy the code (a Python Flask application). The command was run directly from the folder containing the code for my Web App (which displays the video games within my collection) and did the following:

    • Creates a Resource Group (with an auto-generated name)
    • Creates an App Service Plan (with an auto-generated name)
    • Creates a Web App with the name specified by -n (GamesWebApp)
    • Creates all of these resources within the UK South region, denoted by -l
    • Uses the free (F1) SKU for Azure App Service

    az webapp up --sku F1 -n "GamesWebApp" -l uksouth
    

    Once the command completed, the following was output:

    I browsed to https://gameswebapp.azurewebsites.net to see my Web App in action (as you can see, I’m no front-end dev 😂)

    Just in case you are interested, here are the PS1 games in my collection.

  • Using Python to read data from an Excel file

    I recently catalogued my retro gaming collection and took the high-tech approach of using Excel to do this. I then decided to over-engineer this further and store the data in Azure Table storage… however, I didn’t want to manually re-key all of the games!

    I took a look at options for how I could automate reading an Excel file using Python – my plan being to write a script that would extract the data from the Excel file and then write this to Azure Table storage.

    My Excel file is pretty basic with a sheet for each of the systems I own, within each sheet is a list of the games for that system:

    Here is how I eventually figured out how to read data from an Excel file using Python…

    Step 1 – Install the pre-req modules

    For this I needed to install two modules – pandas and openpyxl. I ran the following commands in a terminal to do this:

    pip install pandas
    pip install openpyxl
    

    Step 2 – Create the script!

    I wrote the script below (which took me far longer than I’d like to admit!), which does the following:

    • Obtains the names of each sheet within the Excel file – remember I have a separate sheet for each games console.
    • Loops through each of the sheets, opening each sheet individually.
    • For each sheet, it iterates through each row in the sheet and prints the row (game).

    import pandas as pd # pandas provides the Excel reading functions used below
    excelfilepath = 'C:\\Users\\brend\\OneDrive\\Documents\\Retro Games Collection.xlsx' # Define the path to the Excel file to read
    excel = pd.ExcelFile(excelfilepath)
    sheets = excel.sheet_names # Read the names of each of the sheets within the Excel file
    
    for sheet in sheets: # Loop through each sheet
        print("")
        print(sheet)
        print("----------")
        excel = pd.read_excel(excelfilepath,header = None, sheet_name= sheet) # Open the Excel file and the specific sheet within the sheets list
        i = 0
        while i < len(excel[0]): # len(excel[0]) is the number of rows in the sheet; loop over every row index
            print(excel[0][i]) # Print the row from the specific sheet using the index i
            i += 1
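
    As an alternative, pandas can load every sheet in one call by passing sheet_name=None, which returns a dict mapping sheet names to DataFrames. A sketch of the same loop in that style – the demo dict below is a stand-in for my real spreadsheet:

```python
import pandas as pd

def list_games(sheets):
    """Flatten a {sheet name: DataFrame} dict into printable lines.

    `sheets` has the shape returned by
    pd.read_excel(path, header=None, sheet_name=None).
    """
    lines = []
    for name, df in sheets.items():
        lines.append(name)
        for game in df[0]:  # column 0 holds the game names
            lines.append(str(game))
    return lines

# Hypothetical stand-in data for the real Excel file:
demo = {"GB": pd.DataFrame(["Super Mario Land", "Tetris"])}
for line in list_games(demo):
    print(line)
```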
    

    Here is what it looks like in action:

    The next step for me is to update the while loop so that it writes each game to Azure Table storage. I’ve written about how to do that using Python here.

  • Using Python to write data to Azure Table storage

    I’m in the process of writing a Python (Flask) web app that will list all of the games in my retro gaming collection (and allow me to add/edit them too) 🕹️.

    My list of games is stored within Azure Table storage (I really love to over-engineer things!), so I needed to figure out how to add and query data within Azure Table storage using Python 🐍.

    Step 1 – Install the Python module for Azure Table storage

    First things first – I needed to install the Python module for Azure Table storage. I did this using the following command from a terminal:

    pip install azure-data-tables
    

    Step 2 – Connecting to the Storage Account

    I then needed to connect to my Azure Storage account. I used the following to do this:

    from azure.core.credentials import AzureNamedKeyCredential # needed for the credential below
    accountname = "brendgstorage"
    key = "KEY"
    endpoint = "https://brendgstorage.table.core.windows.net"
    credential = AzureNamedKeyCredential(accountname,key)
    

    The key thing is not to specify the name of the table as part of the endpoint URL. When I did, I could still add entries to the table, but I was unable to query it and received a cryptic error (which I wasted a lot of time figuring out).

    To keep things simple, I used an access key to connect to the storage account; I copied the key directly from the Azure Portal.

    I also retrieved the endpoint URL from the portal.

    Step 3 – Add an entry to the Azure table “games”

    I first needed to connect to the table “games”. I did this with the following commands:

    from azure.data.tables import TableServiceClient # client for the Table service
    service = TableServiceClient(endpoint=endpoint, credential=credential)
    gamestable = service.get_table_client("games")
    

    I then defined the game (entity) to add to the table:

    entity = {
        'PartitionKey': '1',
        'RowKey': 'Super Mario Land',
        'System': 'GB',
    }
    

    As this isn’t going to be a large table (<1000 rows), I opted to use a single PartitionKey. The RowKey is the name of the game, and I defined a new field named System, which records the system that the game is for. In the example above, this is Super Mario Land on the Nintendo Game Boy.

    I could then add the game (entity) to the table using the following:

    gamestable.create_entity(entity)
    

    Step 4 – Verify that the game was added

    I then wrote a query to return all games within the table, to verify that the game had been successfully added:

    games = gamestable.query_entities(query_filter="PartitionKey eq '1'")
    for game in games:
        print(game["RowKey"])
    

    This outputs the RowKey (game name) for every game listed in Partition 1 – as I only have a single partition, this returns everything:

    Step 5 – Querying for all games from a specific system

    Here is an alternative query that lists all games from a specific system.

    system = "GB"
    games = gamestable.query_entities(query_filter=f"System eq '{system}'")
    for game in games:
        print(game["RowKey"])
    

    The name of the system to query is held within the system variable.
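
    One thing to watch with filters built from strings: a single quote in the value (think “Baldur’s Gate” as a RowKey) would break the filter. OData expects embedded single quotes to be doubled, which a small helper like this sketch can handle:

```python
def odata_quote(value):
    """Wrap a string as an OData literal, doubling any single quotes."""
    return "'" + value.replace("'", "''") + "'"

print("System eq " + odata_quote("GB"))             # System eq 'GB'
print("RowKey eq " + odata_quote("Baldur's Gate"))  # RowKey eq 'Baldur''s Gate'
```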

    The next step for me is to write a script that takes the Excel file containing a list of all my games and automagically adds them to the table.

    The snippets above can be found on GitHub.

  • Running a web browser within a terminal 🌐

    I stumbled across an interesting project on GitHub recently – Carbonyl is a Chromium based web browser that is built to run in a terminal!

    It can be run either via npm or Docker. I opted to take Carbonyl for a spin using Docker (I run Docker Desktop on my Windows 11 machine). It was super-simple to run using the following command from a terminal.

    docker run --rm -ti fathyb/carbonyl <URL to access>
    

    I used the following command to access this blog:

    docker run --rm -ti fathyb/carbonyl https://brendg.co.uk
    

    Below, you can see a short video of this in action!

    Once finished, hit CTRL+C, which will exit the container and remove it from Docker.

    I have zero use-case for this; however, it is a lot of fun 😀.

  • Calling the OpenAI API with Python 🤖🐍

    The first time I used ChatGPT I was absolutely astounded by this powerful tool and the possibilities seemed endless. In typical fashion, once I found out that an API was available, I decided to have a poke around with it using Python 😀

    I was pleasantly surprised by the simplicity of calling the API and put together a sample that uses the recently released gpt-3.5-turbo model, providing the ability to fire off a question and see the response from OpenAI within a terminal.

    Step 1 – Obtain OpenAI API key

    After creating an account with OpenAI, your API key can be obtained using this URL – https://platform.openai.com/account/api-keys

    Step 2 – Install the OpenAI Python module

    I launched a terminal and ran the following to install the OpenAI Python module.

    pip install openai
    

    Step 3 – It’s showtime!

    Here is the script:

    import openai
    # Set the OpenAI API key, replace KEY with your actual key
    openai.api_key = "KEY"
    # Set the model to be used
    engine = "gpt-3.5-turbo"
    # Prompt for a question
    question = input("What's your question?: ")
    # Submit the question, using the default values for everything - https://platform.openai.com/docs/api-reference/completions
    response = openai.ChatCompletion.create(
        model= engine,
        messages=[
            {"role": "user", "content": question},
        ],
    )
    print(response['choices'][0]['message']['content'])
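
    One hedge worth mentioning: rather than hardcoding the key in the script, it can be read from an environment variable. A minimal sketch – the variable name OPENAI_API_KEY is a common convention, not a requirement:

```python
import os

def load_key(env="OPENAI_API_KEY"):
    """Fetch the API key from the environment, or None if unset."""
    return os.environ.get(env)

os.environ["OPENAI_API_KEY"] = "KEY"  # stand-in value for this demo only
print(load_key())  # KEY
```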
    

    Here is the script in action – I asked it to create a Python script for me to calculate the date 100 days from now and it didn’t let me down 😀.

  • Playing around with document summarization 📃 in Azure Cognitive Services 🧠

    Next up in my personal backlog (yes, I am that sad) was to play around with the document summarization capabilities included within Azure Cognitive Services for Language.

    But what is this, you may ask?

    Document summarization uses natural language processing techniques to generate a summary for documents. Extractive summarization extracts sentences that collectively represent the most important or relevant information within the original content. These features are designed to shorten content that could be considered too long to read – Taken from here.

    I had a quick play around with document summarization (using this code sample for inspiration) and put together the Python script below (also available here), which does the following:

    • Takes a string of text and determines how many sentences are in this.
    • Passes this to the document summarization endpoint to summarize. Requesting a summary that includes no more than half of the number of sentences in the original string provided.
    • For example, if 6 sentences are passed to the endpoint for summarization, the summary should include no more than 3 sentences.
    • Prints the summarized output.

    import requests
    import json
    import time
    
    text = """The Sega Mega Drive, also known as the Sega Genesis in North America, was a popular video game console that was first released in Japan in 1988. 
    It was Sega's third home console and was designed to compete with Nintendo's popular NES and SNES consoles. 
    The Mega Drive was released in North America in 1989 and quickly gained a strong following among gamers thanks to its impressive graphics, sound quality, and large library of games. 
    Some of the most popular games for the console include Sonic the Hedgehog, Streets of Rage, and Phantasy Star. 
    The Mega Drive remained in production until 1997 and sold over 40 million units worldwide, cementing its place as one of the most beloved video game consoles of all time.""" # Text to be summarized
    
    sentences = len(text.split(".")) // 2 # calculate how many sentences there are to be summarized (from the "text" variable) and halve it – if there are 6 sentences to be summarized, the summary will include no more than 3. Integer division keeps sentenceCount a whole number.
    
    url = "https://ENDPOINT.cognitiveservices.azure.com/language/analyze-text/jobs?api-version=2022-10-01-preview" # Replace ENDPOINT with the relevant endpoint
    key = "KEY" # Key for Azure Cognitive Services
    headers = {"Ocp-Apim-Subscription-Key" : key}
    payload = {
      "displayName": "Summarizer",
      "analysisInput": {
        "documents": [
          {
            "id": "1",
            "language": "en",
            "text": text
          }
        ]
      },
      "tasks": [
        {
          "kind": "ExtractiveSummarization",
          "taskName": "Summarizer",
          "parameters": {
            "sentenceCount": sentences
          }
        }
      ]
    }
    
    r = requests.post(url, headers = headers, data = json.dumps(payload))
    results = r.headers["operation-location"]
    time.sleep(10) # Being super lazy here and putting in a sleep, rather than polling for the results to see when they are available!
    r = requests.get(results,headers = headers)
    for s in r.json()["tasks"]["items"][0]["results"]["documents"][0]["sentences"]:
        print(s["text"])
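
    Rather than the lazy sleep above, the operation-location URL is meant to be polled until the job reports a terminal status. A sketch of that loop, with a stand-in callable in place of the real GET request:

```python
import time

def wait_for_job(get_status, timeout=60, interval=2):
    """Poll a status callable until the job finishes or the timeout hits.

    get_status stands in for GETting the operation-location URL and
    reading the "status" field from its JSON body.
    """
    waited = 0
    while waited <= timeout:
        status = get_status()
        if status in ("succeeded", "failed"):
            return status
        time.sleep(interval)
        waited += interval
    raise TimeoutError("job did not finish in time")

# Simulated job that succeeds on the third poll:
states = iter(["notStarted", "running", "succeeded"])
print(wait_for_job(lambda: next(states), interval=0))  # succeeded
```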
    

    I used ChatGPT to generate the text I used for testing (on one of my favourite subjects I may add!):

    Here is the summary that it provided – it didn’t do too bad a job, did it?

    I may start using this to summarize some of the super-long work e-mails I receive 😎.

  • Conditional logic and Power Apps buttons 📱

    I’ve been creating a Power App and needed to add some conditional logic to a button. The app I’ve been working on allows users to browse a list of registered mentors, view their profiles and submit a request for a mentoring session with a mentor using a button.

    Within the app, I wanted the button on the mentor profile page that is used to request a session to display “Request {mentor first name} as a mentor”. The challenge was that for mentors with longer first names (>9 characters), the text wrapped and it looked ugly. I decided to add some logic to change the message displayed based on the length of the mentor’s first name, basically:

    If the mentor’s first name is <10 characters, display “Request {first name} as mentor”; otherwise, display “Request as Mentor”.

    The other small challenge was that the mentor’s full name is held in a variable imaginatively named MentorName, so I first needed to split the full name to pull out the first name. I achieved this with the Split function, using a space ” ” as the delimiter, then returned the first item from the resultant table output by Split (which is the first name) using the First function.

    This is then wrapped in an If function, which uses the Len function to check the number of characters in the first name: if this is less than 10, it returns the name; otherwise it returns nothing.

    Below you can see an example of this in action, along with the Power Fx code. In this case, MentorName = “Harrison Griffin”.

    "Request " & (If(Len(First(Split(MentorName," ")).Result)<10,(First(Split(MentorName," ")).Result),""))&" as Mentor"
    

    The second screenshot shows the behaviour with a first name that is greater than 9 characters; in this case, MentorName = “Christopher Griffin”.

  • Taking a photo 📸 with a Power App and saving to OneDrive ☁️

    I’m in the process of preparing for some Power Platform exams so have been getting hands-on with Power Apps recently.

    I created this video which steps through the process of creating a simple Power App that uses the camera on the device to take a photo and then saves this to OneDrive.