Azure continues to amaze me. I’ve been playing around with Azure Web Apps recently and was astounded at how simple it is to create a new Web App and deploy code to it.
Using the one-liner below I was able to create a new Azure Web App (with all of the necessary prerequisites, such as a resource group and an App Service plan) AND deploy my code!
I used the Azure CLI on my Windows 11 machine to do this. To install the Azure CLI, I used the following winget command within PowerShell:
winget install -e --id Microsoft.AzureCLI
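To verify the install, the CLI can print its version (a quick sanity check rather than a required step):
az --version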
Once this had been installed, I used az login to log in to my Azure subscription and then ran the command below to provision the Web App and deploy the code (a Python Flask application). The command was run directly from the folder containing the code for my Web App (which displays the video games within my collection) and did the following:
Creates a Resource Group (with an auto-generated name)
Creates an App Service Plan (with an auto-generated name)
Creates a Web App with the name specified by -n (GamesWebApp)
Creates all of the above resources within the UK South region, denoted by -l
Uses the free (F1) SKU for Azure App Service
az webapp up --sku F1 -n "GamesWebApp" -l uksouth
Once the command completed, the following was output:
I recently catalogued my retro gaming collection and took the high-tech approach of using Excel to do this. I then decided to over-engineer things further and store the data in Azure Table storage… however, I didn’t want to manually re-key all of the games!
I took a look at options for how I could automate reading an Excel file using Python – my plan being to write a script that would extract the data from the Excel file and then write this to Azure Table storage.
My Excel file is pretty basic, with a sheet for each of the systems I own; within each sheet is a list of the games for that system:
Here is how I eventually figured out how to read data from an Excel file using Python…
Step 1 – Install the pre-req modules
For this I needed to install two modules – pandas and openpyxl. I ran the following commands in a terminal to do this:
pip install pandas
pip install openpyxl
Step 2 – Create the script!
I wrote the script below (which took me far longer than I’d like to admit!), which does the following:
Obtains the names of each sheet within the Excel file – remember I have a separate sheet for each games console.
Loops through each of the sheets, opening each sheet individually.
For each sheet, iterates through each row and prints the row (game).
import pandas as pd

excelfilepath = 'C:\\Users\\brend\\OneDrive\\Documents\\Retro Games Collection.xlsx' # Define the path to the Excel file to read
excel = pd.ExcelFile(excelfilepath)
sheets = excel.sheet_names # Read the names of each of the sheets within the Excel file
for sheet in sheets: # Loop through each sheet
    print("")
    print(sheet)
    print("----------")
    games = pd.read_excel(excelfilepath, header=None, sheet_name=sheet) # Open the specific sheet from the sheets list
    i = 0
    while i < len(games[0]): # len(games[0]) is the number of rows in the first column, so this loops over indexes 0 to len-1
        print(games[0][i]) # Print the row (game) from the specific sheet using the index i
        i += 1
Here is what it looks like in action:
The next step for me is to update the while loop so that it writes each game to Azure Table storage. I’ve written about how to do that using Python here.
I’m in the process of writing a Python (Flask) web app that will list all of the games in my retro gaming collection (and also allow me to add/edit them too).
My list of games is stored within Azure Table storage (I really love to over-engineer things!), so I needed to figure out how to add/query data within Azure Table storage using Python.
Step 1 – Install the Python module for Azure Table storage
First things first – I needed to install the Python module for Azure Table storage. I did this using the following command from a terminal:
pip install azure-data-tables
Step 2 – Connecting to the Storage Account
I then needed to connect to my Azure Storage account; the snippet below shows roughly how I set this up (STORAGEACCOUNT and KEY are placeholders for the storage account name and access key):
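from azure.data.tables import TableServiceClient
from azure.core.credentials import AzureNamedKeyCredential

endpoint = "https://STORAGEACCOUNT.table.core.windows.net" # The endpoint URL from the portal - do not include the table name here!
credential = AzureNamedKeyCredential("STORAGEACCOUNT", "KEY") # The storage account name and an access key from the Azure portal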
The key thing is not to specify the name of the table as part of the endpoint URL; when I did, it would allow me to add entries to the table, but I was unable to query the table and received a cryptic error (which I wasted a lot of time figuring out).
To keep things simple, I used an access key to connect to the storage account; I copied the key directly from the Azure Portal.
I also retrieved the endpoint URL from the portal.
Step 3 – Add an entry to the Azure table “games”
First, I needed to connect to the table “games”, which I did with the following commands:
service = TableServiceClient(endpoint=endpoint, credential=credential)
gamestable = service.get_table_client("games")
I then defined the game (entity) to add to the table – it looked something like this:
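entity = {
    "PartitionKey": "1", # A single partition is fine for a table this small
    "RowKey": "Super Mario Land", # The name of the game
    "System": "GB" # The system that the game is for
}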
As this isn’t going to be a large table (<1,000 rows), I opted to use a single PartitionKey. The RowKey is the name of the game, and I defined a new field named System, which is used to define the system that the game is for. In the example above, this was Super Mario Land on the Nintendo Game Boy.
I could then add the game (entity) to the table using the following:
gamestable.create_entity(entity)
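One thing to note: create_entity raises an error if an entity with the same PartitionKey and RowKey already exists, so if you want re-runs to overwrite rather than fail, the table client also provides upsert_entity:
gamestable.upsert_entity(entity)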
Step 4 – Verify that the game was added
I then wrote a query that returns all games within the table, to verify that the game had been successfully added:
games = gamestable.query_entities(query_filter="PartitionKey eq '1'")
for game in games:
    print(game["RowKey"])
This outputs the RowKey (game name) for every game listed in Partition 1 – as I only have a single partition, this returns everything:
Step 5 – Querying for all games from a specific system
Here is an alternative query that lists all games from a specific system.
system = "GB"
games = gamestable.query_entities(query_filter=f"System eq '{system}'") # Build the OData filter with an f-string
for game in games:
    print(game["RowKey"])
The name of the system to query is held within the system variable.
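As a side note, the azure-data-tables SDK can also substitute values into the filter for you via query parameters, which avoids building the string by hand; a small variation on the query above:
games = gamestable.query_entities(
    query_filter="System eq @system",
    parameters={"system": system} # The SDK substitutes @system with the value
)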
Next step for me is to write a script that takes the Excel file that contains a list of all my games and automagically adds them to the table – a rough sketch of what that could look like is below.
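Roughly, it could look something like this – combining the Excel-reading approach from my earlier post (using pandas’ sheet_name=None to read every sheet in one call) with the gamestable client from above, and assuming each sheet is named after its system abbreviation (e.g. GB):
import pandas as pd

excelfilepath = 'C:\\Users\\brend\\OneDrive\\Documents\\Retro Games Collection.xlsx'
allsheets = pd.read_excel(excelfilepath, header=None, sheet_name=None) # sheet_name=None returns a dict of {sheet name: DataFrame}
for sheet, games in allsheets.items():
    for game in games[0]: # The first column holds the game names
        entity = {
            "PartitionKey": "1", # Single partition, as before
            "RowKey": str(game), # The name of the game
            "System": sheet # Assumes the sheet name matches the system abbreviation
        }
        gamestable.upsert_entity(entity) # Upsert so the script can safely be re-run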
I stumbled across an interesting project on GitHub recently – Carbonyl is a Chromium based web browser that is built to run in a terminal!
It can be run either via npm or Docker. I opted to take Carbonyl for a spin using Docker (I run Docker Desktop on my Windows 11 machine). It was super simple to run using the following command from a terminal:
docker run --rm -ti fathyb/carbonyl <URL to access>
I used the following command to access this blog:
docker run --rm -ti fathyb/carbonyl https://brendg.co.uk
Below, you can see a short video of this in action!
Once finished, hit CTRL+C, which will exit the container and remove it from Docker.
I have zero use case for this; however, it is a lot of fun.
The first time I used ChatGPT I was absolutely astounded by this powerful tool, and the possibilities seemed endless. In typical fashion, once I found out that an API was available, I decided to have a poke around with it using Python.
I was pleasantly surprised at the simplicity of calling the API and put together a sample that uses the recently released gpt-3.5-turbo model and provides the ability to fire off a question and see the response from OpenAI within a terminal.
I launched a terminal and ran the following to install the OpenAI Python module.
pip install openai
Step 3 – It’s showtime!
Here is the script:
import openai

# Set the OpenAI API key - replace KEY with your actual key
openai.api_key = "KEY"
# Set the model to be used
engine = "gpt-3.5-turbo"
# Prompt for a question
question = input("What's your question?: ")
# Submit the question, using the default values for everything - https://platform.openai.com/docs/api-reference/completions
response = openai.ChatCompletion.create(
    model=engine,
    messages=[
        {"role": "user", "content": question},
    ],
)
print(response['choices'][0]['message']['content'])
Here is the script in action – I asked it to create a Python script for me to calculate the date 100 days from now, and it didn’t let me down.
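Because the messages parameter is a list, it can also carry conversation history. Here’s a rough sketch of a looping version of the same script (same openai module and engine as above) that appends each exchange to the history:
# Keep a running conversation by appending every exchange to the messages list
messages = []
while True:
    question = input("What's your question? (blank to quit): ")
    if not question:
        break
    messages.append({"role": "user", "content": question})
    response = openai.ChatCompletion.create(model=engine, messages=messages)
    answer = response['choices'][0]['message']['content']
    messages.append({"role": "assistant", "content": answer})
    print(answer)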
Next up in my personal backlog (yes, I am that sad) was to play around with the document summarization capabilities included within Azure Cognitive Services for Language.
But what is this, you may ask?
Document summarization uses natural language processing techniques to generate a summary for documents. Extractive summarization extracts sentences that collectively represent the most important or relevant information within the original content. These features are designed to shorten content that could be considered too long to read – Taken from here.
I had a quick play around with document summarization (using this code sample for inspiration) and put together the Python script below (also available here), which does the following:
Takes a string of text and determines how many sentences it contains.
Passes this to the document summarization endpoint, requesting a summary that includes no more than half the number of sentences in the original string.
For example, if 6 sentences are passed to the endpoint for summarization, the summary should include no more than 3 sentences.
Prints the summarized output.
import requests
import json
import time

text = """The Sega Mega Drive, also known as the Sega Genesis in North America, was a popular video game console that was first released in Japan in 1988.
It was Sega's third home console and was designed to compete with Nintendo's popular NES and SNES consoles.
The Mega Drive was released in North America in 1989 and quickly gained a strong following among gamers thanks to its impressive graphics, sound quality, and large library of games.
Some of the most popular games for the console include Sonic the Hedgehog, Streets of Rage, and Phantasy Star.
The Mega Drive remained in production until 1997 and sold over 40 million units worldwide, cementing its place as one of the most beloved video game consoles of all time.""" # Text to be summarized

sentences = len(text.split(".")) // 2 # Calculate how many sentences there are to be summarized (from the "text" variable) and halve it, so if there are 6 sentences the summary will include at most 3 sentences (integer division, as the API expects a whole number)
url = "https://ENDPOINT.cognitiveservices.azure.com/language/analyze-text/jobs?api-version=2022-10-01-preview" # Replace ENDPOINT with the relevant endpoint
key = "KEY" # Key for Azure Cognitive Services
headers = {"Ocp-Apim-Subscription-Key": key}
payload = {
    "displayName": "Summarizer",
    "analysisInput": {
        "documents": [
            {
                "id": "1",
                "language": "en",
                "text": text
            }
        ]
    },
    "tasks": [
        {
            "kind": "ExtractiveSummarization",
            "taskName": "Summarizer",
            "parameters": {
                "sentenceCount": sentences
            }
        }
    ]
}
r = requests.post(url, headers=headers, data=json.dumps(payload))
results = r.headers["operation-location"]
time.sleep(10) # Being super lazy here and putting in a sleep, rather than polling for the results to see when they are available!
r = requests.get(results, headers=headers)
for s in r.json()["tasks"]["items"][0]["results"]["documents"][0]["sentences"]:
    print(s["text"])
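If you’d rather not rely on the fixed sleep, the job can be polled until it reports a terminal status instead – a minimal sketch, assuming the same results URL and headers (the analyze-text jobs response includes a top-level status field):
# Poll the results URL until the job finishes, instead of a fixed sleep
status = "running"
while status not in ("succeeded", "failed"):
    time.sleep(1)
    r = requests.get(results, headers=headers)
    status = r.json()["status"]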
I used ChatGPT to generate the text I used for testing (on one of my favourite subjects I may add!):
Here is the summary that it provided – it didn’t do too bad a job, did it?
I may start using this to summarize some of the super-long work e-mails I receive.
I’ve been creating a Power App and needed to add some conditional logic to a button. The app I’ve been working on allows users to browse a list of registered mentors, view their profiles, and submit a request for a mentoring session with a mentor using a button.
Within the app, I wanted the button on the mentor profile page that is used to request a session to display “Request {mentor first name} as a mentor”. The challenge I had is that for mentors with longer first names (>9 characters) the text would wrap and it looked ugly, so I decided to add some logic to change the message displayed based on the length of the mentor’s first name, basically:
If the mentor’s first name is <10 characters, display “Request {first name} as Mentor”; otherwise, display “Request as Mentor”.
The other small challenge is that the mentor’s full name is held in a variable imaginatively named MentorName, so I first needed to split the full name to pull out the first name. I achieved this with the Split function, using a space ” ” as the delimiter, and then used the First function to return the first item (the first name) from the table that Split outputs.
This is then wrapped in an If function, which uses the Len function to check the number of characters in the first name: if it is less than 10, the name is returned; otherwise, nothing is returned.
Below you can see an example of this in action, along with the Power Fx code. In this case MentorName = “Harrison Griffin”
"Request " & (If(Len(First(Split(MentorName," ")).Result)<10,(First(Split(MentorName," ")).Result),""))&" as Mentor"
The second screenshot shows the behaviour with a first name that is longer than 9 characters; in this case, MentorName = “Christopher Griffin”.
I’m in the process of preparing for some Power Platform exams so have been getting hands-on with Power Apps recently.
I created this video which steps through the process of creating a simple Power App that uses the camera on the device to take a photo and then saves this to OneDrive.
I’m about to record some demo videos and needed to set the resolution of the apps I will be recording to 1920 x 1080. There isn’t a straightforward way to do this out of the box with Windows (that I know of!). After much research I found the Python module PyGetWindow, which can do this.
After installing the module using pip install PyGetWindow, I put together the following script, which lists all of the currently open apps and then sets the resolution of Notepad to 1920 x 1080 (using the name of the window, taken from the list of open apps).
import pygetwindow
# Get all of the currently opened windows
windows = pygetwindow.getAllTitles()
# Print a list of the currently opened windows
for window in windows:
    print(window)
# Specify the name of the window to resize
notepad = pygetwindow.getWindowsWithTitle("Untitled - Notepad")[0]
# Resize the window
notepad.resizeTo(1920, 1080)
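If the window also needs to be in a predictable position for recording, PyGetWindow can move it too; for example, to the top-left corner of the screen:
# Move the window to the top-left corner
notepad.moveTo(0, 0)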
One of my hobbies is collecting video games, specifically retro games from the 80s and 90s.
My collection has grown over the years and it’s difficult for me to track what I own. On more than one occasion I’ve bought a game, only to later realise that I already owned it. Over the holidays I had a brainwave… why don’t I keep a list of the games that I have!
Rather than getting out a pen and paper to document my game collection (which would have been far simpler) I decided to use this as an excuse to play around with Azure Cognitive Services, specifically Computer Vision.
My plan was to take photos of my collection, pass these photos to Azure Cognitive Services to extract the text from them, and then write this to a file, which I’ll eventually put into a database or Excel (probably the latter).
I took a photo of some of my games (PS3 so not exactly retro!) and then set about writing a Python script that used the REST API endpoint for the Computer Vision service to submit the photo and extract any detected text.
Below is the script in all its glory, it does the following:
Submits the photo to the REST API endpoint for Computer Vision.
Stores the results URL returned by the API.
Polls the results URL until the analysis has completed.
Prints out each piece of text detected.
Writes each piece of text to a text file, but only if the text returned is longer than 5 characters – this filters out other text detected, such as PS3 or the game ID. Not exactly scientific, but it seems to do the trick! Hopefully I don’t have any games with 5 or fewer characters in their title.
import requests
import time
from io import BytesIO
import json
# Sets the endpoint and key for Azure Cognitive Services
url = "https://RESOURCENAME.cognitiveservices.azure.com/vision/v3.2/read/analyze"
key = "KEY"
# Sets the location of the photo to analyze and opens the file
image_path = "D:/Games.png"
image_data = open(image_path, "rb").read()
# Submits the photo ("D:/Games.png") to the REST API endpoint for Computer Vision
headers = {"Ocp-Apim-Subscription-Key" : key,'Content-Type': 'application/octet-stream'}
r = requests.post(url,headers = headers, data=image_data)
# Retrieves the results URL from the response
operation_url = r.headers["Operation-Location"]
# The recognized text isn't immediately available, so poll the results URL and wait for completion.
analysis = {}
poll = True
while poll:
    response_final = requests.get(operation_url, headers=headers)
    analysis = response_final.json()
    print(json.dumps(analysis, indent=4))
    time.sleep(1)
    if "analyzeResult" in analysis:
        poll = False
    if "status" in analysis and analysis['status'] == 'failed':
        poll = False
# Store the returned text in a list "lines" also print out each line to the console
lines = []
for line in analysis["analyzeResult"]["readResults"][0]["lines"]:
    print(line["text"])
    lines.append(line["text"])
# Create a new text file "games.txt" and write the text to the file
gameslist = open("D:/games.txt", "a")
for line in lines:
    if len(line) > 5: # This filters out other text detected, such as PS3 or the game ID - not exactly scientific, but it seems to do the trick! Hopefully I don't have any games with 5 or fewer characters in their name
        print(line)
        gameslist.write(line + "\n")
gameslist.close()
Here’s the text file that it produced. It’s not perfect, as ROBERT LUDLUM’S and BOURNECONSPIRACY are actually one and the same game rather than two separate games. It will be interesting to see how Azure Cognitive Services (and my script!) holds up to analyzing games from other systems – my earliest being an Amstrad CPC 6128 (my very first computer).