More test lab building… and more issues! This time I needed to build a Windows Server 2022 VM running Hyper-V hosted in Azure; the plan was to use this VM to host other VMs (known as nested virtualization). I provisioned a VM using one of the Azure VM SKUs that supports nested virtualization, however when I attempted to install Hyper-V on the VM it failed with the following error: “Hyper-V cannot be installed: The processor does not have the required virtualization capabilities”.
After much troubleshooting I eventually figured out what the problem was: when provisioning the VM I should have configured the Security type as Standard rather than Trusted launch virtual machines.
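If you provision via the Azure CLI rather than the portal, the security type can be set at creation time – a rough sketch below (the resource group, VM name, admin username and size are placeholders; pick any size that supports nested virtualization):

az vm create --resource-group lab-rg --name hyperv-host --image Win2022Datacenter --size Standard_D4s_v5 --security-type Standard --admin-username azureuser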
Re-creating the VM using this setting enabled me to install Hyper-V and enjoy some nested virtualization goodness.
I was recently building a lab environment for Microsoft Intune, and as part of this I needed to provision a Windows 11 machine to do some testing of Windows Autopilot. I decided to host this on Hyper-V (running on my Windows 11 desktop PC) rather than using a physical device to keep things simple (at least that was the idea!).
I ran into an issue during installation and received the following error message: “This PC doesn’t meet the minimum system requirements to install this version of Windows”.
To fix this I needed to enable TPM support within the settings for the VM; it’s also worth noting that the VM should be created as a Generation 2 VM.
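If you prefer PowerShell to the Hyper-V Manager GUI, something along these lines should enable the virtual TPM (the VM name is just an example, and a key protector has to be set before the TPM can be enabled):

# Set a local key protector for the VM (required before enabling the virtual TPM)
Set-VMKeyProtector -VMName "Win11" -NewLocalKeyProtector
# Enable the virtual TPM
Enable-VMTPM -VMName "Win11"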
Once I’d enabled this setting I was able to successfully install Windows 11.
I was recently working on an Azure deployment which used a Virtual WAN as the hub with a number of spoke Virtual Networks (VNets). Azure Bastion was to be deployed into one of the spoke VNets, and the plan was that this single instance of Azure Bastion would provide the ability to RDP/SSH into VMs hosted in the other spoke VNets within the environment (which had been connected to the Virtual WAN hub). This saved deploying Azure Bastion into each VNet – which could have been quite costly.
It turns out that when Azure Bastion is deployed into an environment that uses a Virtual WAN rather than VNet peering to connect VNets together, it cannot connect to VMs hosted in VNets outside of the VNet where Azure Bastion has been deployed unless it uses the Standard SKU and has IP-based connection enabled.
Re-provisioning the Azure Bastion to use Standard rather than Basic and enabling IP-based connection fixed this:
Once this had been done I was able to connect to VMs in other VNets; however, I needed to use the IP address to connect – the process of connecting via IP address is documented here.
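As a rough example, an IP-based connection can also be made from the Azure CLI using the bastion extension (the names and target IP below are placeholders):

az network bastion rdp --name lab-bastion --resource-group lab-rg --target-ip-address 10.1.0.4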
I needed to read the value of a SharePoint choice column within a Power Automate Flow and then do something dependent on the value – in this case I needed to send an e-mail if the Approval status column was changed to Approved for any of the items within the list.
Due to the way that choice fields are returned to Power Automate, I couldn’t use a simple condition that checks the value of the Approval status column and then sends an e-mail if this is equal to Approved.
I first needed to initialise a variable that reads the value of the Approval status column and then use this variable (ApprovalStatus) within the condition… as below:
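For illustration, the expression used to populate the variable looks something like the one below – note this is an assumption based on my list: it presumes the Flow fires on the When an item is created or modified trigger and that the column’s internal name is Approvalstatus (yours may differ):

triggerOutputs()?['body/Approvalstatus/Value']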
This Flow then sprang into life and started sending e-mails when an item was updated and the Approval status column was set to Approved.
I’ve previously used a Power Automate Flow to write responses from a Microsoft Forms survey directly to a SharePoint list. This makes it a little easier to analyse responses than using the native capabilities within Forms or exporting to an Excel file, particularly if you use Power BI, as you can connect directly to the SharePoint list to analyse the data collected.
I recently had a situation where I’d created a survey with a question that permitted multiple answers, example below:
The SharePoint list that the responses were written to was configured with a choice field for this question (covering each option within the question).
The problem I ran into is that when multiple answers were passed to the SharePoint list using the Flow, these were written as a single string and didn’t use the choice field correctly – which looked ugly and made filtering the data difficult.
With some minor tweaks to the Flow I was able to correctly pass the answers to SharePoint and use the choice field – here is how I did it:
Firstly, here’s the logical flow of the Flow (did you see what I did there?):
I needed to add three additional steps (actions) between Get response details and Create item:
1. Initialize Variable
This creates an array variable named AreaChoiceValue which we will populate later with the answers.
2. Compose
This takes the input from the question that allows multiple answers.
3. Apply to each
This uses a JSON function to retrieve the output from the Compose step above – the answer(s) to the question (see the sketch after the next step).
Finally, within this block we add Append to array variable using the following format/syntax, referencing the AreaChoiceValue variable created in Step 1:
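As a sketch of the two expressions involved (the step names below are assumptions – use whatever your Flow calls them): the Apply to each step takes json(outputs('Compose')) as its input, and the Append to array variable value takes the shape that SharePoint expects for choice columns, something like:

{
  "Value": "@{item()}"
}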
This loops through each of the answers to the question and adds them to the AreaChoiceValue array, which we can then reference in the SharePoint Create item action:
The choice value is then correctly populated in my SharePoint list:
If you have more than one question that permits multiple answers, you can repeat this within the same Flow (with different variables).
Hopefully this saves somebody some time and frustration.
I recently shared a script that I’d written that uses the Octopus Energy API to retrieve my electricity usage (in kWh) for the previous 7 days. This was useful, however I wanted to take it a step further and get it to output the actual cost per day too.
The one challenge I have is that I’m on the Octopus Go tariff, which provides cheaper electricity between the hours of 00:30-04:30 each day (perfect for charging my EV overnight). This means that it’s not quite as simple as multiplying usage in kWh by the price per kWh to calculate the daily cost, as the rate varies depending on the time of day – here are details of the tariff for additional context:
To add to this, the Octopus Energy app currently doesn’t support providing the daily cost for this tariff (which was the main reason for me writing this script):
I eventually figured out how to do this and have included the Python script below (with comments). A few things to note:
You will need to update the peak and off-peak rates (the offpeakrate and peakrate variables) as these can vary based on your location.
I haven’t included the daily standing charge in the calculation.
You can increase the number of days to report on by changing the numberofdays variable.
import requests
from requests.auth import HTTPBasicAuth
import datetime
# Set the peak and off-peak rates for Octopus Go
offpeakrate = 9.50
peakrate = 38.58
# The number of previous days to report on
numberofdays = 7
# Set the API key, meter MPAN and serial
key = "API Key"
MPAN = "MPAN"
serial = "Serial Number"
# Get today's date
today = datetime.date.today()
# Loop through the previous x number of days to report on (set by the "numberofdays" variable)
while numberofdays > 0:
    peakusage = 0
    offpeakusage = 0
    fromdate = (today - datetime.timedelta(days=numberofdays)).isoformat() # Get the from date
    todate = (today - datetime.timedelta(days=(numberofdays - 1))).isoformat() # Get the to date
    # Call the Octopus API for the date range
    baseurl = "https://api.octopus.energy/v1/"
    url = baseurl + "electricity-meter-points/" + MPAN + "/meters/" + serial + "/consumption" + "?period_from=" + fromdate + "&period_to=" + todate
    request = requests.get(url, auth=HTTPBasicAuth(key, ""))
    consumption = request.json()
    numberofdays -= 1 # Minus 1 from the number of days variable (the loop will stop when this hits 0)
    i = 0 # Used to index the results returned (48 results per day, one per 30 minutes, returned newest first)
    for result in consumption["results"]: # Loop through the results returned for the specified day and extract the peak and off-peak units consumed
        if i in range(39, 47): # Indexes of the eight half-hour off-peak slots (00:30-04:30) - range's end is exclusive
            offpeakusage = offpeakusage + result["consumption"]
        else:
            peakusage = peakusage + result["consumption"]
        i += 1
    # Calculate the peak / off-peak and total cost for the day in £ (rounded to 2 decimal places)
    peakcost = round((peakusage * peakrate / 100), 2)
    offpeakcost = round((offpeakusage * offpeakrate / 100), 2)
    totalcost = round((peakcost + offpeakcost), 2)
    # Print out the cost for the day
    print("Usage for " + fromdate)
    print("-Peak £" + str(peakcost))
    print("-Offpeak £" + str(offpeakcost))
    print("-Total cost for day £" + str(totalcost))
Here is an example of the output that the script provides:
I’m a stickler for keeping things nice and tidy (often to my detriment!). As with most people, the downloads folder on my PC is a bit of a dumping ground and I was forever deleting the contents of it… the same goes for emptying my recycle bin. I’m always looking to automate things to save me some clicks, so I spent some time writing a script in PowerShell that does exactly these two things!
Deletes EVERYTHING in the Downloads folder within my user profile
Empties the recycle bin
If you are as sad as me and would like to use this, you’ll need to update the file path to point towards your downloads folder, which should be as simple as replacing “brendan” with your username.
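For reference, a minimal sketch of what Clean.ps1 contains is below (Clear-RecycleBin ships with Windows PowerShell 5.1 and later):

# Delete everything in the Downloads folder
Remove-Item -Path "C:\Users\brendan\Downloads\*" -Recurse -Force
# Empty the recycle bin without prompting for confirmation
Clear-RecycleBin -Force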
I also created a batch file that executes this PowerShell script for me (that way I can simply right-click the batch file to run the PowerShell script). To use this, update the file path to point it towards the location of the Clean.ps1 PowerShell script.
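The batch file itself is a one-liner along these lines (the script path is just an example – adjust it to wherever Clean.ps1 lives):

powershell.exe -ExecutionPolicy Bypass -File "C:\Scripts\Clean.ps1"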
I watch a lot of content on YouTube (particularly John Savill’s amazing Azure channel). For channels that have been around for a while, it’s difficult to navigate through the back catalogue to identify videos to watch.
I recently had the brainwave of writing a script that can connect to a YouTube channel and write out a list of all videos and their URLs to a CSV file to help me out here… luckily for me, YouTube has a rich API that I could use to do this.
You will need a key to access the YouTube API – here is a short video I put together that walks through the process of creating one:
Below is the Python script that I put together (with comments) that uses the Requests module to do just this! You will also need to update the key, channel and csv variables prior to running the script.
import requests
# Set the key used to query the YouTube API
key = "KEY"
# Specify the name of the channel to query - remember to drop the leading @ sign
channel = "NTFAQGuy" # the very reason that I wrote this script!
# Set the location of the CSV file to write to
csv = "C:\\videos.csv" # Windows path
try:
    # Retrieve the channel id from the username (channel variable) - which is required to query the videos contained within a channel
    url = "https://youtube.googleapis.com/youtube/v3/channels?forUsername=" + channel + "&key=" + key
    request = requests.get(url)
    channelid = request.json()["items"][0]["id"]
except (KeyError, IndexError):
    # If this fails, perform a channel search instead. Further documentation on this: https://developers.google.com/youtube/v3/guides/working_with_channel_ids
    url = "https://youtube.googleapis.com/youtube/v3/search?q=" + channel + "&type=channel" + "&key=" + key
    request = requests.get(url)
    channelid = request.json()["items"][0]["id"]["channelId"]
# Create the playlist id (which is based on the channel id) of the uploads playlist (which contains all videos within the channel) - uses the approach documented at https://stackoverflow.com/questions/55014224/how-can-i-list-the-uploads-from-a-youtube-channel
playlistid = list(channelid)
playlistid[1] = "U"
playlistid = "".join(playlistid)
# Query the uploads playlist (playlistid) for all videos and write the video title and URL to a CSV file (file path held in the csv variable)
nextPageToken = ""
while True: # Loop until the last page of results is reached (see the break below)
    videosUrl = "https://www.googleapis.com/youtube/v3/playlistItems?part=snippet%2CcontentDetails&playlistId=" + playlistid + "&pageToken=" + nextPageToken + "&maxResults=50" + "&fields=items(contentDetails(videoId%2CvideoPublishedAt)%2Csnippet(publishedAt%2Ctitle))%2CnextPageToken%2CpageInfo%2CprevPageToken%2CtokenPagination&key=" + key
    request = requests.get(videosUrl)
    videos = request.json()
    with open(csv, "a") as f:
        for video in videos["items"]:
            # Strip commas from the title so they don't break the CSV, then write the title and video URL
            f.write(video["snippet"]["title"].replace(",", "") + "," + "https://www.youtube.com/watch?v=" + video["contentDetails"]["videoId"] + "\n")
    try: # I'm sure there are far more elegant ways of identifying the last page of results!
        nextPageToken = videos["nextPageToken"] # The last page of results has no nextPageToken
    except KeyError:
        break
I’ve recently changed my energy supplier to Octopus… the main reason being their super-cheap overnight electricity rates, which will save me lots of money charging my EV.
I noticed that they had a developer API, and ever the tinkerer, I thought I’d take a closer look. Their documentation is really extensive, however their examples all used Curl and I wanted to have a play with Python (using the Requests module). I ran into a couple of issues, so thought I’d document this to help others (although most likely my future self when I’ve forgotten all of this).
Issue 1 – authenticating to the API using a key
The API uses HTTP basic auth (which uses a key that is available on the API access page). After much searching I found the equivalent of the -u parameter in Curl to enable me to successfully authenticate using the key. The trick was to import HTTPBasicAuth using the following command:
from requests.auth import HTTPBasicAuth
Then, when making the request to the API, use the following syntax, which passes the API key (key variable) as the username, with a blank password (denoted by “”):
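request = requests.get(url, auth=HTTPBasicAuth(key, ""))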
Issue 2 – formatting the date for period_from correctly
The API allows you to pass it a period_from parameter; this is useful to get your energy consumption from a specific date. In my specific use-case, I wanted to see my consumption from the previous 7 days. I achieved this using the following:
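date7daysago = (datetime.datetime.now() - datetime.timedelta(days=7)).isoformat() # Calculate the date 7 days ago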
Pulling all of this together, I created the script below, which connects to the developer API, retrieves my electricity consumption for the previous 7 days (grouped by day) and outputs this to the console. If you’d like to use this you’ll need to update the key, MPAN and serial variables – all of which are listed on this page (if you are a customer, of course!).
import requests
from requests.auth import HTTPBasicAuth
import datetime
date7daysago = (datetime.datetime.now() - datetime.timedelta(days=7)).isoformat() # calculate the date 7 days ago
key = "KEY"
MPAN = "MPAN"
serial = "SERIAL"
baseurl = "https://api.octopus.energy/v1/"
url = baseurl + "electricity-meter-points/" + MPAN + "/meters/" + serial + "/consumption" + "?period_from=" + date7daysago + "&group_by=" + "day"
request = requests.get(url,auth=HTTPBasicAuth(key,""))
print(request.json()) # Print the raw JSON response to the console
Here is the output of the script – you may notice that it doesn’t include 7 days’ worth of data; that is because I haven’t been a customer for that long.
To make it a little easier to read I added the following to the script, which prints out the date and consumption:
consumption = request.json()
for result in consumption["results"]:
    print(str(result["interval_start"].split("T")[0]) + " : " + str(result["consumption"]))
Based on my previous escapades with developer APIs for consumer services, I’m sure that I’ll be writing an Alexa skill for this next.
I have a Raspberry Pi 400, which I run RetroPie on – if you are into retro gaming and have a Pi I cannot recommend this enough! I have this attached to a 4K TV using HDMI, and I needed to reduce the resolution to 1920 x 1080 (as I had some issues with one of the emulators running at 4K). I’d usually change the resolution using raspi-config.
I ran into an issue with raspi-config whereby the option to change resolution within Display Options > Resolution was acting weirdly – it was dropping me back to the main menu when selecting Resolution. As RetroPie doesn’t have the Raspberry Pi OS GUI, I wasn’t sure what other options I had to change the resolution.
It turns out this wasn’t as difficult as I thought; I just needed to edit /boot/config.txt. I ran the following command to do this in a terminal (via SSH):
sudo nano /boot/config.txt
Within this file there were two lines I needed to un-comment to overwrite the current configuration: hdmi_group and hdmi_mode. I configured hdmi_group to 1 (which means the device is connected to a TV) and hdmi_mode to 16, which equates to 1080p @ 60Hz – a full reference for the various modes can be found here.
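The relevant lines in /boot/config.txt end up looking like this:

hdmi_group=1
hdmi_mode=16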
I gave the Pi a reboot and voila, I had glorious 1080p resolution.