Tag: technology

  • Installing a desktop environment on a Linux VM hosted in OCI and making this available using RDP πŸ–₯️

    Next up in random things Brendan has done… installing a desktop environment (Gnome) on a Linux instance (Ubuntu) hosted in OCI and making it available via Remote Desktop Protocol (RDP) with xrdp – it sounds quite complicated, but there isn’t much to getting it up and running βœ….

    Basically, I wanted a VM that I can RDP to from anywhere… and, importantly, from any computer! All to do some basic coding (as in, my coding is all basic πŸ˜€) using Visual Studio Code and Python.

    To keep the costs down (I’m a tight Yorkshireman after all) I’m using an Always Free Ampere A1 VM instance running in OCI – so this will not cost me a penny to run πŸ™Œ.

    To learn more about the OCI Always Free resources, check this article out.

    To get started, I created a Linux instance using Ubuntu 24.04:

    I placed this into a Public Subnet within a Virtual Cloud Network (to learn more about how to do this, check this guide out). The reason for placing the VM into a Public Subnet is so that it gets a public IP address and I can connect to it directly over the Internet, without requiring a VPN or FastConnect to be in place.

    Once the VM had been provisioned, I SSH’d onto the VM instance (if you are not sure how to do this, check this guide out) and then ran the following commands in order:

    Update and Upgrade Installed Packages

    sudo apt update && sudo apt upgrade -y
    
    

    Install Ubuntu Desktop

    sudo apt install ubuntu-desktop -y
    

    Install xrdp

    sudo apt install xrdp -y
    

    Ensure that Gnome (the Ubuntu Desktop Environment) runs when logging in via RDP

    echo "gnome-session" > ~/.xsession
    

    Restart xrdp

    sudo systemctl restart xrdp
    

    Permit inbound traffic on TCP port 3389 (the port used by RDP)

    sudo iptables -I INPUT 4 -m state --state NEW -p tcp --dport 3389 -j ACCEPT
    sudo netfilter-persistent save
    
    

    Set a password for the user β€œubuntu” – by default, OCI configures the VM instance to authenticate the ubuntu user using SSH keys; for RDP you’ll need to use a password. You may prefer to use a separate non-root account for this.

    sudo passwd ubuntu
    

    Once those commands have been run, the final thing you’ll need to do is ensure that any Security Lists OR Network Security Groups (NSGs) that the VM instance is associated with permit inbound access to port 3389 – the port used by RDP.

    More info on this (including how to do this) can be found here.

    Here is how my Security List looks (there isn’t an NSG associated with my VM instance).

    WARNING: This gives any machine on the Internet (source CIDR 0.0.0.0/0) access to this VM instance… and any other resources in the subnet, via RDP – port 3389! You’d likely want to restrict this to specific IP addresses or IP address ranges, e.g. the public IP address you break out from at home/the office, to prevent any randomer on the Internet getting access.
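    If you’d rather script the Security List change than click through the console, here’s a minimal sketch using the OCI Python SDK – the Security List OCID and source CIDR below are placeholders, and the code appends to the existing ingress rules rather than overwriting them:

    import oci

    config = oci.config.from_file()
    network = oci.core.VirtualNetworkClient(config)

    security_list_id = "ocid1.securitylist.oc1..example" # placeholder - the OCID of your Security List
    my_public_ip = "203.0.113.10/32" # placeholder - the public IP you break out from

    # Fetch the existing rules so that we append rather than overwrite
    security_list = network.get_security_list(security_list_id).data

    rdp_rule = oci.core.models.IngressSecurityRule(
        protocol="6", # TCP
        source=my_public_ip,
        tcp_options=oci.core.models.TcpOptions(
            destination_port_range=oci.core.models.PortRange(min=3389, max=3389)))

    network.update_security_list(
        security_list_id,
        oci.core.models.UpdateSecurityListDetails(
            ingress_security_rules=security_list.ingress_security_rules + [rdp_rule]))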

    Once the Security List had been updated, I fired up the Microsoft RDP client (other RDP clients are available!) and configured it to connect to the public IP address of the VM instance and voilΓ  – I now have access to the desktop on my Ubuntu VM instance from anywhere.

  • Deploying an OCI Landing Zone using Terraform πŸ›©οΈ

    OCI has a number of Terraform based Landing Zone blueprints available.

    The One OE (Operating Entity) OCI LZ blueprint can be deployed to an OCI tenancy directly from GitHub using the “Deploy to OCI” button:

    This then uses OCI Resource Manager to deploy the blueprint to a tenancy – which uses Terraform under the hood.

    I wanted to deploy the One OE blueprint to one of my test tenancies; however, I wanted to do this natively using Terraform from my local machine rather than using OCI Resource Manager, mainly due to the additional flexibility and ease of troubleshooting that this approach provides.

    It took me a while to figure out exactly how to do this (with a lot of help from one of the OCI LZ Black Belts πŸ₯‹).

    I’ve documented the process that I followed below, hopefully it saves somebody else some time ⌚️.

    βœ… Step 0 – Make sure you have Terraform and Git installed (I’m assuming that you already have these installed locally).

    βœ… Step 1 – Create a directory to store the blueprints and configuration

    I created a folder aptly named β€œOCI One OE Landing Zone”

    …then opened a terminal and ran the following commands from within this folder:

    git clone https://github.com/oci-landing-zones/oci-landing-zone-operating-entities.git
    git clone https://github.com/oci-landing-zones/terraform-oci-modules-orchestrator.git

    These commands download the OCI OE Landing Zone blueprints and the Landing Zone Orchestrator.

    Once the downloads have completed, the folder should look something like this:

    βœ… Step 2 – Configure Authentication

    Grab a copy of the file oci-credentials.tfvars.json.template, which is located within the folder OCI One OE Landing Zone/oci-landing-zone-operating-entities/commons/content.

    Take a copy of this file, place it in the root of the OCI One OE Landing Zone folder that you just created and rename the file to oci-credentials.tfvars.json.

    Open the oci-credentials.tfvars.json file and populate it with your authentication information – if you don’t have this, please follow the guide here to create an API Signing Key and obtain the other required information.

    Here’s a redacted example of what mine looks like – the field names below come from the template, and the OCIDs, fingerprint and key path are placeholders:
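    {
        "tenancy_ocid": "ocid1.tenancy.oc1..aaaaaaaa<redacted>",
        "user_ocid": "ocid1.user.oc1..aaaaaaaa<redacted>",
        "fingerprint": "12:34:56:78:90:ab:cd:ef:12:34:56:78:90:ab:cd:ef",
        "private_key_path": "~/.oci/oci_api_key.pem",
        "private_key_password": "",
        "region": "uk-london-1"
    }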

    βœ… Step 3 – Grab a copy of the required configuration files

    In order to deploy the One OE Landing Zone, a number of configuration files are required; these can be found within the following folder:

    ‘OCI One OE Landing Zone/oci-landing-zone-operating-entities/blueprints/one-oe/runtime/one-stack’

    • oci_open_lz_one-oe_governance.auto.tfvars.json
    • oci_open_lz_one-oe_iam.auto.tfvars.json
    • oci_open_lz_one-oe_security_cisl1.auto.tfvars.json
    • oci_open_lz_hub_a_network_light.auto.tfvars.json
    • oci_open_lz_one-oe_observability_cisl1.auto.tfvars.json

    Copy these files into the root of the OCI One OE Landing Zone folder – you could leave them in their original location, but taking a copy means that you can edit them (if needed) and easily return them to their β€œvanilla” state by re-copying them across from the original location.

    βœ… Step 4 – Time to deploy πŸš€

    Run the following command from within the OCI One OE Landing Zone/terraform-oci-modules-orchestrator folder to download the required Terraform Providers and Modules:

    terraform init

    Once this has completed, run terraform plan (from the same folder), referencing the required configuration files:

    terraform plan \
    -var-file ../oci-credentials.tfvars.json \
    -var-file ../oci_open_lz_one-oe_governance.auto.tfvars.json \
    -var-file ../oci_open_lz_one-oe_iam.auto.tfvars.json \
    -var-file ../oci_open_lz_hub_a_network_light.auto.tfvars.json \
    -var-file ../oci_open_lz_one-oe_security_cisl1.auto.tfvars.json \
    -var-file ../oci_open_lz_one-oe_observability_cisl1.auto.tfvars.json 

    ….if all goes well, you can run terraform apply (from the same folder) using the exact same configuration files.

    terraform apply \
    -var-file ../oci-credentials.tfvars.json \
    -var-file ../oci_open_lz_one-oe_governance.auto.tfvars.json \
    -var-file ../oci_open_lz_one-oe_iam.auto.tfvars.json \
    -var-file ../oci_open_lz_hub_a_network_light.auto.tfvars.json \
    -var-file ../oci_open_lz_one-oe_security_cisl1.auto.tfvars.json \
    -var-file ../oci_open_lz_one-oe_observability_cisl1.auto.tfvars.json 

    Within a few minutes, you should (hopefully!) have a beautiful OCI Landing Zone deployed within your tenancy.

  • Why isn’t DHCP working on the secondary VNIC of an OCI VM instance? βŒ

    Every day is a school day – especially with OCI!

    I was recently playing around in my lab and needed to add a secondary VNIC to one of my VMs for some testing that I was doing.

    I quickly set about adding a secondary VNIC and used the default option of assigning an IP address automatically using DHCP rather than specifying a static IP address (I’m lazy, I know!).

    I gave the server a reboot, logged in and to my surprise the shiny new secondary VNIC had acquired a nasty APIPA address (169.254.x.x) rather than the dynamic IP address that OCI had assigned (10.0.1.69) ❌:

    What is an APIPA address, you may ask?

    “An APIPA (Automatic Private IP Addressing) IP address isΒ a self-assigned address in the 169.254.x.x range that a device uses when it cannot get an IP address from a DHCP server.Β This feature allows devices on a local network to communicate with each other even when the DHCP server is down, providing basic connectivity”

    I deleted and re-added the VNIC and rebooted the server more times than I care to admit – but still nothing; I couldn’t get rid of this pesky APIPA IP address and get the β€œreal” IP address that OCI had assigned (10.0.1.69).

    After realising I’d sunk far too much time into this, I reached out to a colleague who is an OCI networking whizz, who informed me that OCI will only use DHCP for the primary VNIC on VM instances – any secondary VNICs that you add to a VM instance must be configured with a static IP address (why oh why didn’t I ask them sooner 😫).

    This is quite confusing as the OCI console allows you to add a secondary VNIC and specify DHCP – it just doesn’t work πŸ€¦β€β™‚οΈ.

    It will even display the β€œdynamic” IP address that has been assigned to the instance in the console – it just won’t be picked up by the underlying OS on the VM instance, as DHCP doesn’t work:

    Moral of the story: when adding a secondary VNIC (or tertiary, for that matter) use static IP addressing βœ….

    Note that whilst this affected a Windows Server in my case, it applies to Linux too.
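    For reference, here’s a quick way to set the address by hand on a Linux VM – the interface name and address below are assumptions (use the values shown for the VNIC in the OCI console), and this is non-persistent, so you’d configure netplan (Ubuntu) or your distro’s equivalent to make it survive reboots:

    sudo ip addr add 10.0.1.69/24 dev enp1s0 # the private IP/prefix OCI assigned to the secondary VNIC
    sudo ip link set enp1s0 up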

    Hopefully my pain will help somebody else in the future!

  • Error building a container when using the OCI Cloud Shell πŸ«™

    This afternoon I was using the OCI Cloud Shell to build a container to be pushed to the OCI Container Registry, from which I was then going to create an OCI Container Instance. This is something that I’ve done countless times without any issues, but as I was short of time (I’m going on holiday tomorrow), as is typical, anything that could go wrong, did 😭.

    When running the following command from the OCI Cloud Shell to build the container:

    docker build --tag container-name .
    

    It returned the following error (interesting bits in bold):

    Error: committing container for step {Env:[PATH=/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin LANG=C.UTF-8 GPG_KEY=E3FF2839C048B25C084DEBE995E310250568 PYTHON_VERSION=3.9.21 PYTHON_SHA256=3126f59592c9b0d7955f2bf7b081fa1ca35ce7a6fea980108d752a05bb1] Command:run Args:[pip3 install -r requirements.txt] Flags:[] Attrs:map[] Message:RUN pip3 install -r requirements.txt Heredocs:[] Original:RUN pip3 install -r requirements.txt}: copying layers and metadata for container "4aa0c966251fa75dac10afc257b8c8d62aae50c45eb5dd1157d3c1cae0208413": writing blob: adding layer with blob "sha256:5699f359aa00daa8a93b831b478fea1fe7c339396e532f13e859fb4ef92fd83f": processing tar file(open /usr/local/lib/python3.9/site-packages/oci/addons/adk/__pycache__/agent_client.cpython-39.pyc: no space left on device): exit status 1

    After much Googling (without much luck I may add!) I had a brainwave – the OCI Cloud Shell only provides 5GB of storage as per the documentation – perhaps I’d hit the storage limit πŸ€”:

    It turned out that the majority of the storage consumed was by Docker / Podman (as a side note, the Cloud Shell now uses Podman; however, the Docker commands are aliased to it, so you can continue to use them).
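    If you want to see where the space has gone for yourself, a couple of quick checks (the rootless Podman image store lives under ~/.local/share/containers):

    df -h $HOME # overall usage of the Cloud Shell home volume
    du -sh ~/.local/share/containers # space consumed by container images and layers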

    So……it looked like I needed to do some housekeeping 🧹.

    To identify the storage used by Docker / Podman, you can run the following command:

    docker system df
    

    Which returned the following:

    To free up some space I ran the following command (which is a little brute force, I may add πŸ”¨):

    docker system prune -a
    

    Using my YOLO approach, I selected y to continue, which worked its magic and freed up some space (please take heed of the warnings ⚠️).

    I then had plenty of free space and could build the container successfully βœ…

    I can now enjoy my holiday, safe in the knowledge that I managed to fix this issue πŸ—ΊοΈ.

  • Getting the output of a SQL tool in an OCI Gen AI Agent πŸ“Š

    The Generative AI Agent service in OCI recently added the ability to add a SQL Tool, this enables an agent to generate a SQL query and optionally run the query against a database and return the results of the query to the agent πŸ€–. I created a short video that steps through how to use a SQL Tool βš’οΈ with an agent, which can be found here πŸ“Ό.

    More recently (mid-July 2025) the SQL Tool has been further enhanced so that responses include the following:

    • The raw output of the SQL query
    • A conversational “LLM style” response

    Previously a SQL Tool would only return the raw output of the SQL query. I found this quite useful, as I could use Python packages such as matplotlib to visualise results. As of mid-July, responses from the agent also include an LLM style conversational response, for example (taken from my agent that queries a database of bird sightings πŸ¦…):

    Raw Output of SQL Query

    Conversational LLM Style Response

    I’ve put together a short Python script that demonstrates how to get access to this data from a response. I typically use Streamlit as a front-end for the demo agents that I build; however, to keep things simple, we’ll use the good old β€œshell” for this demo!

    Here is the script –

    import oci

    # The question to ask the agent
    textinput = "what were the 3 most popular birds in 1997"

    config = oci.config.from_file(profile_name="DEFAULT")

    # The agent runtime endpoint for your region and the OCID of your agent endpoint
    service_ep = "https://agent-runtime.generativeai.uk-london-1.oci.oraclecloud.com"
    agent_ep_id = "ocid1.genaiagentendpoint.oc1.uk-london-1.xwywwkz7bn5f5aogazpvkijnoj2u75yadsq"

    generative_ai_agent_runtime_client = oci.generative_ai_agent_runtime.GenerativeAiAgentRuntimeClient(config,service_endpoint=service_ep)

    # Create a session with the agent
    create_session_response = generative_ai_agent_runtime_client.create_session(
        create_session_details=oci.generative_ai_agent_runtime.models.CreateSessionDetails(
            display_name="Session",
            description="Session"),
        agent_endpoint_id=agent_ep_id)
    sess_id = create_session_response.data.id

    # Send the question to the agent
    response = generative_ai_agent_runtime_client.chat(
        agent_endpoint_id=agent_ep_id,
        chat_details=oci.generative_ai_agent_runtime.models.ChatDetails(
            user_message=textinput,
            session_id=sess_id))

    # The raw output of the SQL tool is buried in the traces - for my agent it is
    # the 4th trace entry, adjust the index if your agent's traces differ
    output = response.data.traces[3].output
    output = eval(output)  # the trace output is a string - eval() turns it back into a dictionary (ast.literal_eval is a safer alternative)
    sql_response = output["result"]
    print("")
    print("SQL Response: " + str(sql_response))

    # The conversational "LLM style" response
    text_response = response.data.message.content.text
    print("")
    print("Text Response: " + str(text_response))
    

    To use this script you’ll need to update the following:

    • service_ep – the Generative AI Agent runtime endpoint for your region
    • agent_ep_id – the OCID of your agent endpoint
    • textinput – the question that you’d like to ask the agent

    Finally, make sure you have the latest version of the OCI SDK for Python; to upgrade to the latest version, run the following command –

    pip3 install oci --upgrade

    When run, the output should look something like this:

    Here is an example of how I’ve used matplotlib (within a Streamlit front-end) to visualise results using the raw output of the SQL query.

    As you can see below, it returns the conversational response; I then take the raw SQL output and use matplotlib to make it look pretty πŸ’„ – I may put together a post on this too.
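    To give a flavour of the plotting side, here’s a minimal sketch – the rows below are made-up placeholders in the shape my bird-sightings query returns (substitute the sql_response extracted by the script above, and adjust the keys to match your schema):

    import matplotlib.pyplot as plt

    # Hypothetical example rows - swap in the sql_response from the script above
    sql_response = [
        {"BIRD_NAME": "Blue Tit", "SIGHTINGS": 321},
        {"BIRD_NAME": "Robin", "SIGHTINGS": 254},
        {"BIRD_NAME": "Blackbird", "SIGHTINGS": 198},
    ]

    names = [row["BIRD_NAME"] for row in sql_response]
    counts = [row["SIGHTINGS"] for row in sql_response]

    plt.bar(names, counts)
    plt.xlabel("Bird")
    plt.ylabel("Sightings")
    plt.title("Most popular birds in 1997")
    plt.tight_layout()
    plt.show()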

    Thanks for reading!

  • Publishing a Streamlit App as a Container Instance in OCI β›΄οΈ

    I previously wrote about how to create a basic front-end for an OCI Generative AI Agent using Streamlit (which can be found here) 🎨.

    I often use Streamlit to create quick customer demos and PoCs for OCI Generative AI Agents. One thing that is really useful is the ability to run a Streamlit app within a container instance rather than locally on my laptop – which is ideal when I need to quickly give others access to the apps that I have built.

    Here is a quick guide as to how to take a Streamlit app and run this within an OCI Container Instance πŸ“‹.

    Step 1 – Ensure Container Instances have access to the Gen AI Agent service and Container Registry βœ…

    To do this we will need to create a Dynamic Group within OCI IAM, with the following rule:

    ALL {resource.type='computecontainerinstance'}

    This rule will ensure that every Container Instance within the tenancy is added to the Dynamic Group, which in this example is named β€œContainerInstances” – how original! In the real world, you may want to be more specific and specify a single container instance or Compartment as a member of the Dynamic Group.

    Now that the Dynamic Group has been created, we need to create a Policy that provides this group (e.g. all container instances within the tenancy) with access to pull images from the OCI Container Registry, and also grants it access to the OCI Generative AI Agents service. The reason for the latter is that we will use Resource Principal authentication to authenticate the container instance to the service, rather than the API Keys of a specific user account (which is safer, as we won’t need to include any keys within the container image! πŸ”‘).

    The policy should have the following two statements:

    Allow dynamic-group ContainerInstances to read repos in tenancy
    Allow dynamic-group ContainerInstances to manage genai-agent-family in tenancy

    Now that we’ve got the Dynamic Group and Policy created, we can move on to Step 2!

    Step 2 – Obtain an auth token and get the tenancy namespace βœ…

    An auth token is required to authenticate to the OCI Container Registry service, which is required when pushing the container image to the registry.

    To create an Auth Token, do the following:

    • In the OCI Console, open the Profile menu (top right) and select My profile
    • Under Resources, select Auth tokens, then select Generate token

    Make sure that you copy the Auth Token somewhere safe, as you will not be able to retrieve it again after creation ⛔️.

    We now need to get the tenancy namespace, which is also required to authenticate to the Container Registry; this can be obtained as follows:
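    One quick way to grab the namespace is from Cloud Shell, using the OCI CLI:

    oci os ns get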

    Now onto Step 3 πŸ‘‡

    Step 3 – Create a Container Image of the Streamlit App βœ…

    The code that I will use for the Streamlit App can be found on GitHub; this is a basic app that connects to an OCI Generative AI Agent and allows a user to ask the agent questions:

    Once you have this, two additional files are required to create the container image:

    requirements.txt, which lists the Python packages required to run the Streamlit app and should contain the following:

    streamlit
    oci

    …and a Dockerfile (no file extension required!), which is used to create the container image – this will launch the Streamlit app listening on port 80. Ensure that you update the name of the Python script (in this case OCI-GenAI-Agent-Streamlit.py) to reflect the name of the script you need to run.

    FROM python:3
    WORKDIR /app
    COPY . /app
    RUN pip install --no-cache-dir -r requirements.txt
    EXPOSE 80
    ENTRYPOINT ["streamlit", "run", "OCI-GenAI-Agent-Streamlit.py", "--server.port=80", "--server.address=0.0.0.0"]

    Place the requirements.txt, Dockerfile and Python script into a single directory:

    …and then zip this up.

    Now log in to the OCI Console, launch Cloud Shell, upload the zip file and uncompress it (this is a quick way to transfer the files).

    We can now create the container image and upload it to the container registry. To do this, run the following commands – make sure you run these from the un-zipped directory, which contains the Streamlit app.

    docker login lhr.ocir.io --username namespace/username 

    The namespace was obtained in Step 2, and the username is your username (what else could it be πŸ˜‚); for example, in my case, this is:

    docker login lhr.ocir.io --username lrdkvqz1i7e6/brendankgriffin@hotmail.com 

    You may also need to update lhr.ocir.io to the correct endpoint for the container registry in your tenancy’s region – a full list of endpoints can be found here.

    It will then prompt for your password; for this you will need to enter the Auth Token 🎫 obtained in Step 2 (you did save this, right?).

    Here’s a short run-through of this:

    The next step is to build the container image and upload it to the container registry; you will need to run the following commands to do this.

    docker build --tag lhr.ocir.io/lrdkvqz1i7e6/streamlit:latest .

    Make sure that you update the endpoint (lhr.ocir.io) if needed, and the namespace (lrdkvqz1i7e6). This command will build the container image and tag it with the name streamlit:latest – it needs to be run from the un-zipped directory that contains the Streamlit app files.

    Once it has built, it can be pushed to the OCI Container Registry using the following command:

    docker push lhr.ocir.io/lrdkvqz1i7e6/streamlit:latest

    Update the namespace and endpoint appropriately.

    Here’s a short walkthrough of this:

    Step 4 – Create a container instance from the container image βœ…

    We are nearly there 🏁 – the final step is to create a container instance from the container image that we have just pushed to the container registry.

    To do this, you’ll need a Virtual Cloud Network (VCN) that has a public subnet (so that we can make the instance available over the Internet 🌍); if you don’t have one, you can use the VCN Wizard to quickly create this, as documented here.

    Make sure you have a Security List πŸ“‹ entry that permits access to the public subnet within the VCN on port 80 – in my case from any public IP address, but you may want to restrict this to specific public IP addresses.

    Once you have confirmed that you have a VCN in place, we can go through the process of creating the container instance using the container image that we just created.

    I’ve used the default settings for creating a container instance; in the real world, you’d need to select an appropriate compute shape (CPU/memory).

    Grab the public IP address assigned to the container instance and open it in your browser of choice – the Streamlit app should open (all being well!).

    You may want to create a DNS entry and point this towards the public IP, to make it easier to access.

    Also, a final disclaimer: for anything but quick-and-dirty demos you should run this over SSL, with authentication too! An OCI Load Balancer can be used to do SSL termination, and Streamlit provides a useful guide on performing authentication, which can be found here.

    …and that’s it!

  • Crawling a web site using Trafilatura πŸ•·οΈ

    I’ve been building a lot of OCI Generative AI Agents for customer demos recently πŸ€–, one demo that typically resonates well with customers is a RAG agent that uses text scraped from their public website, for example when working with a council this can demonstrate how residents can use a Generative AI Agent to quickly get answers to their questions about council services…….without the hassle of navigating their maze of a website πŸ‘©β€πŸ’».

    For reference here’s how an OCI Gen AI Agent works at a high-level.

    In the real world a Gen AI Agent would use internal data that isn’t publicly accessible; however, I typically don’t have access to customers’ data, therefore the approach of crawling their public website works well to showcase the capabilities of a Gen AI Agent and begin a conversation on real-world use-cases that use internal data πŸ“Š.

    I wrote a very hacky Python script to crawl a site and dump the content to a text file which can then be ingested into a Gen AI Agent…….however this is super unreliable as the script is held together with sticking plasters 🩹 and constantly needs to be updated to work around issues experienced when crawling.

    I recently stumbled across a fantastic Python package named Trafilatura which can reliably and easily scrape a site, enabling me to retire my hacky Python script 🐍.

    Trafilatura can be installed using the instructions here (basically pip install trafilatura).

    Once it had been installed, I was able to scrape my own blog (which you are currently reading) using two commands!

    trafilatura --sitemap "https://brendg.co.uk/" --list >> URLs.txt
    trafilatura -i URLs.txt -o txtfiles/
    

    The first command grabs the sitemap for https://brendg.co.uk, and writes a list of all URLs found to URLs.txt.

    The second command takes the URLs.txt file as input and, for each URL within, crawls the page and writes the contents to a text file within the folder txtfiles.
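    As a side note, Trafilatura also has a Python API, so the same two steps can be scripted – a minimal sketch (the txtfiles output folder and numbered filenames are just my own convention):

    import os

    from trafilatura import fetch_url, extract
    from trafilatura.sitemaps import sitemap_search

    os.makedirs("txtfiles", exist_ok=True)

    # Discover the URLs via the sitemap, then scrape each page to a text file
    urls = sitemap_search("https://brendg.co.uk/")
    for i, url in enumerate(urls):
        downloaded = fetch_url(url)
        if downloaded:
            text = extract(downloaded)
            if text:
                with open(f"txtfiles/{i}.txt", "w") as f:
                    f.write(text)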

    Below is an example of one of the text files that was output – you can clearly see the scraped text from the blog post.

    Such a useful tool, which will save me a ton of time ⏱️!

  • Batch Converting Word Documents to PDF using Python πŸ

    I’ve been working on a project deploying an OCI Generative AI Agent πŸ€–, which I’ve previously spoken about here πŸ“Ό.

    Marketing blurb – OCI Generative AI Agents is a fully managed service that combines the power of large language models (LLMs) with AI technologies to create intelligent virtual agents that can provide personalized, context-aware, and highly engaging customer experiences.

    When creating a Knowledge Base for the agent to use, the only file types that are supported (at present) are PDF and text files. I had a customer that needed to add Word documents (DOCX format) to the agent; rather than converting these manually, which would have taken a lifetime πŸ•£, I whipped up a Python script that uses the docx2pdf package – https://pypi.org/project/docx2pdf/ – to perform a batch conversion of DOCX files to PDF. One thing to note is that the machine that runs the script needs Word installed locally.

    Here is the script πŸ‘‡

    import os
    import docx2pdf # install using "pip install docx2pdf" prior to running the script

    # The directory that contains the folders for the source (DOCX) and destination (PDF) files
    os.chdir("/Users/bkgriffi/Downloads")

    def convert_docx_to_pdf(docx_folder, pdf_folder): # function that performs the conversion
        os.makedirs(pdf_folder, exist_ok=True) # make sure the destination folder exists
        for filename in os.listdir(docx_folder):
            if filename.endswith(".docx"):
                docx_path = os.path.join(docx_folder, filename)
                pdf_filename = filename[:-5] + ".pdf"
                pdf_path = os.path.join(pdf_folder, pdf_filename)
                try:
                    docx2pdf.convert(docx_path, pdf_path)
                    print(f"Converted: {filename} to {pdf_filename}")
                except Exception as e:
                    print(f"Error converting {filename}: {e}")

    # Calling the function, with a source folder named DOCX-Folder and a destination folder
    # named PDF-Folder - these should reside in the directory set via os.chdir above
    convert_docx_to_pdf("DOCX-Folder", "PDF-Folder")
    

    Folder structure πŸ—‚οΈ

    Source DOCX files πŸ“„

    Script Running πŸƒ

    Output PDF files

    Once the documents have been converted to PDF format they could be added to an OCI Storage Bucket and ingested into the OCI Generative AI Agent.

  • Transcribing speech to text using the OCI AI Speech service with Python πŸŽ€

    I’ve been playing around with the OCI AI Speech service recently; one thing I really struggled with was using the AI Speech API to create a transcription job to extract the text from an audio/video file (as I needed to automate the process).

    After much head scratching (…and some help from a colleague), I was able to assemble the following Python script. This provides a function named transcribe, which can be called to submit a transcription job; the following parameters are required:

    • inputfile – The name of the audio/video file to transcribe e.g. recording.mp3
    • bucket – The name of the bucket that contains the inputfile to transcribe (this is also where the JSON output of the transcription job will be stored)
    • compartmentid – OCID of the compartment to run the transcription job in
    • namespace – The Object Storage namespace

    import oci

    config = oci.config.from_file()

    def transcribe(inputfile, compartmentid, bucket, namespace):
        ai_speech_client = oci.ai_speech.AIServiceSpeechClient(config)
        # Submit the transcription job - the input file is read from the bucket and
        # the JSON output of the job is written back to the same bucket
        create_transcription_job_response = ai_speech_client.create_transcription_job(
                create_transcription_job_details=oci.ai_speech.models.CreateTranscriptionJobDetails(
                    compartment_id=compartmentid,
                    input_location=oci.ai_speech.models.ObjectListInlineInputLocation(
                        location_type="OBJECT_LIST_INLINE_INPUT_LOCATION",
                        object_locations=[oci.ai_speech.models.ObjectLocation(
                            namespace_name=namespace,
                            bucket_name=bucket,
                            object_names=[inputfile])]),
                    output_location=oci.ai_speech.models.OutputLocation(
                        namespace_name=namespace,
                        bucket_name=bucket)))
        # Return the job details (including its OCID) so that the caller can track it
        return create_transcription_job_response.data

    transcribe(inputfile="Name of file to transcribe",compartmentid="OCID of the compartment to run the transcription job in",bucket="Bucket that contains the file to transcribe",namespace="Object storage namespace")
    

    For example:

    transcribe(inputfile="recording.mp3",compartmentid="ocid1.compartment.oc1..aaaaaaaae",bucket="Transcription",namespace="lrdkvqz1i7f9")

    When this has been executed, the transcription job can be viewed within the OCI Console.
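    If you’d rather poll for completion from Python than watch the console, something along these lines works – a minimal sketch, assuming the job OCID (a placeholder below) is taken from the object returned by transcribe above:

    import time
    import oci

    config = oci.config.from_file()
    ai_speech_client = oci.ai_speech.AIServiceSpeechClient(config)

    job_id = "ocid1.aispeechtranscriptionjob.oc1..example" # placeholder - use the id of the job returned by transcribe
    job = ai_speech_client.get_transcription_job(job_id).data
    while job.lifecycle_state in ("ACCEPTED", "IN_PROGRESS"):
        time.sleep(10) # check every 10 seconds
        job = ai_speech_client.get_transcription_job(job_id).data
    print("Job finished with state: " + job.lifecycle_state)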

    Once the job completed, the transcription was available to view from within the job (by clicking the filename within the Tasks section):

    Here is the transcript in all its glory.

    The sample can also be found on GitHub.

  • Unable to create a container instance in OCI

    I was working with a customer to deploy a Docker image that I’d added to their OCI Container Registry, however when provisioning a Container Instance using this image it was failing with the following error πŸ›‘:

    A container image provided is not compatible with the processor architecture of the shape selected for the container instance.

    This is a pretty descriptive error message, that you will receive when attempting to deploy a container on a host machine that has a different CPU architecture than that of the image you are attempting to deploy, for example trying to deploy a container that uses an x64 based image to a host machine that has an ARM CPU.

    In this specific case, I was attempting to deploy a container to an AMD x64 machine – something which I had done numerous times successfully with this very image – a real case of β€œit works on my machine!”. After much head scratching, I figured out what I’d done wrong πŸ’‘.

    I had used the Cloud Shell to create the image and deploy to the Container Registry (I ❀️ the Cloud Shell!).

    It turns out that it’s possible to select the architecture used by the Cloud Shell. I had been using x64 in my tenancy, however the admin at the customer had ARM configured for their Cloud Shell, therefore when it was building the Docker image it was pulling the ARM version of the base image – and then failing when attempting to deploy this to an AMD x64 host.
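    A quick way to confirm what you’ve actually built (this works with both docker and podman) is to inspect the image’s architecture:

    docker image inspect --format '{{.Architecture}}' container-name # prints e.g. amd64 or arm64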

    There are two options to fix this:

    1. Provision the Container Instance on an Ampere (ARM) host
    2. Re-create the image using a Cloud Shell with the desired CPU architecture, in this case x64 (see the note after the steps below)

    I was lazy and opted for option 1; however, to change the CPU architecture for Cloud Shell:

    • Launch Cloud Shell
    • Select Actions > Architecture
    • Choose the desired architecture (this is a per-user setting, not tenant-wide)
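    As a side note on option 2 – rather than relying on the Cloud Shell architecture, docker/podman also let you explicitly target a platform at build time (I haven’t tested this in Cloud Shell, so treat it as a sketch):

    docker build --platform linux/amd64 --tag container-name .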

    Hope this helps somebody in the future!