One thing that has caught me out in the past with OCI Gen AI is when an AI model gets retired and apps that call that specific model start to fail because the model is no longer available!
The fix for this isn’t particularly difficult; it’s just a case of updating the code to point to the new model name (via model_id). It can be quite stressful, though, when you are about to deliver a demo to a customer.
I was really pleased to see the introduction of model aliases (Cohere-only at this time). Rather than using a hardcoded reference to a specific model version, you can now use the following aliases, which will always point to the latest versions of the Cohere Command R and Cohere Command R+ models:
cohere.command-latest points to cohere.command-r-08-2024
cohere.command-plus-latest points to cohere.command-r-plus-08-2024
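To make this concrete, here’s a minimal sketch of a chat call using the OCI Python SDK with an alias as the model_id (the compartment OCID and service endpoint below are placeholders for your own values):

import oci

config = oci.config.from_file()

# region-specific Gen AI inference endpoint - replace with your region's
endpoint = "https://inference.generativeai.uk-london-1.oci.oraclecloud.com"
client = oci.generative_ai_inference.GenerativeAiInferenceClient(
    config, service_endpoint=endpoint)

chat_response = client.chat(
    oci.generative_ai_inference.models.ChatDetails(
        compartment_id="ocid1.compartment.oc1..aaaa...",  # placeholder OCID
        # the alias always resolves to the latest Command R+ model
        serving_mode=oci.generative_ai_inference.models.OnDemandServingMode(
            model_id="cohere.command-plus-latest"),
        chat_request=oci.generative_ai_inference.models.CohereChatRequest(
            message="Hello from the latest Command R+ model!")))

print(chat_response.data.chat_response.text)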
Full details are included in the documentation.
Yesterday I added a new AI profile to an Oracle Autonomous Database using the Database Actions UI. When testing this new profile, I received the following error:
This appears to be caused by Database Actions not creating the profile correctly. The fix was to manually create the profile using a SQL statement along the following lines:
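This is a sketch based on the documented DBMS_CLOUD_AI.CREATE_PROFILE attributes, so double-check the attribute names against the Select AI documentation for your database version:

BEGIN
  DBMS_CLOUD_AI.CREATE_PROFILE(
    profile_name => 'TestProfile',
    attributes   => '{"provider": "oci",
                      "credential_name": "TESTCREDS",
                      "model": "cohere.command-r-plus-08-2024",
                      "region": "uk-london-1",
                      "object_list": [{"owner": "ADMIN"}]}'
  );
END;
/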
This created a profile named TestProfile that uses the existing saved credentials named TESTCREDS to connect to the cohere.command-r-plus-08-2024 model in the uk-london-1 region. For some reason, the Database Actions UI hardcodes the region to Chicago (which is another reason to create the profile manually!).
In addition to this, it gives the profile access to all objects owned by the account named ADMIN.
Obviously, you’ll need to update these with the relevant values for your environment.
After doing this, the profile was able to connect successfully:
This afternoon I was using the OCI Cloud Shell to build a container to be pushed to the OCI Container Registry, from which I was then going to create an OCI Container Instance. This is something that I’ve done countless times without any issues, but I was short of time (I’m going on holiday tomorrow) and, as is typical, anything that could go wrong, did.
I ran the following command from the OCI Cloud Shell to build the container:
docker build --tag container-name .
It returned the following error (the interesting bit being the “no space left on device” right at the end):
Error: committing container for step {Env:[PATH=/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin LANG=C.UTF-8 GPG_KEY=E3FF2839C048B25C084DEBE995E310250568 PYTHON_VERSION=3.9.21 PYTHON_SHA256=3126f59592c9b0d7955f2bf7b081fa1ca35ce7a6fea980108d752a05bb1] Command:run Args:[pip3 install -r requirements.txt] Flags:[] Attrs:map[] Message:RUN pip3 install -r requirements.txt Heredocs:[] Original:RUN pip3 install -r requirements.txt}: copying layers and metadata for container “4aa0c966251fa75dac10afc257b8c8d62aae50c45eb5dd1157d3c1cae0208413”: writing blob: adding layer with blob “sha256:5699f359aa00daa8a93b831b478fea1fe7c339396e532f13e859fb4ef92fd83f”: processing tar file(open /usr/local/lib/python3.9/site-packages/oci/addons/adk/__pycache__/agent_client.cpython-39.pyc: no space left on device): exit status 1
After much Googling (without much luck, I may add!) I had a brainwave – the OCI Cloud Shell only provides 5GB of storage, as per the documentation – perhaps I’d hit the storage limit:
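A quick way to check is with standard Linux commands (nothing Cloud Shell specific here) – df -h shows overall usage of the home filesystem, and rootless Podman typically keeps its image store under ~/.local/share/containers:

df -h $HOME
du -sh ~/.local/share/containers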
It turned out that the majority of the storage consumed was by Docker / Podman (as a side note, the Cloud Shell now uses Podman; however, the Docker commands are aliased to it, so you can continue to use them).
So… it looked like I needed to do some housekeeping.
To identify the storage used by Docker / Podman, you can run the following command:
docker system df
Which returned the following:
To free up some space I ran the following command (which is a little brute force, I may add):
docker system prune -a
Using my YOLO approach, I selected y to continue, which worked its magic and freed up some space (please take heed of the warnings).
I then had plenty of free space and could build the container successfully.
I can now enjoy my holiday, safe in the knowledge that I managed to fix this issue.
In this short video, I step through how to create an Oracle Generative AI Agent and then configure a tool within the Agent to connect to a public API that performs URL shortening. This uses the new (as of July 2025) API Endpoint Calling Tool functionality within the Generative AI Agents Service.
This allows users to ask the agent to shorten a URL; the agent then calls a public API that can shorten URLs (https://cleanuri.com/docs) and returns a shortened URL to the user.
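For context, the API that the tool wraps is very simple; per the cleanuri docs, a direct call from Python looks something like this:

import requests

# cleanuri's shorten endpoint, as per https://cleanuri.com/docs
resp = requests.post("https://cleanuri.com/api/v1/shorten",
                     data={"url": "https://www.oracle.com/cloud/"})
resp.raise_for_status()
print(resp.json()["result_url"])  # the shortened URL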
The Generative AI Agent service in OCI recently added the ability to add a SQL Tool. This enables an agent to generate a SQL query, optionally run the query against a database, and return the results of the query to the agent. I created a short video that steps through how to use a SQL Tool with an agent, which can be found here.
More recently (mid-July 2025) the SQL Tool has been further enhanced so that responses include the following:
The raw output of the SQL query
A conversational “LLM style” response
Previously, a SQL Tool would only return the raw output of the SQL query. I found this quite useful as I could use Python packages such as matplotlib to visualise results. As of mid-July, responses from the agent also include an LLM-style conversational response, for example (taken from my agent that queries a database of bird sightings):
Raw Output of SQL Query
Conversational LLM Style Response
I’ve put together a short Python script that demonstrates how to access this data from a response – a sketch of the script is shown below. I typically use Streamlit as a front-end for the demo agents that I build; however, to keep things simple, we’ll use the good old “shell” for this demo!
To use this script you’ll need to update the following:
textinput – update this to reflect the question to ask your agent; unless your agent is knowledgeable on bird sightings, you will need to change this
service_ep – this is the service endpoint, and there is a different endpoint for each OCI region – if your agent resides in the UK South region, you don’t need to change this
Finally, make sure you have the latest version of the OCI SDK for Python; to upgrade to the latest version, run the following command:
pip3 install oci --upgrade
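Here’s a minimal sketch of the script – the agent endpoint OCID is a placeholder, and the response-field paths should be checked against what your SDK version actually returns:

import oci

# update these for your environment
textinput = "How many bird sightings have been recorded this month?"
service_ep = "https://agent-runtime.generativeai.uk-london-1.oci.oraclecloud.com"
agent_endpoint_id = "ocid1.genaiagentendpoint.oc1..aaaa..."  # placeholder OCID

config = oci.config.from_file()
client = oci.generative_ai_agent_runtime.GenerativeAiAgentRuntimeClient(
    config, service_endpoint=service_ep)

# create a session, then send the question to the agent
session = client.create_session(
    oci.generative_ai_agent_runtime.models.CreateSessionDetails(
        display_name="sql-tool-demo", description="SQL Tool demo session"),
    agent_endpoint_id)

response = client.chat(
    agent_endpoint_id,
    oci.generative_ai_agent_runtime.models.ChatDetails(
        user_message=textinput, session_id=session.data.id))

# the conversational LLM-style response
print(response.data.message.content.text)

# dump the full response to locate the raw SQL output for your agent
print(response.data)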
When run, the output should look something like this:
Here is an example of how I’ve used matplotlib (within a Streamlit front-end) to visualise results using the raw output of the SQL query.
As you can see below, it returns the conversational response; I then take the raw SQL output and use matplotlib to make it look pretty – I may put together a post on this too.
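As a rough illustration of the idea – the shape of the raw rows depends entirely on your agent, so the sample data below is purely a placeholder – the matplotlib side boils down to something like this (within Streamlit you’d hand the figure to st.pyplot rather than calling plt.show):

import matplotlib.pyplot as plt

# placeholder rows - substitute the raw SQL output parsed from the agent response
rows = [("Robin", 12), ("Blackbird", 9), ("Wren", 4)]

species = [r[0] for r in rows]
counts = [r[1] for r in rows]

plt.bar(species, counts)
plt.xlabel("Species")
plt.ylabel("Sightings")
plt.title("Bird sightings")
plt.show()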
This short video demonstrates how to connect to a compute instance in OCI that does not have a public IP address using the OCI Bastion service.
If you’d like to use OCI Bastion to connect to a Windows compute instance, check out the following blog post which includes a step-by-step guide.
This short video (a whole 4 mins!) explains the value of using OCI Security Zones and steps through the process of creating a Security Zone that blocks creation of public Object Storage Buckets.
I often use Streamlit to create quick customer demos and PoCs for OCI Generative AI Agents. One thing that is really useful is the ability to run a Streamlit app within a container instance rather than locally on my laptop – which is ideal when I need to quickly give others access to the apps that I have built.
Here is a quick guide on how to take a Streamlit app and run it within an OCI Container Instance.
Step 1 – Ensure Container Instances have access to the Gen AI Agent service and Container Registry
To do this we will need to create a Dynamic Group within OCI IAM, with the following rule:
ALL {resource.type='computecontainerinstance'}
This rule will ensure that every Container Instance within the tenancy is added to the Dynamic Group, which in this example is named “ContainerInstances” – how original! In the real world, you may want to be more specific and specify a single container instance or Compartment as a member of the Dynamic Group, as shown below.
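For example, a rule scoped to a single Compartment (the OCID is a placeholder) would look like this:

ALL {resource.type='computecontainerinstance', resource.compartment.id='ocid1.compartment.oc1..aaaa...'}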
Now that the Dynamic Group has been created, we need to create a Policy that grants this group (i.e. all container instances within the tenancy) access to pull images from the OCI Container Registry, and also grants it access to the OCI Generative AI Agents service. The reason for the latter is that we will use Resource Principal authentication to authenticate the container instance to the service, rather than the API keys of a specific user account (which is safer, as we won’t need to include any keys within the container image!).
The policy should have the following two statements:
Allow dynamic-group ContainerInstances to read repos in tenancy
Allow dynamic-group ContainerInstances to manage genai-agent-family in tenancy
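Within the app itself, Resource Principal authentication then looks something like this (a sketch; the service endpoint mirrors what the Streamlit app uses):

import oci

# inside a container instance we can authenticate via Resource Principal -
# no API keys or config file need to be baked into the image
signer = oci.auth.signers.get_resource_principals_signer()

client = oci.generative_ai_agent_runtime.GenerativeAiAgentRuntimeClient(
    config={}, signer=signer,
    service_endpoint="https://agent-runtime.generativeai.uk-london-1.oci.oraclecloud.com")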
Now that we’ve got the Dynamic Group and Policy created, we can move on to Step 2!
Step 2 – Obtain an auth token and get the tenancy namespace
An auth token is required to authenticate to the OCI Container Registry service when pushing the container image to the registry.
To create an Auth Token, do the following:
Make sure that you copy the Auth Token somewhere safe as you will not be able to retrieve it again after creation.
We now need to get the tenancy namespace, which is required to authenticate to the Container Registry. This can be obtained as follows:
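If you prefer the command line to the console, running the following from Cloud Shell also returns the tenancy namespace:

oci os ns get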
Now onto Step 3.
Step 3 – Create a Container Image of the Streamlit App
The code that I will use for the Streamlit App can be found on GitHub; this is a basic app that connects to an OCI Generative AI Agent and allows a user to ask the agent questions:
Once you have this, two additional files are required to create the container image:
requirements.txt, which should contain the following and includes the Python packages required to run the Streamlit app:
streamlit
oci
…and Dockerfile (no file extension required!), which is used to create the container image. This will launch the Streamlit app listening on port 80. Ensure that you update the name of the Python script (in this case OCI-GenAI-Agent-Streamlit.py) to reflect the name of the script you need to run.
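A Dockerfile along these lines does the job – a sketch, assuming a python:3.9 base image; adjust as needed:

FROM python:3.9
WORKDIR /app
COPY requirements.txt .
RUN pip3 install -r requirements.txt
COPY OCI-GenAI-Agent-Streamlit.py .
EXPOSE 80
CMD ["streamlit", "run", "OCI-GenAI-Agent-Streamlit.py", "--server.port=80", "--server.address=0.0.0.0"]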
Place the requirements.txt, Dockerfile and Python script into a single directory:
…and then zip this up.
Now log in to the OCI Console, launch Cloud Shell, upload the zip file and uncompress it (this is a quick way to transfer the files).
We can now create the container image and upload it to the container registry. The first step is to log in to the registry – make sure you run these steps from the directory that was un-zipped, which contains the Streamlit app.
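The login command looks like this:

docker login lhr.ocir.io

When prompted for a username, enter it in the form <tenancy-namespace>/<username> (federated users typically need the identity provider prefix in the username too).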
You may also need to update lhr.ocir.io to the correct endpoint for the container registry in your tenancy’s region; a full list of endpoints can be found here.
It will then prompt for your password; for this you will need to enter the Auth Token obtained in Step 2 (you did save this, right?).
Here’s a short run-through of this:
The next step is to build the container image and upload it to the container registry; you will need to run the following commands to do this.
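Based on the endpoint and namespace values described below, the build command looks like this:

docker build --tag lhr.ocir.io/lrdkvqz1i7e6/streamlit:latest .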
Make sure that you update the endpoint (lhr.ocir.io) if needed and the namespace (lrdkvqz1i7e6). This command will build the container image and tag it with the name streamlit:latest; it needs to be run from the un-zipped directory that contains the Streamlit app files.
Once it has been built, it can be pushed to the OCI Container Registry using the following command:
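Again, substituting your own endpoint and namespace:

docker push lhr.ocir.io/lrdkvqz1i7e6/streamlit:latest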
Step 4 – Create a container instance from the container image
We are nearly there; the final step is to create a container instance from the container image that we have just pushed to the container registry.
To do this, you’ll need a Virtual Cloud Network (VCN) that has a public subnet (so that we can make the instance available over the Internet). If you don’t have one, you can use the VCN Wizard to quickly create one, as documented here.
Make sure you have a Security List entry that permits access to the public subnet within the VCN on port 80 – in my case from any public IP address, but you may want to restrict this to specific public IP addresses.
Once you have confirmed that you have a VCN in place, we can go through the process of creating the container instance using the container image that we just created.
I’ve used the default settings for creating a container instance; in the real world, you’d need to select an appropriate compute shape (CPU/memory).
Grab the public IP address assigned to the container instance and open it in your browser of choice; the Streamlit app should open (all being well!).
You may want to create a DNS entry and point this towards the public IP, to make it easier to access.
Also, my final disclaimer: for anything but quick and dirty demos, you should run this over SSL, with authentication too! An OCI Load Balancer can be used to do SSL termination, and Streamlit provide a useful guide on performing authentication, which can be found here.
I attended a security-focussed hackathon with two of my immensely talented colleagues recently (James Patrick and Hussnan Haider).
One of the challenges we ran into when configuring identity federation between OCI and a separate trusted identity provider (such as Microsoft Entra ID or Okta) is that users had to perform MFA twice – once for the trusted identity provider and then again for OCI IAM. This is obviously not ideal for users, and it was super frustrating for us!
I’ve put together a short video that runs through the solution we put together to ensure that MFA within OCI IAM is bypassed when a separate federated identity provider is used for authentication.
The key thing to point out here is that the federated identity platform will be wholly responsible for MFA in this case; therefore it’s critical that it has been configured so that users require MFA for authentication, otherwise you will have users authenticating to OCI using a single factor, which is not good!
For further background on how to configure identity federation between OCI IAM and Microsoft Entra ID/Azure AD, check out my two previous posts on this topic.