I recently attended a security-focused hackathon with two of my immensely talented colleagues, James Patrick and Hussnan Haider.
One of the challenges we ran into when configuring identity federation between OCI and a separate trusted identity provider (such as Microsoft Entra ID or Okta) was that users had to perform MFA twice – once for the trusted identity provider and again for OCI IAM. This is obviously not ideal for users, and it was super frustrating for us!
I’ve put together a short video that runs through the solution we built to ensure that MFA within OCI IAM is bypassed when a separate federated identity provider is used for authentication.
A key thing to point out here is that the federated identity platform will be wholly responsible for MFA in this case. It’s therefore critical that the platform is configured to require MFA for authentication; otherwise you have users authenticating to OCI with a single factor, which is not good!
For further background on how to configure identity federation between OCI IAM and Microsoft Entra ID/Azure AD, check out my two previous posts on this topic.
OCI Generative AI Agents recently released the ability to query a database using natural language (similar to Select AI), more details on this new feature can be found here.
In this short video, I walk through the end-to-end process of creating an OCI Generative AI Agent and configuring it to query a database using natural language.
For reference, here’s how an OCI Gen AI Agent works at a high level.
In the real world a Gen AI Agent would use internal data that isn’t publicly accessible. However, I typically don’t have access to customers’ data, so crawling their public website works well to showcase the capabilities of a Gen AI Agent and to begin a conversation on real-world use cases that draw on internal data.
I recently stumbled across a fantastic Python package named Trafilatura, which can reliably and easily scrape a site, enabling me to retire my hacky Python script.
Trafilatura can be installed using the instructions here (basically pip install trafilatura).
Once it had been installed, I was able to scrape my own blog (which you are currently reading) using two commands!
The first command grabs the sitemap for https://brendg.co.uk, and writes a list of all URLs found to URL.txt.
The second command takes the URL.txt file as input and for each URL within, crawls the page and writes the contents to a text file within the folder txtfiles.
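For illustration, the same two-step flow can also be sketched with Trafilatura's Python API rather than the command line – this is a rough equivalent, not the exact commands I ran, and the numbered output filenames are just my choice:

```python
import os
from trafilatura import fetch_url, extract
from trafilatura.sitemaps import sitemap_search

# Step 1 – grab the sitemap and write every URL found to URL.txt
urls = sitemap_search("https://brendg.co.uk")
with open("URL.txt", "w") as f:
    f.write("\n".join(urls))

# Step 2 – crawl each URL and write the extracted text into the txtfiles folder
os.makedirs("txtfiles", exist_ok=True)
for i, url in enumerate(urls):
    html = fetch_url(url)   # download the page
    text = extract(html)    # strip boilerplate, keep the main text
    if text:
        with open(f"txtfiles/{i}.txt", "w") as f:
            f.write(text)
```

The CLI is still the quickest route for a one-off scrape; the API version is handy if you want to post-process the text in the same script.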
Below is an example of one of the text files that was output; you can clearly see the scraped text from the blog post.
Such a useful tool, which will save me a ton of time!
I’ve been working on a project deploying an OCI Generative AI Agent, which I’ve previously spoken about here.
Marketing blurb – OCI Generative AI Agents is a fully managed service that combines the power of large language models (LLMs) with AI technologies to create intelligent virtual agents that can provide personalized, context-aware, and highly engaging customer experiences.
When creating a Knowledge Base for the agent to use, the only file types that are supported (at present) are PDF and text files. I had a customer that needed to add Word documents (DOCX format) to the agent. Rather than converting these manually, which would have taken a lifetime, I whipped up a Python script that uses the docx2pdf package (https://pypi.org/project/docx2pdf/) to perform a batch conversion of DOCX files to PDF. One thing to note is that the machine running the script needs Microsoft Word installed locally.
Here is the script:
import os
import docx2pdf  # install using "pip install docx2pdf" prior to running the script

os.chdir("/Users/bkgriffi/Downloads")  # the directory that contains the folders for the source (DOCX) and destination (PDF) files

def convert_docx_to_pdf(docx_folder, pdf_folder):  # function that performs the conversion
    for filename in os.listdir(docx_folder):
        if filename.endswith(".docx"):
            docx_path = os.path.join(docx_folder, filename)
            pdf_filename = filename[:-5] + ".pdf"
            pdf_path = os.path.join(pdf_folder, pdf_filename)
            try:
                docx2pdf.convert(docx_path, pdf_path)
                print(f"Converted: {filename} to {pdf_filename}")
            except Exception as e:
                print(f"Error converting {filename}: {e}")

convert_docx_to_pdf("DOCX-Folder", "PDF-Folder")  # calling the function, with a source folder named DOCX-Folder and a destination folder named PDF-Folder; these folders should reside in the directory set via os.chdir above
Folder structure
Source DOCX files
Script Running
Output PDF files
Once the documents have been converted to PDF format they could be added to an OCI Storage Bucket and ingested into the OCI Generative AI Agent.
I’ve been playing around with the OCI AI Speech service recently. One thing I really struggled with was using the AI Speech API to create a transcription job to extract the text from an audio/video file (I needed to automate the process).
After much head scratching (…and some help from a colleague), I was able to assemble the following Python script. It provides a function named transcribe, which can be called to submit a transcription job. The following parameters are required:
inputfile – The name of the audio/video file to transcribe e.g. recording.mp3
bucket – The name of the bucket that contains the inputfile to transcribe (this is also where the JSON output of the transcription job will be stored)
compartmentid – OCID of the compartment to run the transcription job in
namespace – The Object Storage namespace of the tenancy
import oci

config = oci.config.from_file()

def transcribe(inputfile, compartmentid, bucket, namespace):
    ai_speech_client = oci.ai_speech.AIServiceSpeechClient(config)
    create_transcription_job_response = ai_speech_client.create_transcription_job(
        create_transcription_job_details=oci.ai_speech.models.CreateTranscriptionJobDetails(
            compartment_id=compartmentid,
            input_location=oci.ai_speech.models.ObjectListInlineInputLocation(
                location_type="OBJECT_LIST_INLINE_INPUT_LOCATION",
                object_locations=[oci.ai_speech.models.ObjectLocation(
                    namespace_name=namespace,
                    bucket_name=bucket,
                    object_names=[inputfile])]),
            output_location=oci.ai_speech.models.OutputLocation(
                namespace_name=namespace,
                bucket_name=bucket)))
    return create_transcription_job_response  # return the response so the job ID can be used later

transcribe(inputfile="Name of file to transcribe", compartmentid="OCID of the compartment to run the transcription job in", bucket="Bucket that contains the file to transcribe", namespace="Object storage namespace")
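Submitting the job is asynchronous, so you’ll likely want to poll it until it completes before fetching the JSON output from the bucket. Here’s a rough sketch of how that could look with the same SDK (the job OCID shown is a placeholder – in practice it comes from the create response):

```python
import time
import oci

config = oci.config.from_file()
ai_speech_client = oci.ai_speech.AIServiceSpeechClient(config)

def wait_for_job(job_id):
    # poll the transcription job until it reaches a terminal state
    while True:
        job = ai_speech_client.get_transcription_job(transcription_job_id=job_id).data
        if job.lifecycle_state in ("SUCCEEDED", "FAILED", "CANCELED"):
            return job
        time.sleep(10)

# the job OCID below is a placeholder – use create_transcription_job_response.data.id
# job = wait_for_job("ocid1.aispeechtranscriptionjob.oc1..example")
```

Once the job reports SUCCEEDED, the JSON transcript can be read from the output location in the bucket.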
As an absolutely terrible front-end developer, I’ve completely fallen in love with Streamlit – it makes building apps so simple, without the hassle of handcrafting a UI from scratch.
I’m currently building an app that transcribes, summarises and translates audio using the AI capabilities of OCI (more on this in a future post).
One of the things I wanted to do in the app is record audio. Thankfully Streamlit has st.audio_input, which can record audio from a user’s microphone; however, I needed a way to save the recorded audio to the server running the Streamlit app so that I could work some AI magic on it.
It turns out that this is super simple. The code below is for a page that has the st.audio_input widget; when a recording has been created, it is saved with the filename recorded_audio.wav.
import streamlit as st
st.sidebar.title("Audio Recording App")
st.title("Record Your Audio")
st.write("Press the button to start recording and then stop when you're done.")
audio = st.audio_input("Record your audio")
if audio:
    with open("recorded_audio.wav", "wb") as f:
        f.write(audio.getbuffer())
    st.write("Audio recorded and saved successfully!")
Here’s the page in all its glory:
Within the directory that the Streamlit app is run from you can see that a WAV audio file has been saved:
I’d deployed a shiny new Function to OCI. When I went to test it using fn invoke, it failed with the following error:
Error invoking function. status: 502 message: Failed to pull function image
The reason for this error is that I didn’t have a Service Gateway provisioned within the VCN that hosted the function app – but what is a Service Gateway, you may ask?
A service gateway lets a Virtual Cloud Network (VCN) privately access specific Oracle services without exposing the data to the public internet. No internet gateway or NAT gateway is required to reach those specific services. The resources in the VCN can be in a private subnet and use only private IP addresses. The traffic from the VCN to the Oracle service travels over the Oracle network fabric and never traverses the internet.
To fix this I created a Service Gateway and attached this to the subnet where my function app had been deployed; this provided the function app with access to the Container Registry to pull the image that contained the function.
In my case I’d deployed the function app to a private subnet that had no connectivity to the outside world – even to OCI services.
I then needed to add a rule to route traffic through the Service Gateway.
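I added the route rule through the console, but for reference, here’s a sketch of the same change using the OCI Python SDK – the OCIDs are placeholders, and the destination label for the Oracle Services Network varies by region:

```python
import oci

config = oci.config.from_file()
network_client = oci.core.VirtualNetworkClient(config)

# placeholder OCIDs – substitute your own route table and service gateway
route_table_id = "ocid1.routetable.oc1..example"
service_gateway_id = "ocid1.servicegateway.oc1..example"

# route traffic destined for OCI services through the service gateway;
# the service CIDR label below is region-specific (this one is London)
rule = oci.core.models.RouteRule(
    destination="all-lhr-services-in-oracle-services-network",
    destination_type="SERVICE_CIDR_BLOCK",
    network_entity_id=service_gateway_id)

# note: update_route_table replaces the full rule list, so include any
# existing rules alongside the new one in a real environment
network_client.update_route_table(
    rt_id=route_table_id,
    update_route_table_details=oci.core.models.UpdateRouteTableDetails(route_rules=[rule]))
```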
When I got to the section to configure Azure (Task 5), I ran into an issue – it wasn’t clear exactly what I needed to populate the App Insights Workspace Resource ID setting with, as it isn’t covered in the documentation.
This setting can be obtained from the Log Analytics workspace in Azure that is created in Task 4.
Go to Settings > Properties:
Copy the Resource ID and paste this into the App Insights Workspace Resource ID setting.
Once I’d done this, I was able to successfully configure the integration and now have lots of lovely OCI audit logs within Microsoft Sentinel.
I’ve previously written about how I set up a site-to-site VPN between a Raspberry Pi and OCI; this has worked like a charm and I’ve had no issues with it. It works really well when I’m at home, but as I often travel and need a convenient way to VPN into my OCI tenancy, I started exploring running OpenVPN in OCI. This would enable me to install a VPN client on my laptop/phone and conveniently VPN into my tenancy from wherever I am in the world.
There is a pre-configured marketplace image for OpenVPN available within OCI, and further information on this can be found here. The one drawback is that it only supports deployment on x64 VM instances; I’m tight and wanted to deploy OpenVPN on a free Ampere (ARM) VM instance so that it didn’t cost me a penny.
Rather than mucking about learning how to set up OpenVPN and going through the process manually, I stumbled across this fantastic script that fully automates the configuration.
I have a single Virtual Cloud Network (VCN) that I need access to. This VCN has a private and a public subnet; the resources I need access to all reside within the private subnet and are not directly accessible via the Internet (hence the need for a VPN!).
Below is the end-to-end process that I followed for setting up OpenVPN in OCI.
Step 1 – Provisioned an Ampere VM instance running Ubuntu 24.04, with 1 OCPU and 6GB of memory, deployed within the public subnet of the VCN.
Step 2 – Ran the OpenVPN installation and configuration script found here, taking the defaults for everything.
Step 3 – Copied the VPN connection profile that the setup created from the OpenVPN server to my local machine (.ovpn file).
Step 4 – Before attempting to connect to the OpenVPN server I needed to open UDP port 1194 which is the port that OpenVPN listens on.
As I only have a single server within the public subnet in the VCN, I simply added an entry to the Security List associated with the public subnet. Using a Network Security Group is the recommended way to do this – especially when you have multiple instances within a public subnet – however I wanted a quick and dirty solution.
The rule I added provides access to UDP port 1194 from anywhere to the OpenVPN server within the public subnet.
Step 5 – Enabled IP forwarding on the OpenVPN server, using the guidance found here.
Step 6 – Installed the client for OpenVPN from https://openvpn.net/client/; clients are available for Windows, macOS, Linux, Android, iOS and Chrome OS, so plenty of choice!
Once the profile was imported, I could connect!
That was it – I was really impressed with the ease of setting this up; even better, it doesn’t cost me a penny!
Select AI is a fantastic capability included in Oracle Autonomous Database, which enables a database to be queried using natural language rather than SQL commands, which is a godsend for somebody like me who struggles with all of that select * from malarkey!
I was going through one of my Select AI demos in preparation for a customer meeting and all my queries were failing with the error below – typical, just before I needed to demo it!
The reason for this error is that the Select AI profile I was using was configured to use the cohere.command-r-plus LLM from the OCI Generative AI Service; this LLM has been deprecated as per the documentation and therefore no longer works:
To fix this issue I needed to update (well, delete and re-create!) the Select AI profile to use the newer variant of this LLM, which is cohere.command-r-plus-08-2024.
Deleting a profile – replace ProfileName with the name of the profile to delete:
BEGIN
DBMS_CLOUD_AI.DROP_PROFILE(profile_name => 'ProfileName');
END;
Re-adding the profile with the new LLM – Example:
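As a minimal sketch of re-creating the profile with DBMS_CLOUD_AI.CREATE_PROFILE – note the credential name and object list below are placeholders, so substitute the values from your original profile:

```sql
BEGIN
  DBMS_CLOUD_AI.CREATE_PROFILE(
    profile_name => 'ProfileName',
    attributes   => '{"provider":        "oci",
                      "credential_name": "OCI_CRED",
                      "model":           "cohere.command-r-plus-08-2024",
                      "object_list":     [{"owner": "ADMIN", "name": "MY_TABLE"}]}');
END;
```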