Oracle recently enabled the Generative AI Service within the Frankfurt region in EMEA. I’ve been playing around with this using the Chat Playground, which provides an easy way to experiment with the GenAI capabilities without writing any code. It’s also possible to tweak various parameters such as output tokens and temperature, which is fantastic for quick and dirty experimentation within a browser 🧪.
In the example below, I used the service to summarise a Blog post into bullet points:
One super-useful feature included within the playground is the ability to generate code (Python or Java) to call the service; as a wannabe coder, anything that saves me time and brain power is always welcome 🧠.
If you click the View Code button, you can view the code that was auto-generated based on the request that was created using the playground.
I took this code and ran it within my test tenant; however, I ran into a small issue parsing the output. Here is an example of the output that is written to the console by the script:
I wanted to simply output the text generated by the request to the GenAI service rather than all of the other information returned (such as the input request and headers).
To do this, I ran the following commands to convert the output from JSON to a Python dictionary and then printed the output message (which has an index of 1; the original request has an index of 0). I placed this at the bottom of the auto-generated script:
import json  # needed for json.loads, if the auto-generated script doesn't already import it

# Convert the JSON output to a dictionary (str() on the OCI SDK response data returns formatted JSON)
data = chat_response.__dict__["data"]
output = json.loads(str(data))
# Print the output
print("----------------")
print("Summary Returned")
print("----------------")
print(output["chat_response"]["chat_history"][1]["message"])
For my next challenge, I wanted to use the face detection capabilities within OCI Vision. However, rather than simply drawing a bounding box around the faces detected within an image (as demonstrated below), I wanted to obscure/hide any faces detected within an image, which would be useful for privacy reasons.
I put together a script using Python with OpenCV and NumPy to achieve this, which does the following:
Converts a local image on my machine to Base64 (imagepath variable)
Submits this to the OCI Vision (face detection) API
Returns details of all faces detected within an image
Uses OpenCV and NumPy to take the normalized_vertices of the faces detected within the image (taken from the response) and obscures the faces
Saves the image with the obscured faces (using the imagewritepath variable)
Here is an example output image, with the faces obscured (the colour used can be changed).
The script itself can be found below and on GitHub.
To run this you’ll need to update the imagepath and imagewritepath variables; you’ll also need to include your Compartment ID within compartment_id (within the Detect faces section).
import base64
import oci
import cv2
import numpy as np

imagepath = "/Users/User/Downloads/Faces.png"  # path of the image to analyse
imagewritepath = "/Users/User/Downloads/FacesHidden.png"  # image to create with face(s) hidden

def get_base64_encoded_image(image_path):  # encode image to Base64
    with open(image_path, "rb") as img_file:
        return base64.b64encode(img_file.read()).decode('utf-8')

image = get_base64_encoded_image(imagepath)

# Authenticate to OCI
config = oci.config.from_file()
ai_vision_client = oci.ai_vision.AIServiceVisionClient(config)

# Detect faces
analyze_image = ai_vision_client.analyze_image(
    analyze_image_details=oci.ai_vision.models.AnalyzeImageDetails(
        features=[
            oci.ai_vision.models.ImageObjectDetectionFeature(
                max_results=10, feature_type="FACE_DETECTION")],
        image=oci.ai_vision.models.InlineImageDetails(
            source="INLINE",
            data=image),
        compartment_id="ENTER COMPARTMENT ID"))

analysis = analyze_image.data
Faces = analysis.detected_faces
print("-Analysis complete, detected: " + str(len(Faces)) + " faces")

# Used by the loop below to decide which image to read: after the first face has been
# processed, we must read the updated image rather than the original
FaceNumber = 1

# Loop through each face detected, obscure it and save to a new image
for Face in Faces:
    print("-Processing face number " + str(FaceNumber))
    if FaceNumber == 1:
        # Read the original image
        img = cv2.imread(imagepath)
    else:
        # Read the updated image (required if >1 faces are detected)
        img = cv2.imread(imagewritepath)
    # Define the polygon vertices using the normalized vertices of this detected face
    vertices = np.array([(vertex.x, vertex.y) for vertex in Face.bounding_polygon.normalized_vertices])
    # Convert the normalized vertices to pixel coordinates
    height, width = img.shape[:2]
    pixels = np.array([(int(vertex[0] * width), int(vertex[1] * height)) for vertex in vertices])
    # Fill the face with a solid colour
    cv2.fillPoly(img, [pixels], [255, 255, 255])
    # Save the image
    cv2.imwrite(filename=imagewritepath, img=img)
    # Increment the face count by 1
    FaceNumber += 1

print("-Finished!")
The script provides some basic logging messages to report progress:
I ran into an issue recently with OCI Data Labeling – Generate records was failing with the error Content-Type Validation Failed ❌.
I was using Data Labeling to label some images that I was going to use to train a custom AI Vision classification model.
In my specific case, the dataset comprised images stored in an OCI Object Storage bucket. I had uploaded the images to the bucket using the OCI CLI, specifically the following command which uploaded all files within a specific directory on my local machine to a named bucket:
oci os object bulk-upload --bucket-name Pneumonia-Images --src-dir "/Users/bkgriffi/OneDrive/Development/train/PNEUMONIA"
A helpful colleague advised me to manually set the content type at upload to image/jpeg. Below is the updated command that I ran, which uploads the images and sets the correct content type using the CLI’s --content-type option:
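oci os object bulk-upload --bucket-name Pneumonia-Images --src-dir "/Users/bkgriffi/OneDrive/Development/train/PNEUMONIA" --content-type image/jpeg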
For an upcoming AI demo, I needed to demonstrate moving some images from Azure Blob Storage to OCI Object Storage so that they could be used to train a custom model with OCI AI Vision. I was looking for a nice demo-friendly way to automate this and stumbled across Rclone.
Rclone is a command-line program to manage files on cloud storage. It is a feature-rich alternative to cloud vendors’ web storage interfaces. Over 70 cloud storage products support rclone including S3 object stores, business & consumer file storage services, as well as standard transfer protocols.
Within a few minutes I was able to configure Rclone to connect to Azure Blob Storage and OCI Object Storage and was copying files between the two with a single command 😮.
To get started I installed Rclone using the instructions here – for macOS, this was as simple as running a single command; with Homebrew, for example:
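brew install rclone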
Once I’d installed it, I typed rclone config and then n to create a new host. This walked me through the process of creating a connection, including selecting the type of storage (there are 55 to choose from, including OCI and Azure), the storage account to connect to, and how to authenticate. I did this for OCI and then repeated the process for Azure.
In terms of authentication, I selected option 2 for OCI, which uses my OCI config file within ~/.oci/config; more details on how to create a config file can be found here.
For Azure I opted to use the access key to authenticate to the storage account:
Once I’d created the connections, I could inspect the configuration file that Rclone had created – the location of which can be found by running rclone config file.
Below is the contents of the configuration file that I have 📄.
I could view the contents of each of the storage accounts (OCI and Azure are the names that I gave the respective configurations, which need to be used with the command):
Contents of OCI storage account
Contents of Azure storage account
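These can also be listed from the terminal using rclone’s standard listing commands against each configuration name – a quick sketch, using the OCI and Azure names from my config file:

rclone lsd OCI:    # list the buckets within the OCI Object Storage namespace
rclone lsd Azure:  # list the containers within the Azure storage account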
Finally, I ran the command below to copy the content of the images directory (container) within Azure to the Images bucket within OCI (rclone copy takes the source first and the destination second):
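rclone copy Azure:images OCI:Images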
With my new focus on all things Oracle Cloud Infrastructure (OCI) I’ve not been giving PowerShell much love recently.
I knew that PowerShell modules for OCI were available; however, I hadn’t had an excuse to use them until somebody asked me how they could use PowerShell with OCI Object Storage 🪣.
To get started you can run the following to install all of the modules (as I did):
Install-Module OCI.PSModules
If you’d prefer to only install the specific modules that you need, this can be done by running the following, replacing ServiceName with the name of the service whose module you’d like to install. The ServiceName for each service can be found within the Cmdlet reference.
Install-Module OCI.PSModules.<ServiceName>
Before you use the PowerShell modules, you’ll need to ensure that you have an OCI configuration file (which is used for authentication); instructions on creating one can be found here.
In my first example, I’m going to use the OCI PowerShell Module for Object Storage to upload a file to a storage bucket. Prior to running this command, I need to know the Namespace for Object Storage within my tenant, as the Cmdlet requires this. It is listed within the details page for each storage bucket (as highlighted below):
Once I had this, I ran the following to upload the file named DemoObject.rtf to the bucket named data, within the namespace I obtained above.
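A sketch of what that command looks like, based on the Object Storage cmdlet naming in the Cmdlet reference – the namespace value and file path below are placeholders, so verify the cmdlet and parameter names against the reference for your module version:

Write-OCIObjectstorageObject -NamespaceName "mynamespace" -BucketName "data" -ObjectName "DemoObject.rtf" -PutObjectBody "/Users/User/Downloads/DemoObject.rtf"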
One point to note is that I’m running this on a Mac; if you are running on Windows, you’ll need to use the correct file path format.
Once I’d run the command I could see the file uploaded within the OCI Console:
In the more advanced example below, the script loops through a specific folder (set by the $Folder variable) and uploads all files within it to the data bucket.
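A minimal sketch of such a loop, assuming the same cmdlet as above and using placeholder values for the namespace and folder:

$Namespace = "mynamespace"                 # placeholder - your Object Storage namespace
$Folder = "/Users/User/Documents/Uploads"  # placeholder - folder whose files will be uploaded
Get-ChildItem -Path $Folder -File | ForEach-Object {
    # Upload each file to the data bucket, using the file name as the object name
    Write-OCIObjectstorageObject -NamespaceName $Namespace -BucketName "data" -ObjectName $_.Name -PutObjectBody $_.FullName
}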
I was recently asked by a customer to perform a review of their OCI tenancy. Following the principle of least privilege, I stepped them through the process of creating a user account that granted me read-only access to their tenancy, meaning that I could see how everything had been set up, but couldn’t change anything.
Following Scott Hanselman’s guidance of preserving keystrokes I thought I’d document the process here as I’ll no doubt need to guide somebody else through this in the future 😀.
The three steps to do this are below👇
Step 1 – Create a user account for the user 👩
Yes, I know that this is obvious however I’ve included it here for completeness 😜. A user can be added to an OCI tenancy via Identity & Security > Domains > (Domain) > Users > Create user.
Ensure that the email address for the user is valid as this will be used to confirm their account. The user does not need to be added to any groups at this point (we’ll do that in the next step).
If the user who you need to grant read-only access to the tenancy already exists, this step can be skipped.
Step 2 – Create a group 👨👩
OCI policies do not permit assigning permissions directly to a user, so we will create a group which will be assigned read-only permissions to the tenancy.
A group can be created via Identity & Security > Domains > (Domain) > Groups > Create group. I used the imaginative name of Read-Only for the group in the example below.
Once a group has been created, add the user that you wish to grant read-only permissions to the tenancy (in this case Harrison):
Step 3 – Create a policy to grant read-only access to the tenancy 📃
We are nearly there! The final step is to create a policy that grants the group named Read-Only read permissions to the tenancy; a policy can be created via Identity & Security > Policies > Create Policy.
I used the following policy statement – allow group Read-Only to read all-resources in tenancy
One thing to note: if you have multiple domains within the tenancy and the user account you wish to give read-only access doesn’t reside within the default domain, you’ll need to specify the domain within the policy. In the example above, if the user was a member of the domain CorpDomain, the policy statement should be updated to read as follows:
allow group 'CorpDomain'/'Read-Only' to read all-resources in tenancy
I’ve previously written about how I use OCI Bastion and Site-to-Site VPN to connect to my VM instances running within OCI that do not have a public IP address. There is also a third option, which I (rather embarrassingly) only recently found out about.
It’s possible to use the OCI Cloud Shell (which runs within a web browser) to connect via SSH to a VM instance that is attached to a private subnet (therefore has no public IP address).
To do this, launch Cloud Shell from within the OCI Console
Select the Network drop-down menu and then Ephemeral private network setup
Select the VCN and Subnet to connect to (the one that contains the instance you wish to connect to) and then click Use as active network
Wait a minute or two! When the network status updates to Ephemeral the Cloud Shell is connected directly to the VCN and subnet selected.
You can then SSH into a VM instance within the subnet using its private IP address.
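For an Oracle Linux instance, this looks something like the following (the private IP address is a placeholder, opc is the default user, and your key file may differ):

ssh -i ~/.ssh/id_rsa opc@10.0.1.23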
OCI API Gateway includes native support for publishing OCI Functions. This was especially useful for me as I wanted to make my function available externally without authentication; whilst it’s possible to make an OCI Function available externally without using API Gateway, it’s not possible to make a function callable without authentication (e.g. make it available to anybody on the internet) 🔓.
I’d run through the process of publishing an OCI Function through OCI API Gateway a couple of months ago and got it to work successfully without too much pain. Earlier this week I had to do this again and ran into a few issues – I was clearly a lot brighter back then! I thought I’d capture these issues and solutions to help others and for my future self 😀.
A step-by-step guide for publishing an OCI Function through OCI API Gateway can be found here – if only I’d read the documentation, I could have saved an hour of my life. Below are the issues I ran into and the solutions that I found ✅
❌ Issue 1 – Calls to the Function timeout ⏱️
Using Curl to call the API Gateway endpoint for the Function timed out with the following error:
curl: (28) Failed to connect to bcmd2sv4corxwehdxx4lzvrj9u.apigateway.uk-london-1.oci.customer-oci.com port 443 after 75019 ms: Couldn’t connect to server
I’d provisioned a new API Gateway into a public VCN subnet and had forgotten to allow inbound traffic on port 443 to the subnet. To resolve this, I added an ingress rule to the security list associated with the subnet allowing traffic on port 443.
❌ Issue 2 – Calls to the function generate a 500 error
Once I’d enabled port 443 inbound to the VCN subnet containing the API Gateway, I started to receive a different error when attempting to call the function using Curl (or a web browser for that matter):
“Internal Server Error”,”code”:500
To investigate this further I enabled Execution Logs for the API Gateway Deployment and sent some further requests; I could then see the following in the logs:
With the full error being:
“Error returned by FunctionsInvoke service (404, NotAuthorizedOrNotFound). Check the function exists and that the API gateway has been given permission to invoke the function.”
Damn… I’d forgotten to give the API Gateway permission to call the Function, hence the not authorized error 🤦♂️.
To resolve this I created a dynamic group that contained the API Gateway – in fact, it contains all API Gateways within the specified compartment.
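The matching rule for the dynamic group looked something like this (the compartment OCID is a placeholder):

ALL {resource.type = 'ApiGateway', resource.compartment.id = 'ocid1.compartment.oc1..<unique_id>'}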
I then created a policy to permit this dynamic group (API-DG) to call Functions – again, this rule is quite broad as it gives the dynamic group permission to call all functions within the tenancy. Within a production environment, you’d be a little stricter here and restrict this to a specific Function 😀.
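The policy statement was along these lines (API-DG being the dynamic group created above):

allow dynamic-group API-DG to use functions-family in tenancy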
❌ Issue 3 – I have no patience 😀
After working through issues 1 and 2, I was still running into problems; inspecting the logs yielded the same NotAuthorizedOrNotFound error. It turns out that I needed to wait for the policy I created to come to life; about 30 minutes or so later (during which time I was frantically troubleshooting!), public calls to my function through the API Gateway started to work 👍.
As you may gather if you’ve read any of my previous posts, one of my hobbies is collecting retro video games 🕹️.
I’ve recently catalogued my collection of games and put these into an Excel spreadsheet (we all know that Excel is the world’s most popular database!).
What I wanted to do, though, was migrate this to an Oracle NoSQL Database hosted within OCI – this is complete overkill for my needs, but a great use case/example to help me get to grips with using NoSQL 🧠.
To do this, I needed to figure out how to:
Create an Oracle NoSQL Database table to store the data ✅
Read an Excel file (the one containing my list of retro games) using Python, which is my language of choice ✅
Write this data to an Oracle NoSQL Database table ✅
Step 1 – Creating an Oracle NoSQL Database table
I did this directly from the OCI Console, via Databases > Oracle NoSQL Database > Tables > Create table
On the table creation screen, I selected the following:
Simple input – I could then easily define my simple schema within the browser (defining the columns needed within the table).
Reserved capacity – Further details on how this works can be found here. I opted for a read/write capacity of 10 units, which equates to 10KB of reads/writes per second; I only need this capacity for the initial data load, so I’ll reduce it to 1 after I’ve loaded the data from Excel. I went with 1GB of storage (which is the minimum); I’m sure I won’t use more than 1MB though!
Name – I kept this simple and called the table Games.
Primary key – I named this ID, of type integer; I’m going to populate this with the epoch time so that I have unique values for each row.
Columns – I only actually need two columns, Game and System. For example, an entry could be Game = Super Mario Land and System = Game Boy.
I then hit Save and within a few seconds my table was created ✅.
Step 2 – Reading data from an Excel spreadsheet
The spreadsheet containing my game collection has a separate sheet for each system, with the respective games for that system listed within the sheet.
The example below shows the PS1 games I own, as you can see there are sheets for other systems, such as Wii U and PS3.
After much investigation, I found that the easiest way to read an Excel file using Python was with the pandas and OpenPyXL libraries.
I put together the following Python script which iterates through each sheet in the Excel file, outputting the sheet name (system, such as Game Boy) and the contents of each row within the sheet (which would be a game, such as Super Mario Land).
import pandas as pd
import time

excelfilepath = '/Users/bkgriffi/Downloads/Retro Games Collection.xlsx'  # Excel file to read from
excel = pd.ExcelFile(excelfilepath)
sheets = excel.sheet_names  # Create a list of the sheets by name (each system has a separate sheet)

for sheet in sheets:  # Loop through each of the sheets (systems)
    print("----------")
    print(sheet)  # Print the name of the sheet (system)
    print("----------")
    excel = pd.read_excel(excelfilepath, header=None, sheet_name=sheet)
    i = 0
    while i < len(excel[0]) - 1:  # Run a while loop until each row in the sheet has been processed
        print(excel[0][i])  # Print the row (game)
        i += 1  # Increase i so that on the next loop it outputs the next row (game) in the sheet (system)
Here is the script in action; as you can see, it lists the system (sheet name) and then the rows within that sheet (games), before moving on to the next sheet.
Step 3 – Writing data to an Oracle NoSQL Database table
Now that I’d figured out how to read an Excel file with Python, the final piece of the puzzle was to write this to the Oracle NoSQL Database table.
I took the script above and incorporated it into the following:
import pandas as pd
import oci
import time

# Connect to OCI
config = oci.config.from_file()
nosql_client = oci.nosql.NosqlClient(config)

# Read Excel file
excelfilepath = '/Users/bkgriffi/Downloads/Retro Games Collection.xlsx'  # Path to Excel file
excel = pd.ExcelFile(excelfilepath)
sheets = excel.sheet_names

# Write the data to the Oracle NoSQL Database table
for sheet in sheets:
    print("----------")
    print(sheet)
    print("----------")
    excel = pd.read_excel(excelfilepath, header=None, sheet_name=sheet)
    i = 0
    while i < len(excel[0]) - 1:
        print(excel[0][i])
        update_row_response = nosql_client.update_row(
            table_name_or_id="GamesTable",
            update_row_details=oci.nosql.models.UpdateRowDetails(
                # The value for ID may look a little scary - all it does is pass the UNIX
                # epoch time, so that each row has a unique ID (needed as ID is the
                # primary key). Note that second-granularity timestamps can collide if
                # rows are written within the same second.
                value={'ID': int(time.time()), 'Game': excel[0][i], 'System': sheet},
                compartment_id="Replace with the OCID of the compartment that contains the Oracle NoSQL Database table",
                option="IF_ABSENT",
                is_get_return_row=True))
        i += 1
This uses the OCI Python SDK to connect to the Oracle NoSQL Database table created earlier (Games) and writes the data to it. After running the script, I could verify this within the OCI Console by going to Explore data > Execute and running the default SQL statement (which returns everything in the table).
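For reference, that default statement is just a simple query that selects everything, along the lines of:

SELECT * FROM GamesTable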
Points to note about the script:
You need to update compartment_id, putting in the OCID of the compartment that contains the Oracle NoSQL Database table to populate.
This script requires the OCI SDK for Python with appropriate auth in place; I wrote a quick start on this here.