• Upload a file to OCI Object Storage using a Pre-Authenticated Request (PAR) 📁

    OCI Object Storage has a notion of a Pre-Authenticated Request (PAR), which gives users access to a bucket or an object (file) without having to provide their sign-on credentials 🪪 – all that is needed is a single URL (which can have an expiration date/time set on it).

    I’ve used PARs to provide read access to specific objects (files) within a storage bucket, which has been a useful way to quickly (and relatively securely) share content.

    I recently needed to provide a user the ability to upload content to a storage bucket using a PAR. To do this, I configured a PAR on a bucket as follows 🪣:

    However, after creating the PAR on the bucket and getting the URL, I was at a loss as to how to upload files to the bucket. If I browsed to the URL in a browser, it simply listed the files within the bucket with no visual means to upload (I was expecting a nice upload button!).

    I couldn’t see any way to upload files using the OCI CLI either. After much head-scratching and experimentation, it turned out that the easiest way to upload a file is to use curl.

    Here is the command that I used:

    curl -v -X PUT --data-binary '@/Users/brendan/Downloads/MyFile.zip' "<PAR URL>/MyFile.zip"
    

    After the @ sign, you include the path of the local file to upload; this is followed by the PAR URL provided by OCI and, finally, the name to give the uploaded file within the storage bucket.

    Running this command successfully uploaded the file to the bucket that the PAR had been created for – result!
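    If you'd rather script the upload than call curl by hand, the same PUT request can be made from Python. Below is a minimal sketch using only the standard library – the PAR URL and file path shown are placeholders, not values from my tenancy.

    ```python
    import os
    import urllib.request


    def par_object_url(par_url: str, object_name: str) -> str:
        # A bucket PAR URL ends in /o/ – the object name is simply appended to it
        return par_url.rstrip("/") + "/" + object_name


    def upload_via_par(par_url: str, file_path: str) -> int:
        """PUT a local file to Object Storage via a write-enabled PAR."""
        object_name = os.path.basename(file_path)
        with open(file_path, "rb") as f:
            request = urllib.request.Request(
                par_object_url(par_url, object_name),
                data=f.read(),
                method="PUT",
            )
        with urllib.request.urlopen(request) as response:
            return response.status  # 200 indicates a successful upload


    if __name__ == "__main__":
        # Hypothetical PAR URL – use the one generated for you by the OCI Console
        print(upload_via_par(
            "https://objectstorage.uk-london-1.oraclecloud.com/p/TOKEN/n/NAMESPACE/b/BUCKET/o/",
            "/Users/brendan/Downloads/MyFile.zip",
        ))
    ```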

  • Create a Machine Learning Model in Less Than 10 Minutes using Oracle AutoML ⏱️

    As this was quite a large topic for a blog post I decided to record a video instead. In this video I go through the process of…

    1. Loading a sample dataset that contains information on employee attrition into an Autonomous Database 📊
    2. Creating a machine learning model using this data with Oracle AutoML 🧠
    3. Calling the machine learning model using a Python script 🐍
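    The third step – calling an in-database model from Python – can be sketched roughly as below. The model name (attrition_model), table name (employees), column name, and connection details are all hypothetical; the sketch uses Oracle's SQL PREDICTION function via the python-oracledb driver.

    ```python
    def prediction_query(model_name: str, table_name: str) -> str:
        # Oracle's PREDICTION SQL function scores rows with an in-database ML model
        return f"SELECT employee_id, PREDICTION({model_name} USING *) FROM {table_name}"


    if __name__ == "__main__":
        import oracledb  # assumption: python-oracledb is installed

        # Hypothetical connection details for an Autonomous Database
        connection = oracledb.connect(user="ml_user", password="...", dsn="mydb_high")
        with connection.cursor() as cursor:
            for row in cursor.execute(prediction_query("attrition_model", "employees")):
                print(row)
    ```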

  • Connecting to OCI Object Storage using S3 Browser 🪣

    OCI Object Storage has an Amazon S3 compatible API, which got me thinking that I could likely connect to it using a GUI client, such as S3 Browser. After lots of trial and error I finally managed to configure S3 Browser to connect to OCI Object Storage.

    Below are the steps that I took to get this working:

    Step 1 – Obtain the Object Storage Namespace for the tenancy 🪣

    The Object Storage Namespace is required to figure out the REST endpoint URL to connect to OCI Object Storage and can be obtained via the Cloud Console > Governance & Administration > Tenancy Details.

    My namespace is shown below and begins with lrdkvq

    Step 2 – Create a secret key 🔑

    I then needed to create a secret key, which is used to authenticate to OCI Object Storage. A secret key can be created using the Cloud Console via Profile (icon in the top right) > My Profile > Customer secret keys > Generate secret key.

    Give the secret key a memorable name and remember to copy the key before closing the window as it will not be shown again.

    In the list of secret keys, hover over the Access Key section for the secret key that you have just created and copy this too.

    Step 3 – Configuring S3 Browser ⚙️

    Launch S3 Browser and add a new account, selecting S3 Compatible Storage from the Account type dropdown.

    This then unlocks some additional options:

    For the REST Endpoint, take the namespace that you obtained in Step 1 and use it as the first part of the URL, followed by compat.objectstorage.<REGION CODE>.oraclecloud.com. For example, my URL looks like this:

    lrdkvqz1i7g9.compat.objectstorage.uk-london-1.oraclecloud.com

    To obtain the region code for your OCI tenancy, use this reference.

    Then enter the Access Key ID and Secret Access Key obtained in Step 2. The Secret Access Key is the key that is only displayed once, and the Access Key ID is the Access Key obtained from the list of customer secret keys.

    Step 4 – Connect 🔌

    I then saved this configuration and connected 😀.

    This is a nice (and a little more user-friendly) way to interact with Object Storage without having to use the OCI Console / APIs.
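    As an aside, the same S3-compatible endpoint and keys work from code too – for example with boto3. The endpoint below follows the format from Step 3 using my namespace and region; the access keys are placeholders.

    ```python
    def s3_compat_endpoint(namespace: str, region: str) -> str:
        # Endpoint format: <namespace>.compat.objectstorage.<region>.oraclecloud.com
        return f"https://{namespace}.compat.objectstorage.{region}.oraclecloud.com"


    if __name__ == "__main__":
        import boto3  # assumption: boto3 is installed

        s3 = boto3.client(
            "s3",
            region_name="uk-london-1",
            endpoint_url=s3_compat_endpoint("lrdkvqz1i7g9", "uk-london-1"),
            aws_access_key_id="<Access Key ID from Step 2>",
            aws_secret_access_key="<Secret Access Key from Step 2>",
        )
        # List the buckets visible to this account
        for bucket in s3.list_buckets()["Buckets"]:
            print(bucket["Name"])
    ```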

  • Unable to Create a Mount Target in OCI ❌

    A customer contacted me a few days ago as they were unable to create a Mount Target within the File Storage service in OCI. They had two Mount Targets provisioned within their OCI tenancy and were attempting to create a third; when doing this they received the error:

    “File System was created successfully but Mount Target creation failed because of error: “The following service limits were exceeded: mount-target-count. Request a service limit increase from the service limits page in the console. “. To enable access to the File System, associate it with an existing Mount Target by adding an Export to it.”

    Their PAYG OCI tenant had a limit of 2 x Mount Targets; however, when I looked at the documentation the limit is 2 per tenant per Availability Domain. Therefore, in theory, they could have up to 6 Mount Targets (as there are 3 x Availability Domains within the region they are using).

    It turned out that they were not given the opportunity to specify the Availability Domain when creating the Mount Targets, and the two previous Mount Targets they had created resided within Availability Domain 1. The reason for this is that they had created the Mount Targets while creating a File System, and when creating them this way there is no option to specify the Availability Domain to create the Mount Target within (see screenshot below).

    To work around this they created the 3rd Mount Target manually via Storage > File Storage > Mount Targets (within the OCI Console), specifying Availability Domain 2 (UK-LONDON-1-AD-2).

    This was created successfully. They then created the File System, but this time selected an existing Mount Target (MountTarget3), rather than having the OCI Console automagically create a new one for them.

    This allowed them to successfully create a third Mount Target and File System 🙌.
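    The capacity arithmetic above (a per-AD limit of 2 across 3 ADs gives 6) and the manual fix can be sketched with the OCI Python SDK. This is a sketch only – the compartment and subnet OCIDs are placeholders you'd replace with your own.

    ```python
    def mount_target_capacity(limit_per_ad: int, availability_domains: int) -> int:
        # mount-target-count is a per-AD limit, so total capacity = limit x ADs
        return limit_per_ad * availability_domains


    if __name__ == "__main__":
        import oci  # assumption: OCI Python SDK installed and ~/.oci/config present

        config = oci.config.from_file()
        fs_client = oci.file_storage.FileStorageClient(config)

        # Create the Mount Target in an explicitly chosen Availability Domain
        details = oci.file_storage.models.CreateMountTargetDetails(
            availability_domain="UK-LONDON-1-AD-2",
            compartment_id="<compartment OCID>",
            subnet_id="<subnet OCID>",
            display_name="MountTarget3",
        )
        mount_target = fs_client.create_mount_target(details).data
        print(mount_target.lifecycle_state)
    ```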

  • Supercharge the OCI CLI using Interactive Mode 🏎️

    The OCI CLI is a fantastic tool that makes administering OCI a breeze! But with so many different commands and parameters it can sometimes be a little unwieldy 😩.

    I recently discovered that the OCI CLI has an interactive mode, which greatly simplifies using it – for me this has meant less time with my head stuck in the documentation and more time actually getting things done!

    Using interactive mode is a breeze – you simply launch it using oci -i.

    Once you’ve done this, start typing the name of the command you wish to use, and it will provide auto-complete suggestions. In the example below I typed ai, it then suggested the relevant AI services in OCI that I can interact with.

    If I then select Vision for example, it provides a full list of all the actions available.

    If I wanted to use OCI AI Vision to analyse an image stored within object storage, I select the appropriate command (analyze-image-object-storage-image-details).

    It then provides details of all the parameters (those that are required are denoted by a *). I can then build up my command and run it – how cool!

    Hopefully this helps you to save as much time as it did me 😎.

  • Detect anomalies in data using Oracle Accelerated Data Science (ADS) 🧑‍🔬

    Some time ago I wrote about issues with the reliability of my Internet connection – The Joys of Unreliable Internet. One thing that came out of this was a script, run every 30 minutes via a cron job on my Raspberry Pi, that checks the download speed of my Internet connection and writes it to a CSV file 🏃.

    My Internet has been super-stable since I wrote this script – typical eh! However, I have a lot of data collected, so I thought that I’d attempt to detect anomalies in it – for example, does my Internet slow down on specific days/times 🐌.

    Here is what the CSV file that I capture the data in looks like – I have a column for datetime and one for the download speed.

    I noticed that the Oracle Accelerated Data Science (or ADS for short) Python module can detect anomalies in time-series data, so it would be perfect for analysing my Internet speed data.

    The module can be installed using the following command:

    python3 -m pip install "oracle_ads[anomaly]"
    

    Once installed you can run the following to initialize a job.

    ads operator init -t anomaly
    

    This creates a folder within the current directory named anomaly with all of the files required to perform anomaly detection. I copied the CSV file with my Internet speed data into this folder (Speedtest.csv).

    I then opened the anomaly.yaml file within this directory – this contains the configuration settings for the job.

    I updated the template anomaly.yaml file as follows:

    I did the following:

    • Specified the name of the datetime column (Date)
    • Specified the target column, which includes the data points to be analysed (Speed)
    • Set the location of the file containing the data to analyse (Speedtest.csv)
    • I also specified the format of the datetime column (using standard Python notation) – full documentation on this can be found here.
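    Put together, my anomaly.yaml looked roughly like the sketch below. The exact keys may differ slightly between ADS versions, and the datetime format string shown is illustrative – match it to your own CSV.

    ```yaml
    kind: operator
    type: anomaly
    version: v1
    spec:
      datetime_column:
        name: Date
        format: "%d/%m/%Y %H:%M"   # illustrative – use your CSV's actual datetime format
      target_column: Speed
      input_data:
        url: Speedtest.csv
    ```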

    I saved anomaly.yaml and then ran the following command to run the anomaly detection job:

    ads operator run -f anomaly.yaml
    

    Top tip – if you are running this on macOS and receive an SSL error, you’ll likely need to run Install Certificates.command, which can be found within the Python folder in Applications.

    The job took a few seconds to run (I only had 200KB of data to analyse). It created a results folder within the anomaly folder; within this were two files – a report in HTML format and a CSV file containing details of all of the anomalies detected.

    The report looks like this (the red dots are the anomalies detected).

    Full details of all anomalies can be found in the outliers.csv file, which also contains a score (the higher the number, the worse the anomaly).

    This identified several days (along with the timeslots) that my Internet speed varied significantly from the average 📉.

    I’ll probably run this again in a few months to see if I can spot any patterns such as specific days or timeslots that download speed varies from the norm.

    Hope you all have as much fun as I did anomaly detecting! 🔎

  • Assessing the security posture of an OCI tenant 🔒

    I previously wrote about how ShowOCI can be used to automagically document the configuration of an OCI tenant.

    My next top tip is to run the OCI Security Health Check against your tenant. This tool compares the configuration of a tenant against the CIS OCI Foundations Benchmark and reports any issues that require remediation 🔐.

    In today’s risky world, where security breaches are a regular occurrence, it’s critical that you assess your security posture on a regular basis and perform any required remediation to ensure that you are a step ahead of the attackers – this is where the OCI Security Health Check makes things a lot simpler for you (for your OCI workloads at least 😉).

    Instructions on how to run the assessment can be found here. I had an issue downloading the Zip file that contains the assessment scripts (I ran into a 404 error), the correct link is currently this (as of July 2024). Should this link not work, the folder within the repo that should contain the Zip file can be found here.

    I ran this against my test tenancy using the Cloud Shell (it can also be run from a compute instance), with the following commands:

    Step 1 – Download and Unzip the Assessment Scripts ⬇️

    wget https://github.com/oracle-devrel/technology-engineering/raw/main/security/security-design/shared-assets/oci-security-health-check-standard/files/resources/oci-security-health-check-standard-251104.zip
    unzip oci-security-health-check-standard-251104.zip
    

    Step 2 – Run the Assessment 🏃

    cd oci-security-health-check-standard-251104
    chmod +x standard.sh
    ./standard.sh
    

    Step 3 – Inspect the Findings 🔎

    Within the directory that the script is run from, a folder is created that stores the output of the assessment:

    In my case this was brendankgriffin_20240712102613_standard. This directory contained the following files:

    I created a Zip file of this directory, using the following command, to make it easier to transfer to my local machine for analysis:

    zip -r SecurityAssessmentOutput.zip brendankgriffin_20240712102613_standard/
    

    This created a ZIP file named SecurityAssessmentOutput.zip with the contents of the output folder (brendankgriffin_20240712102613_standard). I transferred this to my local machine using the download option within the Cloud Console.

    I could then open these to review the findings. The first file I opened was standard_cis_html_summary_report.html, which contains a summary of the findings of the assessment.

    It didn’t take too much scrolling to start to see some red! ⛔️

    Clicking into the identifier of a finding (e.g. 6.2) provides additional background and context, which is useful for understanding the finding in greater detail and helping with remediation planning.

    Each finding includes a link to the respective CSV file, where you can get additional details on the affected resources/configurations – below you can see a list of the resources that I naughtily created in the root compartment 🤦‍♂️.

    My recommendation would be to run the Security Assessment regularly (e.g. monthly), to proactively identify and resolve any security issues.

    That’s all for now 👋.

  • Documenting an OCI tenant using ShowOCI 📜

    I was speaking to a customer recently who wanted to document the resources within their OCI tenancy. OCI Tenancy Explorer provides details of the services deployed within an OCI tenant, however there isn’t any way to export the information that this presents – ShowOCI to the rescue!

    ShowOCI is a reporting tool which uses the Python SDK to extract a list of resources from a tenant. Output can be printer-friendly text, CSV files, or JSON files – this is an ideal way to document the resources deployed and the configuration within an OCI tenancy.

    This could potentially also be used as a low-tech way to do drift-detection, e.g. take a baseline, and compare over time to detect any drift.
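    As a quick illustration of that drift idea, a small script can diff two ShowOCI CSV exports. This is just a sketch – it assumes each row has an id column containing the resource OCID, so check the actual header names in your export.

    ```python
    import csv


    def load_resources(path: str) -> dict:
        # Map each resource OCID to its row (assumes an 'id' column in the CSV)
        with open(path, newline="") as f:
            return {row["id"]: row for row in csv.DictReader(f)}


    def drift(baseline_path: str, current_path: str) -> dict:
        """Compare two ShowOCI exports and report added/removed resource OCIDs."""
        baseline = load_resources(baseline_path)
        current = load_resources(current_path)
        return {
            "added": sorted(current.keys() - baseline.keys()),
            "removed": sorted(baseline.keys() - current.keys()),
        }
    ```

    Taking a baseline export today and re-running ShowOCI next month would then give you a simple added/removed report with no extra tooling.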

    ShowOCI can be executed directly from within the Cloud Shell making it simple and quick to run 🏃.

    To execute ShowOCI from within a Cloud Shell (using an account with administrative permissions to the tenant), run the following commands (taken from here):

    Step 1 – Clone from the OCI Python SDK Repo and Create a Symbolic Link

    git clone https://github.com/oracle/oci-python-sdk
    ln -s oci-python-sdk/examples/showoci .
    

    Step 2 – Change Dir to ShowOCI

    cd showoci
    

    Step 3 – Run ShowOCI: Outputting all resources to CSV files prefixed with “MyTenant”

    python3 showoci.py -a -csv MyTenant
    

    There are numerous other options for running ShowOCI; for example, you can get it to only include specific resource types such as compute. Some of these are demonstrated here, and all options are presented when running showoci.py without any parameters.

    After the script had run, I had a number of CSV files within the showoci directory that contain details of my tenant.

    MyTenant_all_resources.csv contains high-level details of all resources within the tenant analysed (not all columns are shown):

    There is also a separate CSV file for each type of resource that provides further details, below is an excerpt from MyTenant_compute.csv which shows all of my compute instances (not all columns are shown).

    Happy tenant reporting!

  • Calling the Oracle Generative AI Service with Python 🐍

    Oracle recently enabled the Generative AI Service within the Frankfurt region in EMEA. I’ve been playing around with this using the Chat Playground, which provides an easy way to experiment with the GenAI capabilities without writing any code. It’s also possible to tweak various parameters such as output tokens and temperature, which is fantastic for quick and dirty experimentation within a browser 🧪.

    In the example below, I used the service to summarise a Blog post into bullet points:

    One super-useful feature included within the playground is the ability to generate code (Python or Java) to call the service – as a wannabe coder, anything that saves me time and brain power is always welcome 🧠.

    If you click the View Code button, you can view the code that was auto-generated based on the request that was created using the playground.

    I took this code and ran it within my test tenant; however, I ran into a small issue parsing the output. Here is an example of the output that is written to the console by the script:

    I wanted to simply output the text generated by the request to the GenAI service rather than all of the other information returned (such as the input request and headers).

    To do this, I ran the following commands to convert the output from JSON to a Python dictionary and then print out the output message (which has an index of 1 – the original request has an index of 0). I placed this at the bottom of the auto-generated script:

    import json  # required for json.loads below

    # Convert JSON output to a dictionary
    data = chat_response.__dict__["data"]
    output = json.loads(str(data))
    
    # Print the output
    print("----------------")
    print("Summary Returned")
    print("----------------")
    print(output["chat_response"]["chat_history"][1]["message"])
    

    Here is the script in action:

  • Detecting faces and obscuring them with OCI Vision 🫣

    I’ve previously written about how to use OCI Vision to perform image classification and object/text detection:

    For my next challenge, I wanted to use the face detection capabilities within OCI Vision. However, rather than simply drawing a bounding box on the faces detected within an image (as demonstrated below), I wanted to obscure/hide any faces detected within an image – which would be useful for privacy reasons.

    I put together a script using Python with OpenCV and NumPy to achieve this, which does the following:

    1. Converts a local image on my machine to Base64 (imagepath variable)
    2. Submits this to the OCI Vision (face detection) API
    3. Returns details of all faces detected within an image
    4. Uses OpenCV and NumPy to take the normalized_vertices of the faces detected within the image (taken from the response) and obscures the faces
    5. Saves the image with the obscured faces (using the imagewritepath variable)

    Here is an example output image, with the faces obscured (the colour used can be changed).

    The script itself can be found below and on GitHub.

    To run this you’ll need to update the imagepath and imagewritepath variables; you’ll also need to include your Compartment ID within compartment_id (within the Detect faces section).

    import base64
    import oci
    import cv2
    import numpy as np
    
    imagepath = "/Users/User/Downloads/Faces.png" # path of the image to analyse
    imagewritepath = "/Users/User/Downloads/FacesHidden.png" # image to create with faces(s) hidden
     
    def get_base64_encoded_image(image_path): # encode image to Base64
        with open(image_path, "rb") as img_file:
            return base64.b64encode(img_file.read()).decode('utf-8')
    
    image = get_base64_encoded_image(imagepath)
    
    # Authenticate to OCI
    config = oci.config.from_file()
    ai_vision_client = oci.ai_vision.AIServiceVisionClient(config)
    
    # Detect faces
    analyze_image = ai_vision_client.analyze_image(
        analyze_image_details=oci.ai_vision.models.AnalyzeImageDetails(
            features=[
                oci.ai_vision.models.ImageObjectDetectionFeature(
                    max_results=10,feature_type="FACE_DETECTION")],
            image=oci.ai_vision.models.InlineImageDetails(
                source="INLINE",
                data = image),
            compartment_id="ENTER COMPARTMENT ID"))
    
    analysis = analyze_image.data
    Faces = analysis.detected_faces
    print("-Analysis complete, detected: " + str((len(Faces))) + " faces")
    
    # Used by the for loop below to decide which image to read: after the first face has been processed we must update the already-modified image rather than the original
    FaceNumber = 1
    
    # Loop through each face detected, remove and save to a new image
    for Face in Faces:
        print("-Processing face number " + str(FaceNumber))
        if FaceNumber == 1:
            # Read the image
            img = cv2.imread(imagepath)
        else:
             # Read the updated image (required if >1 faces are detected)
            img = cv2.imread(imagewritepath)   
    
        # Define the polygon vertices for the current face detected in the image
        nv = Face.bounding_polygon.normalized_vertices
        vertices = np.array([(v.x, v.y) for v in nv])
    
        # Convert the normalized vertices to pixel coordinates
        height, width = img.shape[:2]
        pixels = np.array([(int(vertex[0] * width), int(vertex[1] * height)) for vertex in vertices])
    
        # Fill the face with a solid colour
        cv2.fillPoly(img, [pixels], [255,255,255])
    
        # Save the image
        cv2.imwrite(filename=imagewritepath,img=img)
    
        # Increment the face count by 1
        FaceNumber += 1
      
    print("-Finished!")
    

    The script provides some basic logging messages to report progress: