• Supercharge the OCI CLI using Interactive Mode 🏎️

    The OCI CLI is a fantastic tool that makes administering OCI a breeze! But with so many different commands and parameters it can sometimes be a little unwieldy 😩.

    I recently discovered that the OCI CLI has an interactive mode, which greatly simplifies using it – for me this has meant less time with my head stuck in the documentation and more time actually getting things done!

    Using interactive mode is a breeze: you simply launch it using oci -i.

    Once you’ve done this, start typing the name of the command you wish to use, and it will provide auto-complete suggestions. In the example below I typed ai, and it suggested the relevant AI services in OCI that I can interact with.

    If I then select Vision for example, it provides a full list of all the actions available.

    If I wanted to use OCI AI Vision to analyse an image stored within object storage, I select the appropriate command (analyze-image-object-storage-image-details).

    It then provides details of all the parameters (those that are required are denoted by a *). I can then build up my command and run it… how cool!

    Hopefully this helps you to save as much time as it did me 😎.

  • Detect anomalies in data using Oracle Accelerated Data Science (ADS) 🧑‍🔬

    Some time ago I wrote about issues with the reliability of my Internet connection – The Joys of Unreliable Internet. One thing that came out of this was a script that I run every 30 minutes via a cron job on my Raspberry Pi, which checks the download speed of my Internet connection and writes this to a CSV file 🏃.

    My Internet has been super-stable since I wrote this script – typical, eh! However, I have a lot of data collected, so I thought that I’d attempt to detect anomalies in it – for example, does my Internet slow down on specific days/times 🌐?

    Here is what the CSV file that I capture the data in looks like – there is a column for the datetime and one for the download speed.
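    For reference, the part of the script that appends each reading to the CSV file might look something like this – a minimal sketch, assuming a log_speed() helper and a day/month/year timestamp format (the actual script and column format aren’t shown here):

```python
import csv
from datetime import datetime

def log_speed(csv_path, speed_mbps):
    """Append a timestamped download-speed reading to the CSV file."""
    with open(csv_path, "a", newline="") as f:
        # Timestamp format is an assumption; adjust to match your own file
        csv.writer(f).writerow([datetime.now().strftime("%d/%m/%Y %H:%M"), speed_mbps])

# Example: log a hypothetical 74.2 Mbps reading
log_speed("Speedtest.csv", 74.2)
```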

    I noticed that the Oracle Accelerated Data Science (or ADS for short) Python module can detect anomalies in time series data, so it would be perfect for analysing my Internet speed data.

    The module can be installed using the following command:

    python3 -m pip install "oracle_ads[anomaly]"
    

    Once installed, you can run the following to initialize a job:

    ads operator init -t anomaly
    

    This creates a folder within the current directory named anomaly with all of the files required to perform anomaly detection. I copied the CSV file with my Internet speed data into this folder (Speedtest.csv).

    I then opened the anomaly.yaml file within this directory – this contains the configuration settings for the job.

    I updated the template anomaly.yaml file as follows:

    I did the following:

    • Specified the name of the datetime column (Date)
    • Specified the target column, which includes the data points to be analysed (Speed)
    • Set the location of the file containing the data to analyse (Speedtest.csv)
    • Specified the format of the datetime column (using standard Python notation) – full documentation on this can be found here.
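    Putting those edits together, my anomaly.yaml ended up looking roughly like this – a sketch based on the operator template, so treat the exact keys and the datetime format as assumptions:

```yaml
kind: operator
type: anomaly
version: v1
spec:
  datetime_column:
    name: Date
    format: "%d/%m/%Y %H:%M"  # standard Python strptime notation (format assumed)
  target_column: Speed
  input_data:
    url: Speedtest.csv
```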

    I saved anomaly.yaml and then ran the following command to run the anomaly detection job:

    ads operator run -f anomaly.yaml
    

    Top tip – if you are running this on macOS and receive an SSL error, you’ll likely need to run Install Certificates.command, which can be found within the Python folder within Applications.

    The job took a few seconds to run (I only had 200KB of data to analyse). It created a results folder within the anomaly folder, which contained two files – a report in HTML format and a CSV file containing details of all of the anomalies detected.

    The report looks like this (the red dots are the anomalies detected).

    Full details of all anomalies can be found in the outliers.csv file; this also contains a score (the higher the number, the more severe the anomaly).

    This identified several days (along with the timeslots) where my Internet speed varied significantly from the average 📉.

    I’ll probably run this again in a few months to see if I can spot any patterns, such as specific days or timeslots where the download speed varies from the norm.
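    As a head start on that, the anomaly timestamps from outliers.csv can be bucketed by weekday with a few lines of Python – a sketch, assuming the timestamps are day/month/year strings (the actual column layout of outliers.csv may differ):

```python
from collections import Counter
from datetime import datetime

def anomalies_by_weekday(timestamps, fmt="%d/%m/%Y %H:%M"):
    """Count how many anomalous readings fall on each weekday."""
    return Counter(datetime.strptime(ts, fmt).strftime("%A") for ts in timestamps)

# Example with made-up anomaly timestamps
counts = anomalies_by_weekday(["01/07/2024 09:00", "08/07/2024 09:30", "02/07/2024 18:00"])
print(counts.most_common())  # → [('Monday', 2), ('Tuesday', 1)]
```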

    Hope you all have as much fun as I did anomaly detecting! 🔎

  • Assessing the security posture of an OCI tenant 🔒

    I previously wrote about how ShowOCI can be used to automagically document the configuration of an OCI tenant.

    My next top tip is to run the OCI Security Health Check against your tenant. This tool compares the configuration of a tenant against the CIS OCI Foundations Benchmark and reports any issues that require remediation 🔍.

    In today’s risky world, where security breaches are a regular occurrence, it’s critical to assess your security posture on a regular basis and perform any required remediation to stay a step ahead of the attackers – the OCI Security Health Check makes this a lot simpler (for your OCI workloads at least 😉).

    I ran this against my test tenancy using the Cloud Shell (it can also be run from a compute instance), with the following commands:

    Step 1 – Download and Unzip the Assessment Scripts ⬇️

    wget https://github.com/oracle-devrel/technology-engineering/raw/main/security/security-design/shared-assets/oci-security-health-check-standard/files/resources/oci-security-health-check-standard-260105.zip
    unzip oci-security-health-check-standard-260105.zip
    

    Should this link not work (e.g. if the assessment has been updated), the folder within the repo that should contain the Zip file can be found here.

    Step 2 – Run the Assessment 🏃

    cd oci-security-health-check-standard-260105
    chmod +x standard.sh
    ./standard.sh
    

    Step 3 – Inspect the Findings 🔎

    Within the directory that the script is run from, a folder is created that stores the output of the assessment:

    In my case this was brendankgriffin_20240712102613_standard. This directory contained the following files:

    To view these, I transferred them to my local machine using the download option within the Cloud Console.

    The tool provides instructions on how to download a Zipped copy of the assessment output – this is presented when the assessment tool finishes.

    I could then open the Zip file to review the findings; the first file I opened was standard_cis_html_summary_report.html, which contains a summary of the findings of the assessment.

    It didn’t take too much scrolling to start to see some red! ⛔️

    Clicking into the identifier of a finding (e.g. 6.2) provides additional background and context, which is useful for understanding the finding in greater detail and helping with remediation planning.

    Each finding includes a link to the respective CSV file, where you can get additional details on the affected resources/configurations – below you can see a list of the resources that I naughtily created in the root compartment 🤦‍♂️.

    My recommendation would be to run the Security Assessment regularly (e.g. monthly) to proactively identify and resolve any security issues.

    That’s all for now 👋.

  • Documenting an OCI tenant using ShowOCI 📜

    I was speaking to a customer recently who wanted to document the resources within their OCI tenancy. OCI Tenancy Explorer provides details of the services deployed within an OCI tenant, however there isn’t any way to export the information that this presents – ShowOCI to the rescue!

    ShowOCI is a reporting tool which uses the Python SDK to extract a list of resources from a tenant. Output can be printer-friendly text, CSV files or JSON files – this is an ideal way to document the resources deployed and configuration within an OCI tenancy.

    This could potentially also be used as a low-tech way to do drift-detection, e.g. take a baseline, and compare over time to detect any drift.
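    For example, a baseline export could be diffed against a later one with a few lines of Python – a sketch, and the display_name column is an assumption (check the headers of the CSV files ShowOCI actually produces):

```python
import csv

def diff_resources(baseline_csv, current_csv, key="display_name"):
    """Report resources added or removed between two ShowOCI CSV exports."""
    def names(path):
        with open(path, newline="") as f:
            return {row[key] for row in csv.DictReader(f)}
    base, cur = names(baseline_csv), names(current_csv)
    return {"added": cur - base, "removed": base - cur}
```

    Running this against two snapshots of MyTenant_all_resources.csv taken at different times would surface anything created or deleted in between.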

    ShowOCI can be executed directly from within the Cloud Shell making it simple and quick to run 🏃.

    To execute ShowOCI from within a Cloud Shell (using an account with administrative permissions to the tenant), run the following commands (taken from here):

    Step 1 – Clone from OCI Python SDK Repo and Create a symbolic link

    git clone https://github.com/oracle/oci-python-sdk
    ln -s oci-python-sdk/examples/showoci .
    

    Step 2 – Change Dir to ShowOCI

    cd showoci
    

    Step 3 – Run ShowOCI: Outputting all resources to CSV files prefixed with “MyTenant”

    python3 showoci.py -a -csv MyTenant
    

    There are numerous other options for running ShowOCI – for example, you can get it to only include specific resource types such as compute. Some of these are demonstrated here, and all options are presented when running showoci.py without any parameters.

    After the script had run, I had a number of CSV files within the showoci directory that contain details of my tenant.

    MyTenant_all_resources.csv contains high-level details of all resources within the tenant analysed (not all columns are shown):

    There is also a separate CSV file for each type of resource that provides further details, below is an excerpt from MyTenant_compute.csv which shows all of my compute instances (not all columns are shown).

    Happy tenant reporting!

  • Calling the Oracle Generative AI Service with Python 🐍

    Oracle recently enabled the Generative AI Service within the Frankfurt region in EMEA. I’ve been playing around with this using the Chat Playground, which provides an easy way to experiment with the GenAI capabilities without writing any code. It’s also possible to tweak various parameters such as output tokens and temperature, which is fantastic for quick and dirty experimentation within a browser 🧪.

    In the example below, I used the service to summarise a Blog post into bullet points:

    One super-useful feature included within the playground is the ability to generate code (Python or Java) to call the service – as a wannabe coder, anything that saves me time and brain power is always welcome 🧠.

    If you click the View Code button, you can view the code that was auto-generated based on the request that was created using the playground.

    I took this code and ran it within my test tenant; however, I ran into a small issue parsing the output. Here is an example of the output that is written to the console by the script:

    I wanted to simply output the text generated by the request to the GenAI service rather than all of the other information returned (such as the input request and headers).

    To do this, I ran the following commands to convert the output from JSON to a Python dictionary and then print out the output message (which has an index of 1 – the original request has an index of 0). I placed this at the bottom of the auto-generated script.

    import json  # add this if the auto-generated script doesn't already import it

    # Convert JSON output to a dictionary
    data = chat_response.__dict__["data"]
    output = json.loads(str(data))

    # Print the output
    print("----------------")
    print("Summary Returned")
    print("----------------")
    print(output["chat_response"]["chat_history"][1]["message"])
    

    Here is the script in action:

  • Detecting faces and obscuring them with OCI Vision 🫣

    I’ve previously written about how to use OCI Vision to perform image classification and object/text detection:

    For my next challenge, I wanted to use the face detection capabilities within OCI Vision, however rather than simply drawing a bounding box on the faces detected within an image (as demonstrated below) I wanted to obscure/hide any faces detected within an image – which would be useful for privacy reasons.

    I put together a script using Python with OpenCV and NumPy to achieve this, which does the following:

    1. Converts a local image on my machine to Base64 (imagepath variable)
    2. Submits this to the OCI Vision (face detection) API
    3. Returns details of all faces detected within an image
    4. Uses OpenCV and NumPy to take the normalized_vertices of the faces detected within the image (taken from the response) and obscures the faces
    5. Saves the image with the obscured faces (using the imagewritepath variable)

    Here is an example output image, with the faces obscured (the colour used can be changed).

    The script itself can be found below and on GitHub.

    To run this you’ll need to update the imagepath and imagewritepath variables; you’ll also need to include your Compartment ID within compartment_id (within the Detect faces section).

    import base64
    import oci
    import cv2
    import numpy as np
    
    imagepath = "/Users/User/Downloads/Faces.png" # path of the image to analyse
    imagewritepath = "/Users/User/Downloads/FacesHidden.png" # image to create with face(s) hidden
     
    def get_base64_encoded_image(image_path): # encode image to Base64
        with open(image_path, "rb") as img_file:
            return base64.b64encode(img_file.read()).decode('utf-8')
    
    image = get_base64_encoded_image(imagepath)
    
    # Authenticate to OCI
    config = oci.config.from_file()
    ai_vision_client = oci.ai_vision.AIServiceVisionClient(config)
    
    # Detect faces
    analyze_image = ai_vision_client.analyze_image(
        analyze_image_details=oci.ai_vision.models.AnalyzeImageDetails(
            features=[
                oci.ai_vision.models.ImageObjectDetectionFeature(
                    max_results=10,feature_type="FACE_DETECTION")],
            image=oci.ai_vision.models.InlineImageDetails(
                source="INLINE",
                data = image),
            compartment_id="ENTER COMPARTMENT ID"))
    
    analysis = analyze_image.data
    Faces = analysis.detected_faces
    print("-Analysis complete, detected: " + str(len(Faces)) + " faces")
    
    # Used by the for loop below to decide which image to read: if more than one face is processed we must update the already-updated image rather than the original
    FaceNumber = 1
    
    # Loop through each face detected, remove and save to a new image
    for Face in Faces:
        print("-Processing face number " + str(FaceNumber))
        if FaceNumber == 1:
            # Read the image
            img = cv2.imread(imagepath)
        else:
             # Read the updated image (required if >1 faces are detected)
            img = cv2.imread(imagewritepath)   
    
        # Define the polygon vertices for the current face
        vertices = np.array([(v.x, v.y) for v in Face.bounding_polygon.normalized_vertices])
    
        # Convert the normalized vertices to pixel coordinates
        height, width = img.shape[:2]
        pixels = np.array([(int(vertex[0] * width), int(vertex[1] * height)) for vertex in vertices])
    
        # Fill the face with a solid colour
        cv2.fillPoly(img, [pixels], [255,255,255])
    
        # Save the image
        cv2.imwrite(filename=imagewritepath,img=img)
    
        # Increment the face count by 1
        FaceNumber += 1
      
    print("-Finished!")
    

    The script provides some basic logging messages to report progress:

  • Creating a Dataset in OCI Data Labeling fails with “Content-Type Validation failed” error

    I ran into an issue recently with OCI Data Labeling – Generate records was failing with the following error: Content-Type Validation Failed ❌.

    I was using data labeling to label some images that I was going to use to train a custom AI vision classification model.

    In my specific case, the dataset comprised images stored in an OCI Object Storage bucket. I had uploaded the images to the bucket using the OCI CLI, specifically the following command which uploaded all files within a specific directory on my local machine to a named bucket:

    oci os object bulk-upload --bucket-name Pneumonia-Images --src-dir "/Users/bkgriffi/OneDrive/Development/train/PNEUMONIA"
    

    A helpful colleague advised me to manually set the content type at upload to image/jpeg; below is the updated command that I ran, which uploads the images and sets the correct content type.

    oci os object bulk-upload --bucket-name Pneumonia-Images --src-dir "/Users/bkgriffi/OneDrive/Development/train/PNEUMONIA" --overwrite --content-type image/jpeg
    

    Once I’d done this, records generated successfully ✅.

  • Copying files between Azure Blob Storage and OCI Object Storage using Rclone

    For an upcoming AI demo, I needed to demonstrate moving some images from Azure Blob Storage to OCI Object Storage so that they could be used to train a custom model with OCI AI Vision. I was looking for a nice demo-friendly way to automate this and stumbled across Rclone.

    Rclone is a command-line program to manage files on cloud storage. It is a feature-rich alternative to cloud vendors’ web storage interfaces. Over 70 cloud storage products support rclone including S3 object stores, business & consumer file storage services, as well as standard transfer protocols.

    Taken from: https://rclone.org/

    Within a few minutes I was able to configure Rclone to connect to Azure Blob Storage and OCI Object Storage and was copying files between the two with a single command 😮.

    To get started I installed Rclone using the instructions here – for macOS, this was as simple as running:

    sudo -v ; curl https://rclone.org/install.sh | sudo bash
    

    Once I’d installed it, I typed rclone config and then n to create a new remote. This walked me through the process of creating a connection – including selecting the type of storage (there are 55 to choose from, including OCI and Azure), the storage account to connect to and how to authenticate. I did this for OCI and then repeated the process for Azure.

    In terms of authentication, I selected option 2 for OCI, which uses my OCI config file within ~/.oci/config; more details on how to create a config file can be found here.

    For Azure I opted to use the access key to authenticate to the storage account:

    Once I’d created the connections, I could inspect the configuration file that Rclone had created – the location of which can be found by running rclone config file.

    Below is the contents of the configuration file that I have 📄.
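    It looked something like the following, shown with placeholder values rather than my real identifiers – the exact options recorded depend on the choices made during rclone config:

```ini
[OCI]
type = oracleobjectstorage
provider = user_principal_auth
namespace = my-namespace
compartment = ocid1.compartment.oc1..example
region = uk-london-1

[Azure]
type = azureblob
account = mystorageaccount
key = <storage-account-access-key>
```

    With remotes defined, commands such as rclone lsd OCI: (list buckets) or rclone ls Azure:images (list objects) can be used to check each connection.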

    I could view the contents of each of the storage accounts (OCI and Azure are the names that I gave the respective configurations, which need to be used with the command):

    Contents of OCI storage account

    Contents of Azure storage account

    Finally, I ran the command below to copy the content of the images directory (container) within Azure to the Images bucket within OCI.

    rclone copy Azure:images OCI:Images --progress
    

    Here’s a short video of it in action:

  • Creating an AI Vision Model in OCI that can detect brain tumours 🧠

    Here’s a short walkthrough video of how to create an AI Vision model in OCI that can analyse a brain scan and detect brain tumours ๐Ÿ”Ž.

    The images I used to train the model can be found here – https://www.kaggle.com/datasets/navoneel/brain-mri-images-for-brain-tumor-detection/data

    The script I used to bulk label the images uploaded to the object storage bucket can be found here – https://github.com/oracle-samples/oci-data-science-ai-samples/tree/main/data_labeling_examples/bulk_labeling_python

  • Using PowerShell to upload files to OCI Object Storage 🪣

    With my new focus on all things Oracle Cloud Infrastructure (OCI) I’ve not been giving PowerShell much love recently.

    I knew that PowerShell modules for OCI were available, however hadn’t had an excuse to use them until somebody asked me how they could use PowerShell with OCI Object Storage 🪣.

    Fortunately, the OCI Modules for PowerShell are feature-rich and well documented 💪.

    To get started you can run the following to install all of the modules (as I did):

    Install-Module OCI.PSModules
    

    If you’d prefer to only install the specific modules that you need, this can be done by running the following – replacing ServiceName with the name of the service whose module you’d like to install. The ServiceName for each service can be found within the Cmdlet reference.

    Install-Module OCI.PSModules.<ServiceName>
    

    Before you use the PowerShell modules, you’ll need to ensure that you have an OCI configuration file (which is used for authentication); instructions on creating one can be found here.

    In my first example, I’m going to use the OCI PowerShell Module for Object Storage to upload a file to a storage bucket. Prior to running this command, I need to know the Namespace for Object Storage within my tenant, as the Cmdlet requires this. It is listed within the details page for each storage bucket (as highlighted below):

    Once I had this, I ran the following to upload the file named DemoObject.rtf to the bucket named data, within the namespace I obtained above.

    Write-OCIObjectstorageObject -bucketname "data" -NamespaceName "lrdkvqz1i7f7" -ObjectName "DemoObject.rtf" -PutObjectBodyFromFile "/Users/bkgriffi/Downloads/DemoObject.rtf"
    

    One point to note is that I’m running this on a Mac; if you are running on Windows, you’ll need to use the correct file path format.

    Once I’d run the command, I could see the uploaded file within the OCI Console:

    In the more advanced example below, the script loops through a specific folder (set by the $Folder variable) and uploads all files within it to the data bucket.

    $Folder = "/Users/bkgriffi/OneDrive/Development/Folder"
    $Files = Get-ChildItem -Path $Folder
    
    Foreach ($File in $Files) {
        Write-OCIObjectstorageObject -bucketname "data" -NamespaceName "lrdkvqz1i7f7" `
        -ObjectName $File.Name -PutObjectBodyFromFile ($Folder + "/" + $File.Name)
    }
    

    If your configuration file isn’t in the default location, you will also need to specify -ConfigFile and the path to the file within the command.

    A full reference for the Cmdlet used (Write-OCIObjectstorageObject) can be found here.