Use LLUMO AI’s proprietary technology to evaluate output from Google Gemini models.
## Installing Necessary Libraries

This cell installs or upgrades the necessary Python packages for the project. The `!` at the beginning allows running shell commands in Jupyter notebooks or Google Colab.
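As a sketch, the install cell might look like the following (the exact package list is an assumption based on the imports used later in the notebook):

```shell
# Install or upgrade the packages used in this notebook.
# The leading "!" runs the line as a shell command from a notebook cell.
!pip install --upgrade google-cloud-aiplatform requests
```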
## Setting Up VertexAI

This cell mounts your Google Drive to the Colab environment. It imports the `os` module for operating system operations and the `drive` module from `google.colab`. `drive.mount()` attaches your Google Drive to the `/content/drive` directory in the Colab environment, and `force_remount=True` ensures that the drive is remounted even if it was previously mounted, which can help resolve connection issues.

Then set up the Google Cloud credentials:
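A minimal sketch of this setup cell (the key-file path is a placeholder; point it at wherever your service-account key actually lives in Drive):

```python
import os

# Mount Google Drive; google.colab is only available inside Colab,
# so the import is guarded for readers running this elsewhere.
try:
    from google.colab import drive
    drive.mount("/content/drive", force_remount=True)
except ImportError:
    pass  # not running in Colab

# Point Google Cloud authentication at the service-account key file.
# The path below is a placeholder -- substitute your own key file.
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = (
    "/content/drive/MyDrive/service-account-key.json"
)
```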
Set the `GOOGLE_APPLICATION_CREDENTIALS` environment variable to point to your Google Cloud service account key file.

## Importing Libraries

This cell imports all necessary Python modules:
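The imports described below might be written as follows (the Vertex AI imports are guarded so the sketch can be read without the SDK installed; `vertexai.language_models` is the import path in recent versions of the google-cloud-aiplatform package):

```python
import os
import json
import logging
from getpass import getpass

import requests  # HTTP requests for the Llumo API calls

# Vertex AI imports require the google-cloud-aiplatform package.
try:
    from google.cloud import aiplatform
    from vertexai.language_models import TextGenerationModel
except ImportError:
    aiplatform = TextGenerationModel = None  # SDK not installed here
```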
- `os`: For operating system operations and environment variables.
- `requests`: For making HTTP requests (used for the Llumo API calls).
- `json`: For JSON parsing and manipulation.
- `logging`: For setting up logging in the script.
- `getpass`: For securely inputting passwords or API keys.
- `aiplatform`: The main module for interacting with Vertex AI.
- `TextGenerationModel`: The class for text generation tasks in Vertex AI.

## Setting up Basic Logging

This cell sets up logging for the script:
`logging.basicConfig()` configures the logging system at the INFO level, meaning it will capture all info, warning, and error messages. `logger = logging.getLogger(__name__)` creates a logger object; `__name__` is a special Python variable that is set to the module's name when the module is executed.

## Setting up Llumo API

This cell securely handles the Llumo API key:
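A sketch of this cell is below. The endpoint URL and payload field names are placeholders (assumptions, not the documented Llumo API); check Llumo's documentation for the real values. The key prompt is wrapped in a helper so the evaluation function can take the key as a parameter:

```python
import logging
from getpass import getpass

import requests

logger = logging.getLogger(__name__)

# Placeholder endpoint -- consult the Llumo documentation for the real URL.
LLUMO_ENDPOINT = "https://api.llumo.ai/evaluate"

def get_llumo_api_key() -> str:
    # getpass() prompts without echoing the key to the screen.
    return getpass("Enter your Llumo API key: ")

def evaluate_with_llumo(prompt: str, output: str, api_key: str):
    """Ask Llumo to evaluate `output` with the predefined "Clarity" analytics type."""
    payload = {"prompt": prompt, "output": output, "analytics": ["Clarity"]}
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    try:
        response = requests.post(
            LLUMO_ENDPOINT, json=payload, headers=headers, timeout=30
        )
        response.raise_for_status()  # raises for HTTP 4xx/5xx
        return response.json(), True
    except requests.RequestException as exc:
        logger.error("Llumo evaluation failed: %s", exc)
        return {}, False  # empty result with a failure indicator
```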
It uses `getpass()` to prompt for the Llumo API key without displaying it on the screen as it is typed. It then defines a function, `evaluate_with_llumo`, that takes a `prompt` and an `output` as inputs and builds a request payload containing the `prompt`, the `output`, and a predefined analytics type ("Clarity"). The function uses `requests.post()` to send a POST request to the Llumo API, and `response.raise_for_status()` raises an exception for HTTP errors. A try-except block catches potential errors: if an error occurs, we log it and return an empty dictionary with a failure indicator. This function encapsulates the entire process of interacting with the Llumo API for text evaluation, including error handling and result processing.
## Getting Response from VertexAI

### Defining the prompt:

### Sending request:

We use the VertexAI client to send a request to the text-bison model. The `messages` parameter follows the chat format:

- A system message sets the AI's role.
- A user message contains our prompt.
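A sketch of the request, assuming the `TextGenerationModel` API from the google-cloud-aiplatform SDK (`text-bison` is a text-completion model, so this sketch passes a single prompt string; the exact call shape varies by SDK version). The import is kept inside the function so the sketch can be read without the SDK installed:

```python
def generate_response(prompt: str) -> str:
    # Requires the google-cloud-aiplatform SDK and valid credentials.
    from vertexai.language_models import TextGenerationModel

    model = TextGenerationModel.from_pretrained("text-bison")
    response = model.predict(prompt, temperature=0.2, max_output_tokens=256)
    return response.text

# Example prompt (placeholder).
prompt = "Explain the benefits of renewable energy in two sentences."
```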
### Displaying results:
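Finally, a sketch of displaying the model output together with its Llumo evaluation (the variable values and the evaluation dictionary's shape are placeholders, not the real schema):

```python
import json

# Placeholders standing in for the real values produced by the earlier cells.
output = "Renewable energy reduces emissions and lowers long-term costs."
evaluation = {"Clarity": {"score": 85}}  # assumed shape, not the real schema

print("Model output:")
print(output)
print("Llumo evaluation:")
print(json.dumps(evaluation, indent=2))
```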