## Installing Necessary Libraries

This cell installs or upgrades the necessary Python packages for the project:

  • google-cloud-aiplatform: The Google Cloud AI Platform (Vertex AI) SDK, used for interacting with Vertex AI.
  • requests: A popular HTTP library for making API calls (used here for the Llumo API).

The ! at the beginning allows running shell commands in Jupyter notebooks or Google Colab.

!pip install --upgrade google-cloud-aiplatform requests

## Setting Up Vertex AI

This cell mounts your Google Drive to the Colab environment:

  • It imports the os module for operating system operations and the drive module from google.colab.
  • drive.mount() attaches your Google Drive to the ‘/content/drive’ directory in the Colab environment.
  • force_remount=True ensures that the drive is remounted even if it was previously mounted, which can help resolve connection issues.
import os
from google.colab import drive
drive.mount('/content/drive', force_remount=True)

Then set up the Google Cloud credentials:

  • It sets the GOOGLE_APPLICATION_CREDENTIALS environment variable to point to your Google Cloud service account key file.
  • The file path suggests that your credentials JSON file is stored in your Google Drive under the ‘MyDrive/vertex/’ directory.
  • This step is crucial for authenticating your script with Google Cloud services.
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = '/content/drive/MyDrive/vertex/googleCred.json'
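
Depending on your environment, you may also need to initialize the Vertex AI SDK with your project and region before calling any models. A minimal sketch (the project ID and region below are placeholders, not values from this notebook):

import vertexai
# Initialize the SDK; replace with your own Google Cloud project ID and region.
vertexai.init(project="your-project-id", location="us-central1")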

## Importing Libraries

This cell imports all necessary Python modules:

  • os: For operating system operations and environment variables.
  • requests: For making HTTP requests (will be used for Llumo API calls).
  • json: For JSON parsing and manipulation.
  • logging: For setting up logging in the script.
  • getpass: For securely inputting passwords or API keys.
  • aiplatform: The main module for interacting with Vertex AI.
  • TextGenerationModel: Specific class for text generation tasks in Vertex AI.
import os
import requests
import json
import logging
from getpass import getpass
from google.cloud import aiplatform
from vertexai.language_models import TextGenerationModel

## Setting up Basic Logging

This cell sets up logging for the script:

  • logging.basicConfig() configures the logging system with INFO level, meaning it will capture all info, warning, and error messages.
  • logger = logging.getLogger(__name__) creates a logger object. __name__ is a special Python variable that gets set to the module’s name when the module is executed.
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
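
With this configuration, any message at INFO level or above is printed to the cell output. A quick illustrative call (not part of the original notebook):

# Emits a line like "INFO:__main__:Logging configured" in the cell output.
logger.info("Logging configured")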

## Setting up Llumo API

This cell securely handles the Llumo API key:

  • It uses getpass() to prompt for the Llumo API key without displaying it on the screen as it’s typed.
  • The API key is then stored as an environment variable named ‘LLUMO_API_KEY’.
  • This approach keeps the API key secure by not hardcoding it in the script.
  • The local variable is then deleted with del so the key remains only in the environment variable.
llumo_api_key = getpass("Enter your Llumo API key: ")
os.environ['LLUMO_API_KEY'] = llumo_api_key
del llumo_api_key

## Defining the Llumo Evaluation Function

Function Definition:

  • We define a function evaluate_with_llumo that takes a prompt and output as inputs.

API Setup:

  • We retrieve the Llumo API key from environment variables.
  • We set the API endpoint and prepare headers for the HTTP request.

Payload Preparation:

  • We create a payload dictionary with the prompt, output, and predefined analytics type (“Clarity”).

API Request:

  • We use requests.post() to send a POST request to the Llumo API.
  • response.raise_for_status() will raise an exception for HTTP errors.

Response Parsing:

  • We parse the JSON response and extract the evaluation data.
  • We print the API response for debugging purposes.

Error Handling:

  • We use a try-except block to catch potential errors:

    • JSON decoding errors
    • Request exceptions
  • If an error occurs, we log it and return an empty dictionary with a failure indicator.

Return Values:

  • The function returns a tuple containing:
    • Evaluation data (or empty dictionary if evaluation failed)
    • Success boolean

This function encapsulates the entire process of interacting with the Llumo API for text evaluation, including error handling and result processing.

def evaluate_with_llumo(prompt, output):
    LLUMO_API_KEY = os.getenv("LLUMO_API_KEY")
    LLUMO_ENDPOINT = "https://app.llumo.ai/api/create-eval-analytics"
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {LLUMO_API_KEY}"
    }
    payload = {
        "prompt": prompt,
        "input": {},
        "output": output,
        "analytics": ["Clarity"]
    }

    try:
        response = requests.post(LLUMO_ENDPOINT, json=payload, headers=headers)
        response.raise_for_status()

        try:
            result = response.json()
            print("API Response:", result)
            return result.get('data', {}), True
        except json.JSONDecodeError as json_error:
            logger.error(f"Error decoding JSON: {str(json_error)}")
            return {}, False

    except requests.RequestException as e:
        logger.error(f"Error evaluating with Llumo: {str(e)}")
        return {}, False

## Getting a Response from Vertex AI

### Defining the prompt:

  • We create a prompt containing a short story and ask the model for the moral of the story. This serves as our example input for text generation.

### Sending request:

  • We use the Vertex AI SDK to send a request to the text-bison model.

  • We load the model with TextGenerationModel.from_pretrained("text-bison").

  • We call model.predict() with our prompt and a set of generation parameters (max_output_tokens, temperature, top_p, top_k).

### Displaying results:

  • We print the AI's response to the prompt.
  • The Vertex AI response does not report token usage, so we cannot print token counts here.
prompt = """
Quite a long time ago, there lived two talking birds with their parents. One day when their parents were away, a villager who generally had an eye on those special birds caught them and took them away. One of the birds got away from the trap and looked for his parents. The bird, at last, arrived at a hermitage where it was welcomed and had some food. He lived joyfully.
An explorer once ran over the other bird that the villager caught. He talked very rudely to him. He was shocked to see a talking bird but at the same time was angry with his way of behaving. The explorer visited the hermitage and recognised a similar talking bird, yet this bird talked respectfully and welcomed him to remain there.
Moral of the Story
Staying in a good company gives us a good way of behaving. Bad company influences our way of behaving negatively.

What is the moral of the story?
"""


# Load the text-bison foundation model and set the generation parameters.
model = TextGenerationModel.from_pretrained("text-bison")
parameters = {
    "max_output_tokens": 1024,
    "temperature": 0.2,
    "top_p": 0.8,
    "top_k": 40
}
# Generate a response to the prompt and print it.
response = model.predict(prompt, **parameters)
print(response.text)

## Evaluating Responses with Llumo

### Evaluate the Response:

  • We evaluate an example prompt and response pair (a photosynthesis explanation) using the Llumo evaluation function.
  • We call the evaluate_with_llumo function with the example prompt and its generated output.
  • We check whether the evaluation was successful:
    • If successful, we print the Llumo evaluation results.
    • If the evaluation fails, we print an error message.
  • The Vertex AI response generated earlier can be evaluated in the same way (see the sketch after this cell).
print("Evaluating Responses:")
prompt = "Describe the process of photosynthesis."
response = "Photosynthesis is a process used by plants and other organisms to convert light energy, usually from the sun, into chemical energy that can later be released to fuel the organisms' activities. This process is essential for the growth of plants. Photosynthesis occurs in the chloroplasts of cells, where chlorophyll absorbs light energy and converts it into chemical energy through a series of reactions. The overall equation for photosynthesis can be written as: 6CO2 + 6H2O + light energy -> C6H12O6 + 6O2. This process is vital for the production of oxygen, which is necessary for the survival of most living organisms."
eval_result, success = evaluate_with_llumo(prompt, response)

if success:
    print("Llumo Evaluation Results:")
    for analytic, score in eval_result.items():
        print(f"{analytic}: {score}")
else:
    print("Evaluation with Llumo failed.")