Setup Prompt Evaluation API
Use LLUMO AI’s proprietary technology to evaluate LLM output and gain insights.
Getting Started with Prompt Evaluation API
This guide provides step-by-step instructions for setting up the Prompt Evaluation API, which evaluates your prompt outputs and gives you insights.
1. Open evaluation setup modal
The evaluation setup modal provides your essential connection details: API endpoint, API key, and sample request bodies. This ensures a smooth setup for evaluating your prompt outputs.
You should now see the evaluation setup modal. If the modal does not appear, contact our support team.
If you close the modal window, you can access it again by clicking the “Setup Now” CTA on the banner at the top of the page.
2. Access your API keys
Your API keys equip you with the credentials needed to connect to the Prompt Evaluation API. This unlocks the power to evaluate your prompt outputs and surface hidden insights without any need for ground truth.
We recommend keeping your API keys secure. If you suspect any misuse, contact our support team.
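One common way to keep the key out of your source code is to load it from an environment variable. Below is a minimal sketch assuming the key is stored in a variable named `LLUMO_API_KEY`; the variable name is illustrative, not a requirement of the API.

```python
import os

# Illustrative example: read the API key from an environment variable
# (here assumed to be named LLUMO_API_KEY) instead of hard-coding it.
LLUMO_API_KEY = os.environ["LLUMO_API_KEY"]
```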
3. Send first request to Evaluation API
Once you have a sample request body set up, you can send a request to the Evaluation API to evaluate your own prompt outputs against the selected metrics, as shown in the sketch below.
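The sketch below shows the general shape of such a request in Python. The endpoint URL, header format, and payload field names (`prompt`, `output`, `analytics`) are assumptions for illustration only; copy the actual endpoint and sample request body from your evaluation setup modal.

```python
import os
import requests

# Assumed endpoint and payload fields -- replace with the values shown
# in your evaluation setup modal.
EVAL_ENDPOINT = "https://api.llumo.ai/evaluate"
API_KEY = os.environ["LLUMO_API_KEY"]

payload = {
    "prompt": "Summarize the following text in one sentence.",
    "output": "The article explains how transformers process sequences in parallel.",
    "analytics": ["Clarity", "Confidence"],  # illustrative metric names
}

# Send the prompt/output pair for evaluation and print the returned insights.
response = requests.post(
    EVAL_ENDPOINT,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=30,
)
response.raise_for_status()
print(response.json())
```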
All Done!
Congrats! You’ve set up LLUMO’s Prompt Evaluation API, and it will now evaluate your prompt outputs with each inference!
If you encounter any difficulties while setting up, please refer to the troubleshooting section of this guide or contact our support team at connect@llumo.ai.