Evaluations
In the rapidly evolving landscape of AI, evaluating the quality of answers produced by language models is crucial. Effective evaluation ensures that the models meet desired standards and provide reliable outputs. By implementing LLM-as-a-judge metrics, we can automate the assessment process, making it both efficient and consistent. This guide will walk you through creating these metrics and running evaluations seamlessly using Dynamiq.
To create a metric, navigate to the Evaluations section in your Dynamiq portal. Follow these steps:
Go to Evaluations -> Metrics -> Create a Metric: This path will lead you to the interface where you can define new metrics.
Explore Existing Templates: Dynamiq offers a variety of metric templates such as Factual Accuracy, Completeness, Clarity and Coherence, Relevance, Language Quality, Ethical Compliance, and Originality and Creativity. These templates can serve as inspiration for crafting your prompts.
Add a New Metric: Click on the Add new metric button. You will see a form where you can specify the details of your metric.
Name: Enter a descriptive name for your metric.
Instructions: Choose from the available templates or write a custom prompt (see the sketch after this list).
LLM: Select the LLM provider and model that will be used for metric calculation. Dynamiq supports seamless integration with providers like OpenAI.
Connection: Establish a connection to the selected LLM model.
Temperature: Adjust the temperature setting to control the randomness of the model's output. Lower values (e.g., 0-0.2) generally produce more deterministic, repeatable judgments.
Create: Once all fields are filled, click the Create button to finalize your metric.
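For reference, custom instructions for an LLM-as-a-judge metric typically spell out the judging criteria, the scoring scale, and the expected output format. The sketch below is illustrative only; it is not one of Dynamiq's built-in templates, and the {{...}} placeholders stand in for the metric inputs you map later (question, answer, ground truth).

```
You are an impartial judge. Compare the submitted answer with the ground truth answer
for the given question. Rate factual accuracy on a scale from 1 to 5, where 1 means the
answer contradicts the ground truth and 5 means it is fully consistent with it.
Return the numeric score followed by a one-sentence justification.

Question: {{question}}
Answer: {{answer}}
Ground truth: {{groundTruth}}
```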
A well-prepared dataset is crucial for assessing the performance of AI workflows and ensuring that evaluation metrics capture the nuances of different answers. Here's how to create and manage your evaluation dataset in Dynamiq.
Navigate to Datasets: In the Dynamiq portal, go to the Evaluations section and select Datasets. This is where you can manage your datasets.
Add New Dataset: Click on the Add new dataset button to start creating a new dataset.
Name: Enter a descriptive name for your dataset.
Description: Provide a brief description of the dataset's purpose and contents.
Upload from File: You can upload your dataset in JSON format. Click on the upload area or drag and drop your JSON file. If you need a reference, download the Sample JSON to see the required format.
JSON Structure: Your JSON file should include the inputs your workflows expect (such as a context and a question) along with the expected output. Here's an example structure; it is illustrative and assumes the context, question, and groundTruthAnswer fields used later in this guide, so treat the downloadable Sample JSON as the authoritative format:
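```json
[
  {
    "context": "Paris is the capital and largest city of France.",
    "question": "What is the capital of France?",
    "groundTruthAnswer": "The capital of France is Paris."
  },
  {
    "context": "Water boils at 100 degrees Celsius at standard sea-level pressure.",
    "question": "At what temperature does water boil at sea level?",
    "groundTruthAnswer": "Water boils at 100 degrees Celsius at sea level."
  }
]
```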
Create: Once your file is uploaded, click the Create button to finalize your dataset.
After uploading, you can review your dataset entries:
Dataset Overview: View the dataset's version, creator, and last edited details.
Dataset Entries: Examine each entry's context, question, and ground truth answer to ensure accuracy and completeness.
Upload New Version: If updates are needed, you can upload a new version of your dataset.
By following these steps, you can create a comprehensive dataset that will enhance the evaluation process, ensuring your AI workflows are thoroughly tested and validated.
To demonstrate the effectiveness of LLM-as-a-judge metrics, we'll create two workflows: one that generates accurate answers and another that produces answers with mistakes. This will highlight how metrics can differentiate between high-quality and low-quality outputs.
Prompt 1: Accurate Assistant: instructs the model to answer the question using only the provided context.
Prompt 2: Inaccurate Assistant: instructs the model to deliberately introduce factual mistakes into its answers.
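For illustration, the two prompt templates might look like the sketch below. The wording and the {{...}} placeholder syntax are assumptions rather than Dynamiq's exact templates; adapt them to your own use case.

```
Prompt 1 (Accurate Assistant):
You are a helpful assistant. Answer the question using only the information in the
provided context. If the context does not contain the answer, say so.
Context: {{context}}
Question: {{question}}

Prompt 2 (Inaccurate Assistant):
You are a deliberately unreliable assistant used to test evaluation metrics. Answer
the question, but introduce at least one factual error into your answer.
Context: {{context}}
Question: {{question}}
```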
Navigate to Workflows: In the Dynamiq portal, go to the Workflows section.
Create New Workflow: Click on the Create button to start a new workflow.
Configure Workflow:
Name: Give each workflow a descriptive name (e.g., "accurate-workflow" and "inaccurate-workflow").
Prompt: Use the templates provided above for each workflow.
LLM Selection: Choose the appropriate LLM provider and model for generating responses.
Deploy Workflows: Once configured, deploy the workflows to start generating answers based on the provided prompts.
By setting up these workflows, you can clearly see how LLM-as-a-judge metrics can distinguish between accurate and inaccurate responses, showcasing their power in evaluating AI-generated content.
Now that we have our workflows and metrics set up, it's time to create an evaluation run. This will allow us to assess the performance of our workflows using the metrics we previously defined.
Navigate to Evaluations: In the Dynamiq portal, go to the Evaluations section.
Create New Evaluation Run: Click on the New Evaluation Run button to start setting up your evaluation.
Configure Evaluation Run:
Name: Enter a descriptive name for your evaluation run.
Dataset: Select the dataset you prepared earlier. Ensure you choose the correct version.
Add Workflows:
Click on Add workflow.
Select the workflows you want to evaluate (e.g., "accurate-workflow" and "inaccurate-workflow").
Choose the appropriate workflow version.
Input Mappings:
Map the dataset fields to the workflow inputs. For example:
Context: Map to $.dataset.context
Question: Map to $.dataset.question
Add Metrics:
Click on Add metric.
Select the metrics you want to use for evaluation (e.g., FactualAccuracy, Completeness).
Map the metric inputs to the appropriate fields:
Question: Map to $.dataset.question
Answer: Map to $.workflow.answer
Ground Truth: Map to $.dataset.groundTruthAnswer
Create Evaluation Run: Once all configurations are set, click the Create button to initiate the evaluation run.
Once you've set up your evaluation run, you can quickly assess the performance of your workflows using the metrics. Here's how to execute and review an evaluation run.
Initiate Evaluation Run: After configuring your evaluation settings, click Create to start the evaluation job. The system will begin processing the workflows with the selected metrics.
Monitor Evaluation Status: In the Evaluations section, you can see the status of your evaluation runs. It will initially show as "Running" and change to "Succeeded" once completed.
Review Results: Once the evaluation is complete, you can review the answers and their corresponding metrics.
Evaluation Runs Overview: The main screen will list all evaluation runs, showing their names, statuses, and creators. Successful runs will be marked as "Succeeded."
Detailed Results: Click on an evaluation run to see detailed results. You'll find:
Context and Question: The input data used for generating answers.
Ground Truth Answer: The correct answer for comparison.
Workflow Outputs: Answers generated by each workflow version.
Metrics Scores: Scores for each metric, such as Clarity and Coherence, Ethical Compliance, Language Quality, and Factual Accuracy.
By running and reviewing evaluation runs, you can effectively measure the quality of your workflows. This process provides valuable insights into how well your workflows perform and where improvements can be made, ensuring high-quality outputs from your AI systems.