Evaluation Runs
Now that we have our workflows and metrics set up, it's time to create an evaluation run. This will allow us to assess the performance of our workflows using the metrics we previously defined.
Navigate to Evaluations:
In the Dynamiq portal, go to the Evaluations section.
Create New Evaluation Run:
Click on the New Evaluation Run button to start setting up your evaluation.
Configure Evaluation Run:
Name: Enter a descriptive name for your evaluation run.
Dataset: Select the dataset you prepared earlier. Ensure you choose the correct version.
Add Workflows:
Click on Add Workflow.
Select the workflows you want to evaluate (e.g., "accurate-workflow" and "inaccurate-workflow").
Choose the appropriate workflow version.
Input Mappings:
Map the dataset fields to the workflow inputs. For example (see the sketch after this list):
Context: Map to $.dataset.context
Question: Map to $.dataset.question
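Conceptually, each mapping is a JSONPath-style reference that is resolved against the current dataset row before the workflow is invoked. The sketch below is a hypothetical illustration of that idea only; the resolve helper and the data layout are assumptions, not Dynamiq's internal API.

```python
# Hypothetical illustration of how "$.dataset.*" mappings resolve to workflow inputs.
# The helper and data layout are assumed for illustration, not Dynamiq's actual API.

dataset_row = {
    "context": "Dynamiq is a platform for building AI workflows.",
    "question": "What is Dynamiq?",
    "groundTruthAnswer": "A platform for building AI workflows.",
}

# Input mappings as configured in the portal.
input_mappings = {
    "context": "$.dataset.context",
    "question": "$.dataset.question",
}

def resolve(path: str, dataset_row: dict, workflow_output: dict | None = None) -> str:
    """Resolve a '$.<source>.<field>' reference against the available data."""
    _, source, field = path.split(".", 2)
    data = {"dataset": dataset_row, "workflow": workflow_output or {}}[source]
    return data[field]

workflow_inputs = {name: resolve(path, dataset_row) for name, path in input_mappings.items()}
print(workflow_inputs)  # {'context': 'Dynamiq is ...', 'question': 'What is Dynamiq?'}
```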
Add Metrics:
Click on Add Metric.
Select the metrics you want to use for evaluation (e.g., Factual Accuracy, Completeness).
Map the metric inputs to the appropriate fields (see the sketch after this list):
Question: Map to $.dataset.question
Answer: Map to $.workflow.answer
Ground Truth: Map to $.dataset.groundTruthAnswer
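Metric mappings work the same way, except that the generated answer comes from the workflow output rather than the dataset. The following is a hypothetical sketch of how the three metric inputs could be assembled for one evaluation row; the field names and data structures are assumptions for illustration.

```python
# Hypothetical sketch: assembling metric inputs for a single evaluation row.
# The mapping keys mirror the portal configuration; the data layout is assumed.

metric_mappings = {
    "question": "$.dataset.question",
    "answer": "$.workflow.answer",
    "ground_truth": "$.dataset.groundTruthAnswer",
}

dataset_row = {
    "question": "What is Dynamiq?",
    "groundTruthAnswer": "A platform for building AI workflows.",
}
workflow_output = {"answer": "Dynamiq is a platform for building and evaluating AI workflows."}

def resolve(path: str) -> str:
    """Pick the value from the dataset row or the workflow output, based on the path."""
    _, source, field = path.split(".", 2)
    return {"dataset": dataset_row, "workflow": workflow_output}[source][field]

metric_inputs = {name: resolve(path) for name, path in metric_mappings.items()}
# metric_inputs now holds the question, generated answer, and ground truth,
# ready to be scored by metrics such as Factual Accuracy or Completeness.
```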
Create Evaluation Run:
Once all configurations are set, click the Create button to initiate the evaluation run.
After setting up your evaluation run, you can quickly assess the performance of your workflows using the selected metrics. Here’s how to execute and review an evaluation run.
Initiate Evaluation Run:
After configuring your evaluation settings, click Create to start the evaluation job. The system will begin processing the workflows with the selected metrics.
Monitor Evaluation Status:
In the Evaluations section, you can see the status of your evaluation runs. The status will initially show as "Running" and will change to "Succeeded" once completed.
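If you prefer to check progress programmatically rather than refreshing the page, the pattern is a simple polling loop. The sketch below assumes a placeholder get_evaluation_run_status function; it is not a documented Dynamiq call and should be replaced with whatever API or SDK access your setup exposes.

```python
import time

def get_evaluation_run_status(run_id: str) -> str:
    """Placeholder: replace with your actual API/SDK call for run status."""
    raise NotImplementedError

def wait_for_run(run_id: str, poll_seconds: int = 15, timeout_seconds: int = 3600) -> str:
    """Poll until the run leaves the 'Running' state or the timeout is reached."""
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        status = get_evaluation_run_status(run_id)
        if status != "Running":
            return status  # e.g. "Succeeded" or "Failed"
        time.sleep(poll_seconds)
    raise TimeoutError(f"Evaluation run {run_id} did not finish within {timeout_seconds}s")
```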
Review Results:
Once the evaluation is complete, you can review the answers and their corresponding metrics.
Evaluation Runs Overview: The main screen will list all evaluation runs, showing their names, statuses, and creators. Successful runs will be marked as "Succeeded."
Detailed Results: Click on an evaluation run to see detailed results (a comparison sketch follows this list). You will find:
Context and Question: The input data used for generating answers.
Ground Truth Answer: The correct answer for comparison.
Workflow Outputs: Answers generated by each workflow version.
Metrics Scores: Scores for each metric, such as Clarity and Coherence, Ethical Compliance, Language Quality, and Factual Accuracy.
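Once per-row scores are available, comparing workflows usually comes down to aggregating each metric per workflow. Below is a small, self-contained sketch of that aggregation; the result rows are invented purely for illustration and do not come from a real run.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical per-row results, e.g. exported from an evaluation run.
results = [
    {"workflow": "accurate-workflow", "metric": "Factual Accuracy", "score": 0.92},
    {"workflow": "accurate-workflow", "metric": "Completeness", "score": 0.88},
    {"workflow": "inaccurate-workflow", "metric": "Factual Accuracy", "score": 0.41},
    {"workflow": "inaccurate-workflow", "metric": "Completeness", "score": 0.63},
]

# Average each metric per workflow so the two versions can be compared side by side.
scores: dict[tuple[str, str], list[float]] = defaultdict(list)
for row in results:
    scores[(row["workflow"], row["metric"])].append(row["score"])

for (workflow, metric), values in sorted(scores.items()):
    print(f"{workflow:<20} {metric:<20} {mean(values):.2f}")
```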
By executing and reviewing evaluation runs, you can effectively measure the quality of your workflows. This process shows how well each workflow performs and where improvements are needed, helping you ensure high-quality outputs from your AI systems.