Factual Correctness

Factual Correctness Metric
Factual Correctness measures the factual accuracy of a generated response against a reference response, evaluating how well the generated content aligns with the facts stated in the reference.
Key Features
Score Range: The factual correctness score ranges from 0 to 1, where higher values indicate better performance.
Claim Breakdown: The metric uses a large language model (LLM) to decompose both the generated response and the reference into individual claims.
Natural Language Inference: It then applies natural language inference (NLI) to check each claim against the other text, assessing the factual overlap between the response and the reference (a small illustrative sketch follows this list).
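To make the claim-level bookkeeping concrete, here is a minimal, self-contained sketch. The claims and the entailment verdicts are hard-coded purely for illustration; in the actual metric they are produced by the LLM decomposition and the NLI step described above.

# Illustrative sketch only: claims and verdicts are hard-coded here,
# whereas the metric derives them with an LLM and natural language inference.
response_claims = [
    "Albert Einstein was a German theoretical physicist.",
    "Albert Einstein developed the theory of relativity.",
    "Albert Einstein contributed to quantum mechanics.",
]
reference_claims = [
    "Albert Einstein was a German-born theoretical physicist.",
    "Albert Einstein developed the theory of relativity.",
]

# Verdict per response claim: is it supported by the reference?
response_claim_supported = [True, True, False]
# Verdict per reference claim: is it covered by the response?
reference_claim_covered = [True, True]

tp = sum(response_claim_supported)               # response claims backed by the reference
fp = len(response_claim_supported) - tp          # response claims not backed by the reference
fn = len(reference_claim_covered) - sum(reference_claim_covered)  # reference claims missing from the response
print(tp, fp, fn)  # -> 2 1 0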
Alignment Measurement
The factual overlap is quantified using the following metrics (a short worked example follows the list):
Precision: The ratio of true positives (TP) to the sum of true positives and false positives (FP).
Recall: The ratio of true positives to the sum of true positives and false negatives (FN).
F1 Score: The harmonic mean of precision and recall, providing a single score to balance both metrics.
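As a quick worked example, here is a sketch that reuses the hypothetical TP/FP/FN counts from the claim-breakdown illustration above:

# Worked example with tp=2, fp=1, fn=0 from the sketch above.
tp, fp, fn = 2, 1, 0
precision = tp / (tp + fp)                            # 2 / 3 ≈ 0.67
recall = tp / (tp + fn)                               # 2 / 2 = 1.0
f1 = 2 * precision * recall / (precision + recall)    # ≈ 0.80
print(round(precision, 2), round(recall, 2), round(f1, 2))  # -> 0.67 1.0 0.8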
Mode Parameter
The accuracy of the alignment can be tuned with the mode parameter, allowing the evaluation to be adapted to specific requirements.
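A minimal sketch of where this setting would typically be applied. The mode parameter itself comes from this page, but the value "f1" shown below is an assumption modeled on comparable factual-correctness metrics; consult the FactualCorrectnessEvaluator API reference for the options supported by your version.

from dynamiq.evaluations.metrics import FactualCorrectnessEvaluator
from dynamiq.nodes.llms import OpenAI

llm = OpenAI(model="gpt-4o-mini")
# Hypothetical value: "f1" is assumed here; check the API reference for
# the mode values your installed version actually accepts.
evaluator = FactualCorrectnessEvaluator(llm=llm, mode="f1")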
Summary
The Factual Correctness metric is essential for assessing the reliability and accuracy of generated responses, ensuring alignment with factual information.
Example Code: Factual Correctness Evaluation
This example demonstrates how to compute the Factual Correctness metric using the FactualCorrectnessEvaluator with an OpenAI language model.
import logging
import sys

from dotenv import find_dotenv, load_dotenv

from dynamiq.evaluations.metrics import FactualCorrectnessEvaluator
from dynamiq.nodes.llms import OpenAI

# Load environment variables (including the OpenAI API key)
load_dotenv(find_dotenv())

# Configure logging level
logging.basicConfig(stream=sys.stdout, level=logging.INFO)

# Initialize the OpenAI language model
llm = OpenAI(model="gpt-4o-mini")

# Sample data: the second answer intentionally contains factual errors
answers = [
    (
        "Albert Einstein was a German theoretical physicist. "
        "He developed the theory of relativity and contributed "
        "to quantum mechanics."
    ),
    "The Eiffel Tower is located in Berlin, Germany. It was constructed in 1889.",
]
contexts = [
    "Albert Einstein was a German-born theoretical physicist. He developed the theory of relativity.",
    "The Eiffel Tower is located in Paris, France. It was constructed in 1887 and opened in 1889.",
]

# Initialize the evaluator and compute the scores
evaluator = FactualCorrectnessEvaluator(llm=llm)
correctness_scores = evaluator.run(
    answers=answers,
    contexts=contexts,
    verbose=True,  # Set to False to disable verbose logging
)

# Print the per-answer results
for idx, score in enumerate(correctness_scores):
    print(f"Answer: {answers[idx]}")
    print(f"Factual Correctness Score: {score}")
    print("-" * 50)

print("Factual Correctness Scores:")
print(correctness_scores)