# Factual Correctness

<figure><img src="https://4279757243-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FTbBxR0Ob7RUmbvHZkQi2%2Fuploads%2FJKCc5IXL7imHDB5Oslty%2FFactualCorrectness.gif?alt=media&#x26;token=f52af6e2-cb6b-4be8-b98e-9b6c5037f729" alt=""><figcaption></figcaption></figure>

### Factual Correctness Metric

**Factual Correctness** measures the factual accuracy of a generated response compared to a reference response. It evaluates how well the generated content aligns with the factual information from the reference.

#### Key Features

* **Score Range**: The factual correctness score ranges from **0 to 1**, where higher values indicate better performance.
* **Claim Breakdown**: The metric breaks down both the generated response and the reference into individual claims using a large language model (LLM).
* **Natural Language Inference**: It employs natural language inference to assess the factual overlap between the generated response and the reference.
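In the actual metric, claim decomposition is performed by an LLM. As a rough illustration of the idea, the sketch below uses naive sentence splitting as a stand-in for LLM-based claim extraction (the splitting rule here is an assumption for demonstration, not the library's real method):

```python
import re

def naive_claims(text: str) -> list[str]:
    # Stand-in for LLM-based claim decomposition: treat each
    # sentence as one atomic claim (a deliberate simplification).
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return [s.strip() for s in sentences if s.strip()]

response = "Einstein was a physicist. He developed relativity."
print(naive_claims(response))
# → ['Einstein was a physicist.', 'He developed relativity.']
```

Each extracted claim is then checked against the reference claims, which yields the counts defined below.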

$$
\text{True Positive (TP)} = \text{Number of claims in response that are present in reference}
$$

$$
\text{False Positive (FP)} = \text{Number of claims in response that are not present in reference}
$$

$$
\text{False Negative (FN)} = \text{Number of claims in reference that are not present in response}
$$
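The real metric uses NLI to decide whether one claim is supported by another; as a minimal sketch, exact string matching can stand in for that entailment check (an intentional simplification) to show how TP, FP, and FN are counted:

```python
def count_overlap(response_claims: list[str], reference_claims: list[str]) -> tuple[int, int, int]:
    # Exact-match stand-in for NLI-based entailment checks.
    resp = set(response_claims)
    ref = set(reference_claims)
    tp = len(resp & ref)   # response claims supported by the reference
    fp = len(resp - ref)   # response claims absent from the reference
    fn = len(ref - resp)   # reference claims missing from the response
    return tp, fp, fn

tp, fp, fn = count_overlap(
    ["Einstein was a physicist.", "He won two Nobel Prizes."],
    ["Einstein was a physicist.", "He developed relativity."],
)
print(tp, fp, fn)  # → 1 1 1
```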

#### Alignment Measurement

The factual overlap is quantified using the following metrics:

* **Precision**: The ratio of true positives (TP) to the sum of true positives and false positives (FP).

$$
\text{Precision} = {TP \over (TP + FP)}
$$

* **Recall**: The ratio of true positives to the sum of true positives and false negatives (FN).

$$
\text{Recall} = {TP \over (TP + FN)}
$$

* **F1 Score**: The harmonic mean of precision and recall, providing a single score to balance both metrics.

$$
\text{F1 Score} = {2 \times \text{Precision} \times \text{Recall} \over (\text{Precision} + \text{Recall})}
$$
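Putting the three formulas together, here is a small worked computation (the claim counts are made-up numbers for illustration):

```python
def scores_from_counts(tp: int, fp: int, fn: int) -> dict[str, float]:
    # Guard against division by zero when a claim set is empty.
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

# Example: 2 verified claims, 1 unsupported claim, 1 missed reference claim
print(scores_from_counts(tp=2, fp=1, fn=1))
# precision = recall = f1 = 2/3 ≈ 0.667
```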

#### Mode Parameter

The strictness of the claim alignment can be controlled with the **mode** parameter, letting you tune the evaluation to the requirements of your use case.

#### Summary

The Factual Correctness metric is essential for assessing the reliability and accuracy of generated responses, ensuring alignment with factual information.

### Example Code: Factual Correctness Evaluation

This example demonstrates how to compute the **Factual Correctness** metric using the `FactualCorrectnessEvaluator` together with an OpenAI language model.

```python
import logging
import sys
from dotenv import find_dotenv, load_dotenv
from dynamiq.evaluations.metrics import FactualCorrectnessEvaluator
from dynamiq.nodes.llms import OpenAI

# Load environment variables for the OpenAI API
load_dotenv(find_dotenv())

# Configure logging level
logging.basicConfig(stream=sys.stdout, level=logging.INFO)

# Initialize the OpenAI language model
llm = OpenAI(model="gpt-4o-mini")

# Sample data
answers = [
    (
        "Albert Einstein was a German theoretical physicist. "
        "He developed the theory of relativity and contributed "
        "to quantum mechanics."
    ),
    (
        "The Eiffel Tower is located in Berlin, Germany. "
        "It was constructed in 1889."
    ),
]
contexts = [
    (
        "Albert Einstein was a German-born theoretical physicist. "
        "He developed the theory of relativity."
    ),
    (
        "The Eiffel Tower is located in Paris, France. "
        "It was constructed in 1887 and opened in 1889."
    ),
]

# Initialize evaluator and evaluate
evaluator = FactualCorrectnessEvaluator(llm=llm)
correctness_scores = evaluator.run(
    answers=answers,
    contexts=contexts,
    verbose=True,  # set to False to disable verbose logging
)

# Print the results
for idx, score in enumerate(correctness_scores):
    print(f"Answer: {answers[idx]}")
    print(f"Factual Correctness Score: {score}")
    print("-" * 50)

print("Factual Correctness Scores:")
print(correctness_scores)
```

