Answer Relevancy
The answer relevancy metric uses LLM-as-a-judge to measure the quality of your RAG pipeline's generator by evaluating how relevant the actual_output of your LLM application is compared to the provided input. deepeval's answer relevancy metric is a self-explaining LLM-Eval, meaning it outputs a reason for its metric score.
Required Arguments
To use the AnswerRelevancyMetric, you'll have to provide the following arguments when creating an LLMTestCase:
- input
- actual_output
Read the How Is It Calculated section below to learn how test case parameters are used for metric calculation.
Usage
The AnswerRelevancyMetric() can be used for end-to-end evaluation of text-based and multimodal test cases:
from deepeval import evaluate
from deepeval.metrics import AnswerRelevancyMetric
from deepeval.test_case import LLMTestCase
metric = AnswerRelevancyMetric(
    threshold=0.7,
    model="gpt-4.1",
    include_reason=True
)
test_case = LLMTestCase(
    input="What if these shoes don't fit?",
    # Replace this with the output from your LLM app
    actual_output="We offer a 30-day full refund at no extra cost."
)
# To run metric as a standalone
# metric.measure(test_case)
# print(metric.score, metric.reason)
evaluate(test_cases=[test_case], metrics=[metric])
The same metric can also be applied to a multimodal test case:
from deepeval import evaluate
from deepeval.metrics import AnswerRelevancyMetric
from deepeval.test_case import LLMTestCase, MLLMImage
metric = AnswerRelevancyMetric(
    threshold=0.7,
    model="gpt-4.1",
    include_reason=True
)
test_case = LLMTestCase(
    input=f"Tell me about this landmark in France: {MLLMImage(...)}",
    # Replace this with the output from your LLM app
    actual_output="This appears to be the Eiffel Tower, which is a famous landmark in France."
)
# To run metric as a standalone
# metric.measure(test_case)
# print(metric.score, metric.reason)
evaluate(test_cases=[test_case], metrics=[metric])
There are SEVEN optional parameters when creating an AnswerRelevancyMetric (a configuration sketch follows the list):
- [Optional] threshold: a float representing the minimum passing threshold, defaulted to 0.5.
- [Optional] model: a string specifying which of OpenAI's GPT models to use, OR any custom LLM model of type DeepEvalBaseLLM. Defaulted to gpt-4.1.
- [Optional] include_reason: a boolean which when set to True, will include a reason for its evaluation score. Defaulted to True.
- [Optional] strict_mode: a boolean which when set to True, enforces a binary metric score: 1 for perfection, 0 otherwise. It also overrides the current threshold and sets it to 1. Defaulted to False.
- [Optional] async_mode: a boolean which when set to True, enables concurrent execution within the measure() method. Defaulted to True.
- [Optional] verbose_mode: a boolean which when set to True, prints the intermediate steps used to calculate said metric to the console, as outlined in the How Is It Calculated section. Defaulted to False.
- [Optional] evaluation_template: a class of type AnswerRelevancyTemplate, which allows you to override the default prompts used to compute the AnswerRelevancyMetric score. Defaulted to deepeval's AnswerRelevancyTemplate.
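For instance, a metric configured with several of these options at once might look like the following sketch (the parameter values shown are illustrative, not recommended defaults):
from deepeval.metrics import AnswerRelevancyMetric

metric = AnswerRelevancyMetric(
    threshold=0.7,          # minimum passing score
    model="gpt-4.1",        # evaluation model
    include_reason=True,    # return an explanation alongside the score
    strict_mode=False,      # keep the score continuous rather than binary
    async_mode=True,        # run internal LLM calls concurrently
    verbose_mode=False      # don't print intermediate calculation steps
)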
Within components
You can also run the AnswerRelevancyMetric within nested components for component-level evaluation.
from deepeval.dataset import Golden
from deepeval.tracing import observe, update_current_span
...
@observe(metrics=[metric])
def inner_component():
    # Set test case at runtime
    test_case = LLMTestCase(input="...", actual_output="...")
    update_current_span(test_case=test_case)
    return

@observe
def llm_app(input: str):
    # Component can be anything from an LLM call, retrieval, agent, tool use, etc.
    inner_component()
    return

evaluate(observed_callback=llm_app, goldens=[Golden(input="Hi!")])
As a standalone
You can also run the AnswerRelevancyMetric on a single test case as a standalone, one-off execution.
...
metric.measure(test_case)
print(metric.score, metric.reason)
How Is It Calculated?
The AnswerRelevancyMetric score is calculated according to the following equation:
Answer Relevancy = Number of Relevant Statements / Total Number of Statements
The AnswerRelevancyMetric first uses an LLM to extract all statements made in the actual_output, before using the same LLM to classify whether each statement is relevant to the input.
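As a rough illustration (a sketch only, not deepeval's internal implementation), the final score reduces to the fraction of extracted statements judged relevant:
# Illustrative sketch only -- not deepeval's internal code.
# Suppose the judge LLM extracted four statements and returned these verdicts:
verdicts = ["yes", "yes", "no", "yes"]  # "yes" = statement is relevant to the input
relevant = sum(1 for v in verdicts if v == "yes")
score = relevant / len(verdicts)
print(score)  # 3 relevant out of 4 statements -> 0.75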
Customize Your Template
Since deepeval's AnswerRelevancyMetric is evaluated by LLM-as-a-judge, you can likely improve your metric accuracy by overriding deepeval's default prompt templates. This is especially helpful if:
- You're using a custom evaluation LLM, especially smaller models with weaker instruction-following capabilities.
- You want to customize the examples used in the default AnswerRelevancyTemplate to better align with your expectations.
Here's a quick example of how you can override the statement generation step of the AnswerRelevancyMetric algorithm:
from deepeval.metrics import AnswerRelevancyMetric
from deepeval.metrics.answer_relevancy import AnswerRelevancyTemplate
# Define custom template
class CustomTemplate(AnswerRelevancyTemplate):
    @staticmethod
    def generate_statements(actual_output: str):
        return f"""Given the text, break down and generate a list of statements presented.

Example:
Our new laptop model features a high-resolution Retina display for crystal-clear visuals.

{{
    "statements": [
        "The new laptop model has a high-resolution Retina display."
    ]
}}
===== END OF EXAMPLE ======

Text:
{actual_output}

JSON:
"""
# Inject custom template to metric
metric = AnswerRelevancyMetric(evaluation_template=CustomTemplate)
metric.measure(...)
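Note that, as shown above, evaluation_template is passed the template class itself (CustomTemplate) rather than an instance, matching the AnswerRelevancyTemplate type described in the optional parameters list.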