How do you evaluate the quality of an LLM-powered system, like a chatbot or AI agent?
Traditional machine learning metrics don't apply to generative LLM outputs. When an LLM summarizes a text or holds a conversation, it's tough to quantify whether the result is "good" or "bad." Humans can judge these things, but manually scoring every response doesn't scale.
One way to address this evaluation problem is to use an LLM to evaluate the outputs of your AI system, a practice nicknamed "LLM-as-a-judge."
This tutorial shows how to create, tune, and use such LLM judges. We'll make a toy dataset and assess correctness and verbosity. You can apply the same workflow for other criteria.
We will use the open-source Evidently Python library to run evaluations.
Code example: to follow along, run this example in Jupyter notebook or Colab. There is also a step-by-step docs guide.
The goal of the tutorial is to demonstrate how LLM judges work, and to show why you also need to evaluate the judge itself!
If you are looking for an introduction to the topic, check this LLM-as-a-judge guide.
Request free access to our on-demand webinar "How to use LLM-as-a-judge to evaluate LLM systems." You will learn what LLM evals are, how to use LLM judges, and what makes a good evaluation prompt.
Get access to the webinar
Evals, or evaluations, help measure your LLM's performance. Is it accurate? Is it consistent? Does it behave like you expect?
Evals are essential during development (comparing models or prompts to choose the best one) and in production (monitoring quality live). And whenever you make a change, like tweaking a prompt, you'll need to run regression tests to make sure the response quality hasn't dropped in areas that were working fine before.
Depending on the scenario, your evals can take different forms:
The first two are typically used for offline evaluations, where you have a "golden" example or can compare responses side by side. In production, evaluations are usually open-ended.
The methods vary as well. If you've got the time and resources, it's hard to beat human evaluation. For constrained tasks like "user intent detection," traditional ML metrics still work. But for generative tasks, it gets complicated. Yes, you can check things like semantic similarity or scan for specific words, but it's rarely enough.
That's why the LLM judge method is gaining traction, and for good reason. It works where traditional metrics don't quite fit, both for offline and online evals.
LLM-as-a-Judge is an approach where you use an LLM to evaluate or "judge" the quality of outputs from AI-powered applications.
Say you have a chatbot: an external LLM can be asked to review its responses, assigning a label or score much like a human evaluator would.
Essentially, the LLM acts like a classifier, assessing outputs based on specific criteria or guidelines. For example:
At first, it might seem odd to use an LLM to evaluate its "own" outputs. If the LLM is the one generating the answers, why would it be any better at judging them?
The key difference is that classifying content is simpler than generating it. When generating responses, an LLM considers many variables, integrates complex context, and follows detailed prompts. It's a multi-step challenge. Judging responses, like assessing tone or format, is a more straightforward task. If formulated well, LLMs can handle it quite reliably.
How exactly does it work?
It would be great to say that creating an LLM judge is as simple as writing a prompt or picking a metric, but there is a bit more to it.
It starts with criteria and an evaluation scenario. LLM judges are not like traditional metrics such as precision, NDCG, or hit rate, which are deterministic and give the same output for the same input. LLM judges work more like human evaluators who label data.
You need to define clear grading criteria for your use case, just like you'd give instructions to a person! For an LLM, you do it in a prompt.
Starting with simpler, binary tasks like grading inputs as "correct/incorrect" or "relevant/irrelevant" is often a good idea. Breaking things down this way helps keep the results consistent and easier to verify, not just for the LLM, but for anyone else checking the output.
Because the next step in creating the judge is to…
Create an evaluation dataset. An LLM judge is a mini-machine learning project. It requires its own evals!
So, you must first prepare a few example inputs and grade them the way you want the LLM to grade them later. These labels will act as your ground truth to help you assess how well the LLM is judging things. And as you manually label the data, it forces you to really think through what you want the LLM to catch, which helps refine your criteria even more.
You can pull examples from your own experiments or production data, or create synthetic test cases. This dataset doesn't have to be huge, but it should include some challenging examples, like tricky edge cases where your criteria might need a little tweaking.
Craft and refine the evaluation prompt. Once you know what you want to catch, you need an evaluation prompt. Clarity is key. If the prompt is too vague, the results may be inaccurate.
For example, if you want the LLM to classify content as "toxic" or "not toxic," you should describe specific behaviors to detect or add examples.
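For illustration only, the criteria for a hypothetical toxicity judge could spell out the behaviors to flag. The wording below is ours, not a template from any library:

```python
# Illustrative criteria text for a hypothetical "toxic / not toxic" judge.
# Naming concrete behaviors leaves less room for interpretation than a bare label.
toxicity_criteria = """
A text is TOXIC if it contains any of the following:
- insults, mockery, or personal attacks aimed at the user or third parties
- profanity or slurs
- threats or encouragement of harm

A text is NOT TOXIC if it stays neutral or polite, even when refusing a request.
"""
```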
While there are templates for LLM judges in different libraries (including ours), they may not align with your definitions. You must customize, or at least review, the evaluation prompts you use. After all, the real strength of LLM judges is that you can tailor them!
Once you craft your prompt, apply it to your evaluation dataset and compare results to your labels. If it's not good enough, iterate to make it more aligned.
This LLM judge doesn't need to be perfect, just "good enough" for your needs. Humans aren't perfect either! The great thing about LLM judges is their speed and flexibility.
Let's see it in practice.
In this tutorial, we create a simple Q&A dataset and use an LLM to evaluate responses for correctness and verbosity.
To follow along, you will need:
We will work with two types of evaluations:
For both cases, we will use binary judges: each response is scored as "correct/incorrect" or "verbose/concise," with an explanation of the decision.
Here are the steps we take:
Our focus will be on creating and tuning the LLM judges. Once you create an evaluator, you can integrate it into workflows like regression testing.
We recommend running this tutorial in Jupyter Notebook or Google Colab to visualize the results directly in the cell.
To start, install Evidently and run the necessary imports:
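A minimal setup sketch: the import paths below follow the legacy Evidently API (around v0.4.x) and may differ in newer releases, and the OpenAI key is needed because the judge calls the OpenAI API under the hood.

```python
# In a notebook cell:
# !pip install evidently openai pandas

import os
import pandas as pd

# Legacy Evidently API (~v0.4.x); import paths may differ in newer versions
from evidently import ColumnMapping
from evidently.report import Report
from evidently.metric_preset import TextEvals
from evidently.descriptors import LLMEval
from evidently.features.llm_judge import BinaryClassificationPromptTemplate

# The LLM judge sends requests to the OpenAI API
os.environ["OPENAI_API_KEY"] = "YOUR_KEY_HERE"
```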
Complete code: follow along with an example notebook and docs guide.
Next, we need a dataset to work with. We'll create a toy example using customer support questions. Each question will have two responses: one is the "target response" (imagine these as approved answers), and the other is a "new response" (this could be from a different model or prompt).
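For illustration, a couple of rows could look like the sketch below. The column names and example answers are made up for this post; the actual notebook dataset is larger.

```python
# A toy customer support dataset: each question has an approved ("target")
# response and a "new" response to be judged against it.
data = [
    {
        "question": "How do I reset my password?",
        "target_response": "Go to Settings, open Security, and click 'Reset password'. "
                           "Then follow the link we send to your email.",
        "new_response": "Open Settings -> Security -> Reset password and follow the emailed link.",
        "label": "correct",
        "comment": "Same instructions, different wording.",
    },
    {
        "question": "Can I get a refund after 30 days?",
        "target_response": "Refunds are only available within 30 days of purchase.",
        "new_response": "Yes, you can request a refund at any time.",
        "label": "incorrect",
        "comment": "Contradicts the refund policy.",
    },
]
golden_dataset = pd.DataFrame(data)
```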
We manually labeled the new responses as either correct or incorrect, adding comments to explain each decision. This labeled data will serve as the baseline for the LLM judge.
There are both "good" and "bad" examples. Here is the distribution of classes:
Now, let's ask the LLM to do the same! We will create a custom correctness evaluator using the Evidently library. Evidently provides evaluation templates and helps visualize and test the results.
We will use a binary classification template for an LLM judge. It classifies responses into two labels, asks for reasoning, and formats the results. We just need to fill in the grading criteria.
Here is how we create the judge:
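A sketch of the judge definition, assuming the legacy Evidently API set up above. The grading criteria text is illustrative rather than the exact prompt from the example notebook, and parameter names may differ between Evidently versions.

```python
correctness_judge = LLMEval(
    subcolumn="category",
    # Pass the reference answer alongside the graded response
    additional_columns={"target_response": "target_response"},
    template=BinaryClassificationPromptTemplate(
        criteria="""An ANSWER is correct when it conveys the same facts and instructions
as the REFERENCE, even if it is worded differently.
An ANSWER is incorrect when it contradicts the REFERENCE, omits required steps,
or adds claims that the REFERENCE does not support.

REFERENCE:
=====
{target_response}
=====""",
        target_category="incorrect",
        non_target_category="correct",
        uncertainty="unknown",       # what to return when the judge cannot decide
        include_reasoning=True,      # ask for a short explanation of each verdict
        pre_messages=[("system", "You are an expert evaluator of customer support answers.")],
    ),
    provider="openai",
    model="gpt-4o-mini",
    display_name="Correctness",
)
```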
The prompt is quite strict: we would rather mark a correct answer as incorrect than mistakenly approve an incorrect one. It's up to you!
Once the judge is configured, run it on the toy dataset:
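A minimal sketch of this step, assuming the toy dataset and judge defined above:

```python
# Run the correctness judge over the "new_response" column
report = Report(metrics=[
    TextEvals(column_name="new_response", descriptors=[correctness_judge]),
])
report.run(reference_data=None, current_data=golden_dataset)
report  # in a notebook, this renders the summary report inline
```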
When you apply this to a response column, Evidently processes the inputs row by row, sends them to the LLM for evaluation, and returns a summary report.
More importantly, you can inspect where the LLM got things right or wrong by checking the raw outputs:
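For example, something along these lines; the exact export call and column names may differ depending on the Evidently version and the display name you set:

```python
# Export the dataset with the judge's columns added next to the original data
scored_df = report.datasets().current
scored_df.head()  # includes the judge's category and reasoning for each row
```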
This will show a dataframe with added scores and explanations.
Note that your results may look different: LLMs are not deterministic.
Let's also quantify how well the evaluator performs! Treating this as a classification task, we can measure things like:
Recall is particularly relevant since our goal is to catch all discrepancies.
Want to understand these metrics better? Read about precision and recall.
To evaluate the LLM judge quality, we treat our manual labels as the ground truth and the LLM-provided labels as predictions, and generate a classification report with Evidently.
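Roughly like this, under the same API assumptions; the prediction column name depends on the judge's display name, so check the exported dataframe:

```python
from evidently.metric_preset import ClassificationPreset

# Manual labels are the target, judge labels are the prediction
class_report = Report(metrics=[ClassificationPreset()])
class_report.run(
    reference_data=None,
    current_data=scored_df,
    column_mapping=ColumnMapping(
        target="label",            # manual labels: correct / incorrect
        prediction="Correctness",  # judge output column (adjust to the actual name)
        pos_label="incorrect",     # we care most about catching incorrect answers
    ),
)
class_report
```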
Here is what we get:
If you look at the confusion matrix, you will see one error of each type.
Overall, the results are quite good! You can also zoom in to see specific errors and try refining the prompt based on where the LLM struggled. With the manual labels already in place, this iteration becomes much easier.
You can also see how easily the results get worse: when we experimented with a naive grading prompt ("ANSWER is correct when it is essentially the same as REFERENCE"), accuracy dropped to only 60% and recall to 37.5%. Being specific helps!
For your use case, you might adjust the focus of the prompt: for instance, emphasizing tone or the main idea instead of looking at every detail.
Next, let's build a verbosity evaluator. This one checks whether responses are concise and to the point. It doesn't require a reference answer: the LLM evaluates each response on its own merits. This is great for online evaluations.
Here is how we define the check:
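A sketch under the same assumptions: since no reference answer is needed, the template only looks at the response itself, and the criteria wording is again illustrative.

```python
verbosity_judge = LLMEval(
    subcolumn="category",
    template=BinaryClassificationPromptTemplate(
        criteria="""A RESPONSE is concise if it directly answers the question without
unnecessary repetition, filler phrases, or irrelevant detail.
A RESPONSE is verbose if it is noticeably longer than needed to convey the answer.""",
        target_category="verbose",
        non_target_category="concise",
        uncertainty="unknown",
        include_reasoning=True,
        pre_messages=[("system", "You are an expert evaluator of customer support answers.")],
    ),
    provider="openai",
    model="gpt-4o-mini",
    display_name="Verbosity",
)
```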
Once we apply it to the same "new_response" column, we get the summary:
You can take a look at individual scores with explanations:
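For example, by exporting the scored dataset as before:

```python
# Each row now has a verbose/concise label and the judge's reasoning
report.datasets().current.head()
```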
Don't agree with the results? No problem! Use these labels as a starting point, correct where needed, and you'll get a golden dataset, just like the one we started with when evaluating correctness. From there, you can iterate on your verbosity judge.
The LLM judge itself is just one part of your evaluation framework. Once set up, you can integrate it into workflows like regression testing after you change a prompt, or ongoing quality monitoring.
At Evidently AI, we're building an AI observability platform that simplifies this entire process. With Evidently Cloud, you can automate and scale these evaluations without writing code, or use it to track the results of evals you run locally. It's a collaborative platform that makes it easy to monitor and assess the quality of AI systems.
Ready to give it a try? Sign up for free to see how Evidently can help you build and refine LLM evaluators for your use cases!