
Can you build a machine learning model to monitor another model?

Last updated: November 27, 2024

Can you train a machine learning model to predict your model's mistakes?

Nothing stops you from trying. But chances are, you are better off without it.

We've seen this idea suggested more than once.

It sounds reasonable on the surface. Machine learning models make mistakes. Let us take these mistakes and train another model to predict the missteps of the first one! Sort of a "trust detector," based on learnings from how our model did in the past.

Second model to monitor the predictions of the first one.

By itself, learning from mistakes makes a lot of sense.

This exact approach lies at the heart of the boosting technique in machine learning. It is implemented in many ensemble algorithms, such as gradient boosting over decision trees. Each next model is trained to correct the errors of the previous ones, and the resulting composition performs better than any single model.

Gradient boosting
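
To make the residual-correction idea concrete, here is a minimal sketch of a single boosting step on synthetic data. It uses scikit-learn decision trees and illustrates the principle only; it is not the full gradient boosting algorithm:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(42)
X = rng.uniform(0, 10, size=(500, 1))
y = np.sin(X).ravel() + rng.normal(0, 0.2, size=500)

# First weak learner: a shallow tree fit on the original target
tree_1 = DecisionTreeRegressor(max_depth=2).fit(X, y)
residuals = y - tree_1.predict(X)

# Second weak learner: fit on the residuals (the errors) of the first
tree_2 = DecisionTreeRegressor(max_depth=2).fit(X, residuals)

# The composition corrects part of the first model's errors
y_pred = tree_1.predict(X) + tree_2.predict(X)
```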

But would it help us train a separate, second model to predict whether the first model is correct?

The answer might disappoint.

Let's think through examples.


Training the watchdog

Say, you have a demand forecasting model. You want to catch it when it's wrong.

You decide to train a new model on the first model's mistakes. What would this mean, exactly?

Demand forecasting is a regression task where we predict a continuous variable. Once we know the actual sales volumes, we can calculate the model error. We could choose something like MAPE or RMSE. Then, we would train a second model using the value of this metric as a target.

Taking error as a target for training
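
As an illustration, here is a minimal sketch of that setup on synthetic data standing in for the demand forecasting case. The specific models and the per-prediction absolute error target are assumptions made for the example, not a recommended design:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.model_selection import train_test_split

# Toy stand-in for demand data: features plus a continuous sales target
X, y = make_regression(n_samples=2000, n_features=5, noise=10.0, random_state=0)
X_old, X_new, y_old, y_new = train_test_split(X, y, test_size=0.5, random_state=0)

# The original demand forecasting model
forecaster = RandomForestRegressor(random_state=0).fit(X_old, y_old)

# Once actual sales arrive, compute the per-prediction error of the first model
errors = np.abs(y_new - forecaster.predict(X_new))

# The "watchdog": a second model trained to predict the first model's error
watchdog = GradientBoostingRegressor(random_state=0).fit(X_new, errors)

# A high predicted error would flag a forecast we should not trust
predicted_error = watchdog.predict(X_new[:5])
```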

Or let's take a classification example: the probability of credit loan defaults.

Our loan prediction model is likely a probabilistic classifier. Each customer gets a score from 0 to 100 on how likely they are to default. Above a certain cut-off threshold, we deny the loan.

After some time, we will know the truth. Some of our predictions will turn out to be false negatives: we gave loans to people who then defaulted.

But if we act on all predictions without review, we never learn about false positives. If we wrongly denied a loan, that feedback left with the customer.

We can still use the partial learnings we got. Maybe take the predicted probabilities for the defaulted customers and train a new model to predict similar errors?

Taking misclassification as a target for training
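
The classification variant looks similar. Here is a minimal sketch on synthetic data, where a misclassification flag (under an assumed 0.5 cut-off) becomes the target for the second model:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy stand-in for the credit scoring data
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_old, X_new, y_old, y_new = train_test_split(X, y, test_size=0.5, random_state=0)

# The original default-prediction model
scorer = LogisticRegression(max_iter=1000).fit(X_old, y_old)

# Once outcomes are known, flag each prediction as correct or not
threshold = 0.5
mistakes = (scorer.predict_proba(X_new)[:, 1] >= threshold).astype(int) != y_new

# The "watchdog": a classifier trained to predict misclassification
watchdog = GradientBoostingClassifier(random_state=0).fit(X_new, mistakes.astype(int))
```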

Will it work?

Yes, and no.

It can technically work. Meaning, you might be able to train a model that actually predicts something.

But if it does, this means you should just retrain the initial model instead!

Let us explain.

Why can machine learning models be wrong? Data quality aside, it is usually one of two things:

  1. There is not enough signal in the data the model was trained on. Or not enough data, either overall or for the specific segment where it fails. The model did not learn anything useful and now returns a weird response.
  2. Our model is not good enough. It is too simple to capture the signal from the data correctly. It does not know something it can potentially learn.

In the first case, model errors would have no pattern. So, any attempt to train the "watchdog" model would fail. There is nothing new to learn.

In the second case, you might be able to train a better model! A more complex one that better suits the data and captures all the patterns.

But if you can do so, why train the "watchdog"? Why not update the first model instead? It can learn from the same real-world feedback we got when applying it in the first place.

Add new data to training version 2 of the model

One model to rule them all

Chances are, it is not that our initial model was "bad." It might be the customers who changed, or real-world conditions that brought new patterns. Think of the pandemic affecting both sales and credit behavior. The same data drift and concept drift we already talked about.

We can take the new data about sales and loan defaults and add it to our old training set.

We will not predict "errors." We will teach our models to predict the exact same things. How likely will a person default on a loan? What will the volume of sales be? But it will be a new, updated model that learned from its own mistakes.
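
Continuing the toy demand forecasting sketch from above, the retraining step is just this (a sketch, not a full retraining pipeline):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Fold the newly labeled data into the original training set...
X_combined = np.vstack([X_old, X_new])
y_combined = np.concatenate([y_old, y_new])

# ...and retrain the same kind of model on the updated data
forecaster_v2 = RandomForestRegressor(random_state=0).fit(X_combined, y_combined)
```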

That's it!

The "watchdog" model next to it will not add value.

It simply has no other data to learn from. Both models use the same feature set and have access to the same signal.

If the new model makes errors, the "watchdog" model will miss them as well.

Model will always trust the first one
Our second model trained on the same feature set will trust the first model.

One exception could be if we have no access to the original model and cannot retrain it directly. For example, it belongs to a third party or is fixed due to regulations.

We can indeed construct a second model if we have new data from the real-life application context and actual labels. It is, however, an artificial limitation. Doing so makes no sense if we are maintaining the original model ourselves.

What can we do instead?

The idea of a "watchdog" model did not work. What else can we do?

Let's start with why.

Our primary goal is to build trustworthy models that perform well in production. We want to minimize wrong predictions. Some of them might be costly to us.

Assuming we did all we could on the modeling side, we can use other means to ensure that our models perform reliably.

First, build a regular monitoring process.

Yes, this approach does not directly address each error the model makes. But it gives you a way to maintain and improve model performance and thus minimize errors at scale.

This includes detecting early signs of data and concept drift through monitoring changes in input distributions and predictions.

Data drift detected for 4 out of 6 features
An example of data drift monitoring using Evidently in the demand forecasting tutorial.
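
As a rough sketch of what such a check can look like in code, assuming the Report and DataDriftPreset API from recent evidently releases (the file names are placeholders):

```python
import pandas as pd
from evidently.report import Report
from evidently.metric_preset import DataDriftPreset

# Hypothetical datasets: what the model was trained on vs. recent production inputs
reference = pd.read_csv("training_data.csv")
current = pd.read_csv("last_week_inputs.csv")

# Compare input distributions and flag drifting features
report = Report(metrics=[DataDriftPreset()])
report.run(reference_data=reference, current_data=current)
report.save_html("data_drift_report.html")
```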

Second, consider coupling machine learning with good old rules.

If we analyze our model behavior in more detail, we can identify areas where it does not perform well. We can then limit the model application to the cases where we know it has a better chance to succeed.

In a detailed tutorial, we explored how to apply this idea in an employee attrition prediction task. We also considered adding a custom threshold for probabilistic classification to balance false positive and false negative errors.

Choice between "few predictions, mostly right" and "finds x10 cases but is wrong half the time"
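
For instance, here is a toy sketch of sweeping the decision threshold on synthetic scores to see how the false positive / false negative balance shifts:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Synthetic outcomes and predicted probabilities, loosely correlated
rng = np.random.RandomState(0)
y_true = rng.randint(0, 2, size=1000)
probs = np.clip(y_true * 0.3 + rng.uniform(size=1000) * 0.7, 0, 1)

for threshold in [0.3, 0.5, 0.7]:
    preds = (probs >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, preds).ravel()
    print(f"threshold={threshold}: false positives={fp}, false negatives={fn}")
```

A lower threshold catches more true cases but raises more false alarms; a higher one does the opposite.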

Third, we can add statistical checks on the model inputs. With the "watchdog" model, the idea was to judge whether we can trust the model output. Instead, we can detect outliers in the input data.

The goal is to verify how different a given input is from what the model was trained on. If a specific input is "too different" from what the model has seen before, we can send it for a manual check, for example.
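
One naive way to implement such a check is a per-feature z-score against the training statistics. A toy sketch, with synthetic data and an arbitrary cut-off:

```python
import numpy as np

rng = np.random.RandomState(0)
X_train = rng.normal(loc=0.0, scale=1.0, size=(5000, 4))  # stand-in for training inputs

# Summarize the training distribution per feature
feature_mean, feature_std = X_train.mean(axis=0), X_train.std(axis=0)

def looks_unusual(x, z_limit=4.0):
    """Flag an incoming row whose features fall far outside the training range."""
    z_scores = np.abs((x - feature_mean) / feature_std)
    return bool((z_scores > z_limit).any())

new_input = np.array([0.1, -0.5, 7.2, 0.3])  # one feature is far off
print(looks_unusual(new_input))  # True -> route this prediction to manual review
```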

Side note. Thanks to one of our readers for sparking the conversation!

In regression problems, you can sometimes build a "watchdog" model. This happens when your original model optimizes the prediction error taking its sign into account. If the second "watchdog" model predicts the absolute error instead, it might get something more out of the dataset.

But here is the thing: if it works, this does not tell us that the model is "wrong" or how to correct it. Instead, it is an indirect way to evaluate the uncertainty of the data inputs. (Here is a whole paper that explores this in detail.)

In practice, this returns us to the same alternative solution. Instead of training a second model, let's check whether the input data belongs to the same distribution!

Summing up

We all want our machine learning models to perform well and to know that we can trust their output.

While the optimistic idea of monitoring your machine learning model with another supervised "watchdog" model has little chance of succeeding, the intent itself has merit. There are other ways to ensure the production quality of your model.

These include building up a thorough monitoring process, designing custom model application scenarios, detecting outliers, and more.

In the following posts, we will explore them in more detail.

