Even if you get the labels quickly, keeping tabs on the data drift can provide an extra signal.
Model performance doesn't always change overnight. The quality might be trending down but still stay within the expected range. If you only look at the direct performance metric, you might not have enough reason to intervene.
In this tutorial, we showed how statistical data drift appears during the first week of the model's application, while the model quality metric still looks reasonable. In cases like this, you might look at both data drift and model performance. Data drift monitoring can provide additional information about the model quality trend.
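As a minimal sketch of what such a drift check might look like (not the tutorial's actual code; the feature names and thresholds here are illustrative), one can run a two-sample Kolmogorov-Smirnov test per feature, comparing a reference window against the current production window:

```python
import numpy as np
from scipy.stats import ks_2samp


def detect_drift(reference, current, alpha=0.05):
    """Flag each feature as drifted if the two-sample KS test
    rejects the hypothesis that reference and current data
    come from the same distribution (p-value below alpha)."""
    drifted = {}
    for col in reference:
        _stat, p_value = ks_2samp(reference[col], current[col])
        drifted[col] = bool(p_value < alpha)
    return drifted


# Illustrative data: "feature_a" shifts in production, "feature_b" does not.
rng = np.random.default_rng(0)
reference = {
    "feature_a": rng.normal(0.0, 1.0, 1000),
    "feature_b": rng.normal(5.0, 2.0, 1000),
}
current = {
    "feature_a": rng.normal(0.8, 1.0, 1000),  # mean shifted by 0.8
    "feature_b": rng.normal(5.0, 2.0, 1000),
}

print(detect_drift(reference, current))
```

A check like this can run on every new batch of inputs, so it fires even when ground-truth labels, and therefore the performance metric, are not yet available.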
The goal is to catch the decay early and learn about the possible change even before the performance takes a hit.
Tracking drift as an additional metric can, of course, increase the number of false-positive alerts. However, if you operate in a high-risk environment where model mistakes can have serious consequences, you might err on the side of caution. It's better to deal with an extra alert than be late to react.