It might not be as bad as it sounds, though. Let's consider both cases.
Positive interpretation: model handles drift well!
- Important features changed. Yes, you are observing drift: there is a meaningful change in the real-world phenomenon, and the model is operating in an atypical setting. But…
- The model is robust and reacts well. You don't always expect that from a machine learning model that only learns from the available observations. But if built right, it can respond to the changes adequately. For example, it will monotonically increase the sales forecast when prices are lowered. You might even incorporate such assumptions into your machine learning system design (see the sketch below).
In this case, there is no need to intervene. Yes, the reality changed, and so did the model predictions, but the model behavior matches expectations. Think of our fictitious example, where an e-commerce system starts upselling sunglasses in response to the sunny weather outside. If changes keep accumulating, you might need to recalibrate or rebuild the model, but for now, it's good to go!
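One way to build in such an assumption is a monotonic constraint. Here is a minimal sketch using scikit-learn's HistGradientBoostingRegressor; the feature names, the data, and the price-sales relationship are all hypothetical.

```python
# A sketch of encoding a domain assumption ("lower price -> higher sales")
# directly into the model via a monotonic constraint.
import numpy as np
from sklearn.ensemble import HistGradientBoostingRegressor

rng = np.random.default_rng(42)
n = 1_000
price = rng.uniform(5, 50, n)          # hypothetical feature: product price
temperature = rng.uniform(-5, 35, n)   # hypothetical feature: outside temperature
sales = 200 - 3 * price + 2 * temperature + rng.normal(0, 10, n)

X = np.column_stack([price, temperature])

# monotonic_cst: -1 = predictions must not increase with this feature,
#                 0 = no constraint. Here: sales never go up when price goes up.
model = HistGradientBoostingRegressor(monotonic_cst=[-1, 0])
model.fit(X, sales)

# Even if prices drift below anything seen in training, the constraint keeps
# the forecast from moving in the wrong direction.
print(model.predict(np.array([[3.0, 25.0]])))
```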
Negative interpretation: things have gone rogue!
That is probably the first thought to cross your mind when all the alarms fire. And sometimes it is true.
- Important features changed. Yes, the reality is new to the model: it saw nothing like this in training or in its earlier history. And…
- The model behavior is unreasonable. It does not know how to react to these changes: the predictions are low-quality and do not adapt in a "logical" way. The model error rate will likely start climbing very soon.
In this case, you need to intervene. Start by investigating the causes, then choose the appropriate action: fix the data quality issues, retrain the model, or rebuild it.
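As a first pass, you can check the drifting features for data quality problems and confirm the distribution shift statistically. Below is a minimal sketch, assuming you keep a reference dataset from training time and a recent batch of current data; the column names and the 0.05 threshold are illustrative, not prescriptive.

```python
# A first-pass drift investigation: data quality check plus a per-feature
# two-sample Kolmogorov-Smirnov test against the reference data.
import pandas as pd
from scipy.stats import ks_2samp

def investigate(reference: pd.DataFrame, current: pd.DataFrame, features: list) -> pd.DataFrame:
    report = []
    for col in features:
        # 1. Data quality first: a jump in missing values often explains "drift".
        null_share = current[col].isna().mean()
        # 2. Distribution shift: compare current values against the reference.
        stat, p_value = ks_2samp(reference[col].dropna(), current[col].dropna())
        report.append({
            "feature": col,
            "current_null_share": round(null_share, 3),
            "ks_p_value": round(p_value, 4),
            "drift_detected": p_value < 0.05,
        })
    return pd.DataFrame(report)

# Hypothetical usage:
# summary = investigate(reference_df, current_df, ["price", "temperature"])
# print(summary.sort_values("ks_p_value"))
```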