Visualizations are helpful when our goal is to explore or share information.
But we might also want to log the numeric results of the feature and dataset drift tests elsewhere: for example, to record a drift value as the result of an experiment, or to track it for a model in production. Once we define our drift conditions, we can monitor whether they are met. We want a boolean response on whether drift has occurred in production, or to trigger alerts when a metric crosses a certain threshold.
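As a minimal sketch of such a drift condition (this is our illustrative stand-in, not Evidently's implementation): run a two-sample Kolmogorov-Smirnov test per feature, and flag dataset drift when the share of drifting features crosses a threshold. The function name and thresholds here are our own choices.

```python
import numpy as np
from scipy.stats import ks_2samp


def dataset_drift(reference: np.ndarray, current: np.ndarray,
                  p_threshold: float = 0.05,
                  share_threshold: float = 0.5) -> bool:
    """Return True when too many individual features drift.

    A feature counts as drifting when a two-sample KS test rejects
    the hypothesis that its reference and current samples come from
    the same distribution (p-value below p_threshold).
    """
    n_features = reference.shape[1]
    drifted = sum(
        ks_2samp(reference[:, i], current[:, i]).pvalue < p_threshold
        for i in range(n_features)
    )
    # Dataset-level drift: the share of drifting features is too high.
    return bool(drifted / n_features >= share_threshold)
```

This returns exactly the boolean answer we want to track: has drift occurred, yes or no.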
To log the drift results, we can use MLflow Tracking. MLflow is a popular library for managing the ML lifecycle. In this case, we use Evidently and our custom function to generate the output (the dataset drift metric) and then log it with MLflow.
You can follow along in our example Jupyter notebook. It is an easy one!
Here is how the results look in the MLflow interface: the dataset drift metric is logged for each run.