Evidently 0.1.30: Data drift and model performance evaluation in Google Colab, Kaggle Kernel, and Deepnote

November 15, 2021
Evidently with Colab, Deepnote and Kaggle
We are excited to announce our latest release!

TL;DR: Now, you can use Evidently to display dashboards not only in Jupyter notebook but also in Colab, Kaggle, and Deepnote.

How does it work?

Pretty much the same way as it does in the Jupyter notebook.

If you want to generate a report in Colab, for example, you need to do the following.

1. Install Evidently. This time, you do not need the nbextension that is used to display the dashboards inside the Jupyter notebook. Just run:
!pip install evidently
2. Import the necessary Evidently tabs and your dataset.

We expect you to use your model application logs. For demonstration purposes, we just take the Iris dataset and imitate data drift analysis.
import pandas as pd
from sklearn import datasets

from evidently.dashboard import Dashboard
from evidently.tabs import DataDriftTab

iris = datasets.load_iris()
iris_frame = pd.DataFrame(iris.data, columns=iris.feature_names)
3. Calculate the drift report:
iris_data_drift_report = Dashboard(tabs=[DataDriftTab])
iris_data_drift_report.calculate(iris_frame[:100], iris_frame[100:], column_mapping=None)
4. To display the dashboard, run:
iris_data_drift_report.show()
Here you go!
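Under the hood, a data drift report compares the distribution of each feature in the reference data against the current data using statistical tests. As a rough illustration (not Evidently's exact tests or thresholds), here is a minimal standard-library sketch of one such test, the two-sample Kolmogorov-Smirnov statistic:

```python
# Illustrative stand-in for the kind of per-feature two-sample test a
# data drift report runs. Uses only the standard library; Evidently's
# actual statistical tests and thresholds may differ.

def ks_statistic(reference, current):
    """Maximum distance between the two empirical CDFs."""
    ref = sorted(reference)
    cur = sorted(current)
    max_diff = 0.0
    for v in sorted(set(ref + cur)):
        cdf_ref = sum(x <= v for x in ref) / len(ref)
        cdf_cur = sum(x <= v for x in cur) / len(cur)
        max_diff = max(max_diff, abs(cdf_ref - cdf_cur))
    return max_diff

# An identical sample shows no drift; a shifted sample does.
stable = ks_statistic([1, 2, 3, 4, 5], [1, 2, 3, 4, 5])   # 0.0
shifted = ks_statistic([1, 2, 3, 4, 5], [6, 7, 8, 9, 10])  # 1.0
```

The larger the statistic, the further the two distributions have moved apart, which is exactly the signal a drift report surfaces per column.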
Evidently Iris Data Drift example on Google Colab
You can also just go and see the Iris Data Drift example on Colab.

To browse other examples on Colab, see sample notebooks listed in the Evidently documentation.

If you use Kaggle or Deepnote, you should explicitly set the mode parameter to "inline":
iris_data_drift_report.show(mode='inline')
We will also gradually add support for more environments, such as Pylab and JupyterLab.

If you are reading this blog from the future, be sure to check our official documentation for the latest updates, or just try it out!

When should I use Evidently in the notebook?

Ad hoc visual analysis does not scale well when you have multiple models in production and need to monitor them in an automated fashion.

Still, it can be pretty helpful for the following:

1. Evaluating the tool if you are new to Evidently. Check out the examples in the documentation and get a taste of what Evidently can do.

2. Evaluating an ML model before deployment. Data drift analysis can be handy even before you deploy a model to production. You can use it to learn the past patterns and define your monitoring strategy. You can also evaluate the performance of the model: for example, to choose between two models with similar metrics.

3. Report-based monitoring. You can generate regular reports, for example, weekly, to check on your model performance. It might help keep close tabs on a new model or share the summary with other stakeholders. If you have batch models, you might not need live dashboards at all: just schedule the Evidently reports with a tool like Airflow.

4. Debugging model decay. If model drift is detected, you need to drill down to the root cause and figure out how to address it. Ad hoc visual analysis is often the first step, and you can use the pre-built Evidently reports as a starting point.
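When drilling into which columns moved, a quick first cut is to compare per-feature statistics between the reference and current data. Here is a minimal sketch (plain Python, with made-up feature names) of a mean-shift summary measured in reference-standard-deviation units:

```python
# Rough per-feature drift summary: how far has each feature's mean
# moved, in units of the reference standard deviation? The feature
# names and data below are hypothetical, for illustration only.
from statistics import mean, stdev

def drift_summary(reference, current):
    """Map each feature name to its standardized mean shift."""
    summary = {}
    for name in reference:
        ref, cur = reference[name], current[name]
        scale = stdev(ref) or 1.0  # guard against zero-variance features
        summary[name] = abs(mean(cur) - mean(ref)) / scale
    return summary

reference = {"sepal_len": [5.0, 5.2, 4.9, 5.1], "petal_len": [1.4, 1.5, 1.3, 1.4]}
current = {"sepal_len": [5.0, 5.1, 5.0, 5.2], "petal_len": [4.4, 4.6, 4.5, 4.7]}
shifts = drift_summary(reference, current)
# petal_len shows a much larger standardized shift than sepal_len,
# pointing to it as the column to investigate first.
```

A pre-built drift report automates this kind of per-column comparison with proper statistical tests, but a summary like this can help you sanity-check what the report flags.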

And if you want the live dashboards, don't miss out on our recent Grafana integration.

Sounds good. How can I try it?

Go to GitHub, pip install evidently, and give it a spin! And a star, if you like it!

If you want more detailed documentation and more examples, here they are.
This is an early release, so let us know about any bugs! You can open an issue on GitHub or post in our Discord community.
