November 12, 2021 · Last updated: April 27, 2023

Evidently 0.1.30: Data drift and model performance evaluation in Google Colab, Kaggle Kernel, and Deepnote


We are excited to announce our latest release!

TL;DR: You can now use Evidently to display visual reports not only in Jupyter notebooks but also in Google Colab, Kaggle, and Deepnote.

How does it work?

Pretty much the same way as it does in a Jupyter notebook.

To generate a report in Colab, for example, you need to do the following.

1. Install Evidently. This time, you do not need to install the nbextension we use to display the dashboards inside Jupyter notebook. Just run:

!pip install evidently

2. Import the necessary Evidently components and your dataset.

In practice, you would use your model application logs. For demonstration purposes, we take the Iris dataset and imitate data drift analysis.

import pandas as pd
import numpy as np

from sklearn import datasets

from evidently import ColumnMapping
from evidently.report import Report
from evidently.metric_preset import DataDriftPreset

# Load the toy Iris dataset as a pandas DataFrame
iris_data = datasets.load_iris(as_frame=True)
iris_frame = iris_data.frame

3. Calculate the drift report:

data_drift_report = Report(metrics=[
   DataDriftPreset(),
])

data_drift_report.run(
    current_data=iris_frame.iloc[:60],
    reference_data=iris_frame.iloc[60:],
    column_mapping=None,
)
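
For the Iris demo, column_mapping=None works because Evidently infers the column types automatically. With your own data, you would typically build a ColumnMapping object (already imported above) and pass it to the run call instead. A minimal sketch with hypothetical column names, to be adjusted to your dataset:

column_mapping = ColumnMapping(
    target='target',                        # hypothetical: your target column
    prediction='prediction',                # hypothetical: your prediction column
    numerical_features=['num_feature_1'],   # hypothetical numerical columns
    categorical_features=['cat_feature_1'], # hypothetical categorical columns
)
# Then pass it to the run call: data_drift_report.run(..., column_mapping=column_mapping)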

4. To display the report, run:

data_drift_report.show()

Here you go!

[Screenshot: Evidently Iris data drift report rendered in Google Colab]

To browse the examples on Colab, see sample notebooks listed in the Evidently documentation.

If you use Kaggle or Deepnote, you should explicitly set the mode parameter to "inline":

data_drift_report.show(mode='inline')
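
If inline rendering does not work in your environment, or you want to keep the output outside the notebook, you can also export the same report to a standalone HTML file and open or share it separately. A minimal sketch; the file name is arbitrary:

data_drift_report.save_html('iris_data_drift_report.html')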

We will also gradually add support for more environments.

If you are reading this blog from the future, be sure to check our official documentation for the latest updates!

When should I use Evidently in the notebook?

Ad hoc visual analysis does not scale well when you have multiple models in production and need to monitor them in an automated fashion.

Still, it can be pretty helpful for the following:

  1. Evaluate the tool if you are new to Evidently. Check out the examples in the documentation and get a taste of what Evidently can do.
  2. Evaluate an ML model before deployment. Data drift analysis can be handy even before you deploy a model to production: you can use it to learn the past patterns and define your monitoring strategy. You can also evaluate the model's performance, for example, to choose between two models with similar quality.
  3. Run report-based monitoring. You can generate regular reports, for example, weekly, to check on your model performance. This helps keep close tabs on a new model or share a summary with other stakeholders. If you run batch models, you might not need live dashboards at all: just schedule the Evidently reports with a tool like Airflow, as in the sketch after this list.
  4. Debug model decay. If model drift is detected, you need to drill down to the root cause and figure out how to address it. Ad hoc visual analysis is often the first step, and you can use the pre-built Evidently reports as a starting point.
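
To make the scheduling idea from point 3 concrete, here is a minimal sketch of a weekly report job, assuming Airflow 2.x and the Report API shown above. The DAG name, schedule, output path, and the Iris data split are placeholders for your own pipeline and batch data:

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from sklearn import datasets

from evidently.report import Report
from evidently.metric_preset import DataDriftPreset


def build_drift_report():
    # In practice, load your reference and current batches from storage;
    # the Iris split below just mirrors the demo above.
    iris_frame = datasets.load_iris(as_frame=True).frame
    report = Report(metrics=[DataDriftPreset()])
    report.run(reference_data=iris_frame.iloc[60:], current_data=iris_frame.iloc[:60])
    report.save_html('/tmp/data_drift_report.html')  # archive or share the HTML file


with DAG(
    dag_id='weekly_drift_report',      # hypothetical DAG name
    start_date=datetime(2021, 11, 1),
    schedule_interval='@weekly',
    catchup=False,
) as dag:
    PythonOperator(task_id='generate_report', python_callable=build_drift_report)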

And if you want live dashboards, don't miss out on our recent Grafana integration.

Sounds good. How can I try it?

Go to GitHub, pip install evidently, and give it a spin! And give it a star if you like it!

If you want more detailed documentation and more examples, here they are.

Want to stay in the loop?
Sign up for the User newsletter to get updates on new features, integrations, and code tutorials. No spam, just good old release notes.

This is an early release, so let us know about any bugs! You can open an issue on GitHub or post in our Discord community.

https://www.linkedin.com/in/elenasamuylova/
Elena Samuylova

Co-founder and CEO

Evidently AI
https://www.linkedin.com/in/emelidral/
Emeli Dral

Co-founder and CTO

Evidently AI
