Did you miss some of the latest updates to the Evidently open-source Python library? We summed up a few features we shipped recently in one blog.
All these features are available in Evidently 0.4.11 and above.
We also send open-source release notes like this in the newsletter every couple of months. Sign up here.
You can now evaluate and monitor your ranking and recommendation models in Evidently.
What's cool about it?
We covered not only standard metrics like Normalized Discounted Cumulative Gain (NDCG) or Recall at top-K but also behavioral metrics like Serendipity or Popularity Bias.
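Here is a minimal sketch of what a ranking evaluation can look like. The toy data is made up, and the metric classes and ColumnMapping fields (recommendations_type, user_id, item_id) follow the 0.4.x docs, so check the API reference for your version:

```python
import pandas as pd

from evidently import ColumnMapping
from evidently.report import Report
from evidently.metrics import NDCGKMetric, RecallTopKMetric

# Toy interaction log: one row per recommended item,
# with the predicted rank and the true relevance label.
current = pd.DataFrame({
    "user_id": [1, 1, 1, 2, 2, 2],
    "item_id": ["a", "b", "c", "a", "c", "d"],
    "prediction": [1, 2, 3, 1, 2, 3],  # predicted rank of the item
    "target": [1, 0, 1, 0, 1, 0],      # 1 = relevant, 0 = not relevant
})

column_mapping = ColumnMapping(
    recommendations_type="rank",  # predictions are ranks, not scores
    target="target",
    prediction="prediction",
    user_id="user_id",
    item_id="item_id",
)

report = Report(metrics=[NDCGKMetric(k=3), RecallTopKMetric(k=3)])
report.run(reference_data=None, current_data=current, column_mapping=column_mapping)
report.save_html("recsys_report.html")
```

Note that behavioral metrics like Serendipity or Popularity Bias need additional context, such as the training interactions, so they take extra inputs; see the docs for details.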
Learn more:
You can set Warnings for non-critical Tests in a Test Suite. If you want to get a "Warning" instead of "Fail" for a particular test, set the "is_critical" parameter to False.
What's cool about it?
You can flexibly design alerting and logging workflows by splitting the Tests into groups: for example, set alerts only on critical failures and treat the rest as informational reports.
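For instance, a minimal sketch with toy data; the test names and the lt condition argument follow the 0.4.x reference, so double-check them for your version:

```python
import pandas as pd

from evidently.test_suite import TestSuite
from evidently.tests import TestNumberOfDriftedColumns, TestShareOfMissingValues

reference_df = pd.DataFrame({"feature": [1.0, 2.0, 3.0, 4.0]})
current_df = pd.DataFrame({"feature": [1.1, 2.2, None, 4.1]})

suite = TestSuite(tests=[
    # Critical by default: a failure is reported as Fail.
    TestNumberOfDriftedColumns(),
    # Non-critical: a failure is reported as a Warning instead.
    TestShareOfMissingValues(lt=0.3, is_critical=False),
])
suite.run(reference_data=reference_df, current_data=current_df)
print(suite.as_dict()["summary"])  # inspect test statuses programmatically
```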
Are you computing Test Suites on a cadence? You can now add a new type of monitoring panel to track the results of each Test Suite over time in the Evidently UI.
This is in addition to all the panels that help visualize the metric values. You can also choose which subset of tests to show together using tags. Say, you can add one monitoring panel to track failed data quality checks, another for data drift, and so on.
What's cool about it?
You can choose a detailed view option. It will show not just the combined results but also a granular breakdown of all tests, such as which exact features drifted.
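As a rough sketch, adding such a panel could look like this. The workspace path, project name, and tag value are placeholders, and the panel classes (DashboardPanelTestSuite, TestSuitePanelType, ReportFilter) follow the 0.4.x dashboards API as we understand it, so verify them against your version:

```python
from evidently.ui.workspace import Workspace
from evidently.ui.dashboards import (
    DashboardPanelTestSuite,
    ReportFilter,
    TestSuitePanelType,
)
from evidently.renderers.html_widgets import WidgetSize

ws = Workspace.create("workspace")           # local workspace folder (placeholder)
project = ws.create_project("Demo project")  # placeholder project name

project.dashboard.add_panel(
    DashboardPanelTestSuite(
        title="Data quality checks",
        # Show only Test Suites tagged "data_quality" on this panel.
        filter=ReportFilter(
            metadata_values={},
            tag_values=["data_quality"],
            include_test_suites=True,
        ),
        size=WidgetSize.HALF,
        # DETAILED shows each test's status over time, not just the totals.
        panel_type=TestSuitePanelType.DETAILED,
    )
)
project.save()
```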
Learn more:
You can deploy an Evidently collector service to integrate with your ML service.
In this scenario, you can POST your input data and model predictions directly from your ML service. The Evidently service will collect online events into batches, create Reports or Test Suites over them, and save them as snapshots you can later visualize in the monitoring UI.
What's cool about it?
No need to write Python code or manage monitoring jobs: you can define the monitoring setup via a configuration file.
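To illustrate the flow from the ML service side, here is a purely hypothetical sketch: the endpoint path, collector id, and payload shape below are placeholders we made up, not the documented API, so follow the collector docs for the real contract and configuration file format:

```python
import pandas as pd
import requests

# Hypothetical values: replace with the address of your deployed collector
# service and the collector id defined in your configuration file.
COLLECTOR_URL = "http://localhost:8001"
COLLECTOR_ID = "default"

# A batch of recent inputs and predictions from the ML service.
events = pd.DataFrame({
    "feature": [0.2, 0.5, 0.9],
    "prediction": [0, 1, 1],
})

# POST the events; the collector buffers them into batches, computes
# Reports or Test Suites on a schedule, and saves them as snapshots.
# The path below is a placeholder, not the documented endpoint.
requests.post(
    f"{COLLECTOR_URL}/{COLLECTOR_ID}/data",
    json=events.to_dict(orient="records"),
    timeout=10,
)
```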
Learn more:
You can finally run data drift calculations on Spark.
We currently support only some of the drift detection methods on Spark, but we'll be adding more metrics over time. Which metrics would you like to work on Spark next? Open an issue on GitHub to tell us.
What's cool about it?
If you deal with large datasets, your life is now much easier!
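A minimal sketch of the Spark path; the SparkEngine import and the engine argument follow this release's docs as we recall them, so treat them as assumptions and check the reference for your version:

```python
from pyspark.sql import SparkSession

from evidently.report import Report
from evidently.metric_preset import DataDriftPreset
from evidently.spark.engine import SparkEngine  # import path per the 0.4.x docs

spark = SparkSession.builder.getOrCreate()

# Reference and current data stay as Spark DataFrames; no pandas conversion.
reference = spark.read.parquet("reference.parquet")  # placeholder paths
current = spark.read.parquet("current.parquet")

report = Report(metrics=[DataDriftPreset()])
# Passing the Spark engine runs the supported drift metrics on Spark.
report.run(reference_data=reference, current_data=current, engine=SparkEngine)
report.save_html("drift_report.html")
```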
Learn more:
Our Monitoring UI is getting better day by day!
You can now browse tags in the interface as you look for individual Reports or Test Suites, easily switch between different monitoring periods, view metadata, and more!
Learn more:
You can show the feature importances on the Data Drift dashboard.
This will help sort features by importance when viewing the data drift results.
What's cool about it?
You can pass the feature importances as a list. But if you don't, Evidently can train a background model and derive the importances from it.
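For instance, a sketch of the derive-it-for-me option with toy data; the feature_importance flag is our reading of this release's reference, so verify the exact parameter for your version:

```python
import pandas as pd

from evidently.report import Report
from evidently.metric_preset import DataDriftPreset

# Toy reference and current data.
reference = pd.DataFrame({"age": [25, 32, 47, 51], "income": [40, 55, 70, 62]})
current = pd.DataFrame({"age": [26, 35, 44, 58], "income": [42, 90, 95, 88]})

# With feature_importance=True, Evidently trains a background model
# and sorts the drift table by the derived importances.
report = Report(metrics=[DataDriftPreset(feature_importance=True)])
report.run(reference_data=reference, current_data=current)
report.save_html("data_drift.html")
```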
Learn more:
Do you want to use the Evidently Monitoring UI without self-hosting? Evidently Cloud is currently in private beta. Sign up here to join our early tester program.
Sign up for the User newsletter to get updates on new features, integrations, and code tutorials. No spam, just good old release notes.