When 2021 kicked in, the Evidently library was just about a month old. We launched the first version of the open-source tool on November 30, 2020. Back then, it could generate only a single visual report, on Data Drift.
A lot has happened since!
We started Evidently with the vision of building a comprehensive open-source ML monitoring platform. We reviewed our experience of running production ML systems, spoke to dozens of machine learning teams, and painted a thorough picture of all the workflows a monitoring tool might need to cover.
But we had to start somewhere! As the first step, we released a Data Drift detection tool that runs a set of statistical tests and automatically generates an interactive visual report. In 2021, we went on to build on top of it.
First, we added more reports, including Categorical and Numerical Target Drift, Regression Model Performance, and Classification Model Performance (Probabilistic Classification, too). They cover more aspects of model performance, both when you have the ground truth labels and when you do not.
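For a taste of what this looks like in practice, here is a minimal sketch of generating a Data Drift report with the 0.1.x API we shipped in 2021 (import paths, and whether tabs are passed as classes or instances, varied across releases):

```python
import numpy as np
import pandas as pd

from evidently.dashboard import Dashboard
from evidently.tabs import DataDriftTab

# Toy data standing in for a reference (training) batch and a current
# (production) batch, with a deliberate shift in one feature
rng = np.random.default_rng(0)
reference = pd.DataFrame({"f1": rng.normal(0, 1, 500), "f2": rng.normal(5, 2, 500)})
current = pd.DataFrame({"f1": rng.normal(0.5, 1, 500), "f2": rng.normal(5, 2, 500)})

# Build the interactive report and save it as a standalone HTML file
dashboard = Dashboard(tabs=[DataDriftTab()])
dashboard.calculate(reference, current, column_mapping=None)
dashboard.save("data_drift_report.html")
```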
Visual reports are great for ad hoc analysis, debugging, or manual checks on batch models. But they are, of course, less suitable for frequent production model runs. So our next move was to make it easier to integrate Evidently into prediction pipelines.
We released the JSON profile feature that generates a summary profile with metrics and statistical test results. In a nutshell, it is a text version of the interactive Evidently reports, available in a format that one can easily plug into existing pipelines. For example, you can use Evidently with MLflow or Airflow to run model performance and data checks as a batch job and build conditional workflows based on the outcomes.
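Here is a minimal sketch of that pattern with the 0.1.x Profile API; the exact JSON key path we read at the end is an assumption about the profile layout of that era:

```python
import json

import numpy as np
import pandas as pd

from evidently.model_profile import Profile
from evidently.profile_sections import DataDriftProfileSection

rng = np.random.default_rng(0)
reference = pd.DataFrame({"f1": rng.normal(0, 1, 500)})
current = pd.DataFrame({"f1": rng.normal(0.5, 1, 500)})

# Compute the same checks as the visual report, but as a JSON summary
profile = Profile(sections=[DataDriftProfileSection()])
profile.calculate(reference, current, column_mapping=None)
report = json.loads(profile.json())

# A batch job (e.g. an Airflow task) can branch on the outcome;
# this key path is an assumption about the profile layout
if report["data_drift"]["data"]["metrics"]["dataset_drift"]:
    print("Drift detected: hold the model update and alert the team")
```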
After that, we addressed more real-time workflows. Through integrations with Grafana and Prometheus, you can use Evidently to get live monitoring dashboards that run continuous evaluation checks on top of a production data stream.
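As a generic illustration of the pattern (this is not the code of the example monitoring service we ship), a drift flag computed by Evidently can be exposed for Prometheus to scrape with the standard prometheus_client library; run_drift_check below is a hypothetical helper wrapping an Evidently check:

```python
import random
import time

from prometheus_client import Gauge, start_http_server

# The gauge Prometheus scrapes and Grafana charts
dataset_drift = Gauge(
    "evidently_dataset_drift",
    "1 if dataset drift was detected in the last window, else 0",
)

def run_drift_check() -> bool:
    """Hypothetical helper: run an Evidently drift check over the latest
    window of production data and return the dataset-level drift flag."""
    return random.random() < 0.1  # placeholder result for this sketch

start_http_server(8000)  # metrics served at http://localhost:8000/metrics
while True:
    dataset_drift.set(1 if run_drift_check() else 0)
    time.sleep(60)  # re-evaluate once a minute
```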
We then continued to develop Evidently with all three formats in mind:
On the reporting side, we added support for other notebook environments. You can now run the Evidently reports in Colab, Kaggle, or Deepnote.
In our most recent release, we added various customization options. Great defaults are helpful but never perfect. You can now make small and large changes to Evidently with ease: swap report contents and statistical tests, or add custom widgets and tabs. You can adapt it to your use case or even create a new report to cover a new aspect of model performance while reusing the underlying Evidently framework.
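For instance, here is a sketch of swapping in a custom statistical test; the DataDriftOptions field name and the expected test function signature are assumptions based on the options interface from these releases:

```python
from scipy import stats

from evidently.dashboard import Dashboard
from evidently.options import DataDriftOptions
from evidently.tabs import DataDriftTab

def anderson_stat_test(reference_data, current_data):
    """Custom two-sample test: return a p-value-like score for the drift decision."""
    return stats.anderson_ksamp([reference_data, current_data]).significance_level

# Use the custom test for every feature instead of the default;
# the feature_stattest_func field name is an assumption
options = DataDriftOptions(feature_stattest_func=anderson_stat_test)
dashboard = Dashboard(tabs=[DataDriftTab()], options=[options])
```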
As makers of an open-source tool, we have minimal visibility into our actual users. Proxy metrics become important in this context! The number of GitHub stars is one of them, as it helps gauge community awareness of and support for the project.
We are now approaching 2000 GitHub stars on our project repo. Thanks to everyone for expressing their support and opening issues or even contributing to the project!
In summer 2021, we joined Y Combinator, the leading global startup accelerator. It was a huge honor to join the list of companies like Stripe, PagerDuty, Dropbox, and GitLab that went through it before us.
During those short three months, we worked with our peers and group partners to set Evidently on a course from a small open-source tool to a high-growth startup. This included launching the tool on Hacker News and ranking among the top three products of the day on Product Hunt.
Even though it was all remote for us, YC is an experience we recommend to every startup. It sets the right mindset of quick iterations and listening to your users' needs. It also puts you in a cohort of startups like yours and gives you an extensive network you can learn from in the years to come, which we hope to rely on as Evidently grows!
Creating content is a huge part of our work. We collaborate on tutorials and blogs, give talks at meetups and conferences, and build up our documentation and examples to make it easy to onboard and start using the tool. We even started a YouTube channel this year!
We do that to spread the word about Evidently but, no less importantly, to get the community's input. It has been exciting to see the conversation starting around some of the blogs we create. Many readers reached out to share more about the problems they face around the continuous evaluation of ML models and how they solve them. If you have thoughts to share, please reach out too! Here are some of the most-read blogs this year:
Since the very beginning of our work on Evidently, we have looked to connect with our potential users and everyone interested in the challenges of running ML systems in production.
We held a lot of 1:1 conversations via Zoom calls, GitHub discussions, LinkedIn chats, and the various communities we take part in. To continue doing this at scale, we recently launched our Discord community! Over 300 people have already joined the conversation.
While it is the right place to chat about new releases, share feedback, and get support on using Evidently, we don't want it to focus exclusively on the tool. It is hard to separate discussions of things like data drift from the broader context. How do you build robust ML models and maintain their performance? How do you integrate monitoring with other tools in the stack? How do you evaluate the model in the business context?
We hope to grow the Evidently community into a place to chat about all things production machine learning. So even if you are not a user yet, come hang out in the #ds-ml-questions channel, share and find useful content, and get an early preview of what we are building next: join us here!
For a significant part of the year, contributing to Evidently was not straightforward. On the one hand, the tool was in active development, and we frequently refactored the code. On the other hand, there was no proper technical documentation, contribution guide, or test suite. These things take time to build.
We addressed many of these towards the end of the year (and some are yet to be covered!), and we are excited about the contributions we already see coming in!
If you would like to take part, there are two ways to join. First, you can contribute to the core codebase, by implementing new features or adding tests, for example.
Second, you can build your own custom reports on top of Evidently, and we will showcase them in a gallery of community examples. You can share how you combine different evaluation metrics and tests for a particular industry use case, or create new report types.
We are very grateful for the contributions we have already received (and the ones users are working on!), and we hope to see more in the next year!
Last but not least: a full-time team is now working on Evidently, and it is growing!
A couple of months ago, Olga joined the team as a Senior Data Scientist and Vyacheslav as a Senior Software Developer. We are thrilled to have them bring years of experience in working on applied machine learning solutions and building large-scale software systems. We expect one more developer to join us in January, and more hires to be made throughout the year.
This has been an instant multiplier for our ability to ship new features. We have been releasing updates every week and now have a thorough collection of example notebooks that show the key features and make it dead easy to start working with Evidently.
2021 has been a busy year, but there is a lot ahead!
Expect even more functionality that will take Evidently toward a complete open-source ML monitoring platform, along with more high-quality content and tutorials.
Here are a few things we will work on from the very beginning of the next year:
Have a thought on what we should add first? Tell us by opening a GitHub issue with a feature request or writing to us on Discord. Want to contribute to Evidently? Come and share what you'd like to add!
We thank everyone for supporting us and being part of the community! If you want to follow our journey, subscribe to our newsletter, and join our Discord.
Here's to more great things in 2022!