
Thanks for the support!

For the moment the workflows you describe are built externally:

- To set up notifications, you can send the output from Evidently to other tools like Grafana and build a notification workflow around it. If you have a batch model, you can use a workflow manager (like Airflow, or simply a cron job) to schedule a monitoring job at every model run, then log the results or send an email report.

- Thresholds are manual. We learnt that model owners usually have to tune them anyway, since models differ a lot: a small deviation in one model is nothing, in another it is a disaster. But we plan to add the ability to generate default thresholds as the tool grows.
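To make the external workflow above concrete, here is a minimal pure-Python sketch of a batch monitoring job you could schedule with cron or Airflow. The metric, the threshold value, and the alerting function are hypothetical placeholders for illustration, not Evidently's API:

```python
# Minimal sketch of a scheduled batch monitoring job.
# The threshold is manual, as described above: each model owner tunes it.

def mean_shift(reference, current):
    """Absolute difference of means, as a toy drift metric."""
    ref_mean = sum(reference) / len(reference)
    cur_mean = sum(current) / len(current)
    return abs(cur_mean - ref_mean)

def send_alert(message):
    """Placeholder: in practice, push to Grafana, email, Slack, etc."""
    print("ALERT:", message)

def monitoring_job(reference, current, threshold=0.1):
    """Run once per model run (e.g. from cron or an Airflow task)."""
    shift = mean_shift(reference, current)
    drifted = shift > threshold
    if drifted:
        send_alert(f"mean shift {shift:.3f} exceeds threshold {threshold}")
    return drifted

# Example: a clear shift in the current batch triggers the alert.
reference_batch = [0.0, 0.1, 0.2, 0.1, 0.0]
current_batch = [0.5, 0.6, 0.4, 0.5, 0.6]
monitoring_job(reference_batch, current_batch, threshold=0.1)  # → True
```

In a real setup you would replace the toy metric with the tool's drift report and point `send_alert` at whatever channel your team watches.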

We are working on native integrations and tutorials for MLflow and Grafana in the next couple of weeks.

When you detect data drift, there are usually 3 options:

- Retrain the model if you can (for example, if you can label the new data)

- Limit the model's application (for example, tune the classification threshold, or exclude certain segments)

- Pause the model or use a fall-back strategy (e.g. human-in-the-loop decision making)
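These three options can be wired into a simple decision rule. A hypothetical sketch, where the severity bands and action names are made up for illustration and would be tuned per model:

```python
def drift_response(drift_score, can_label, minor=0.1, severe=0.3):
    """Map a drift score to one of the three responses described above.

    Thresholds `minor` and `severe` are illustrative, not standard values.
    """
    if drift_score < minor:
        return "keep serving"       # no meaningful drift
    if drift_score < severe:
        return "limit application"  # e.g. tune threshold, exclude segments
    if can_label:
        return "retrain"            # new labels available, so retrain
    return "pause / fall back"      # e.g. human-in-the-loop decisions

drift_response(0.05, can_label=True)   # → 'keep serving'
drift_response(0.20, can_label=False)  # → 'limit application'
drift_response(0.50, can_label=True)   # → 'retrain'
drift_response(0.50, can_label=False)  # → 'pause / fall back'
```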

Drift detection is really non-trivial when you have a lot of data: statistical tests become so sensitive at large sample sizes that they will often flag "drift" for tiny, practically irrelevant deviations. We know some users need a solution for this use case, and plan to add something here.
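To see why volume matters: for a two-sample Kolmogorov–Smirnov test at alpha = 0.05, the critical value for the KS statistic shrinks roughly as 1.36 * sqrt((n + m) / (n * m)). So a fixed, tiny distributional difference that passes unnoticed at small n becomes statistically "significant" drift at large n. A sketch with illustrative numbers:

```python
import math

def ks_critical_value(n, m, c_alpha=1.36):
    """Approximate two-sample KS critical value at alpha = 0.05."""
    return c_alpha * math.sqrt((n + m) / (n * m))

# Suppose the true KS distance between reference and current data
# is a fixed, practically negligible 0.02.
observed_distance = 0.02

for n in (1_000, 10_000, 100_000, 1_000_000):
    crit = ks_critical_value(n, n)
    flagged = observed_distance > crit
    print(f"n={n:>9,}  critical={crit:.4f}  drift flagged: {flagged}")
```

With these numbers, the same 0.02 distance is not flagged at n = 1,000 but counts as "drift" from n = 10,000 upward, which is exactly the false-alarm pattern described above.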

Another aspect is that you have to know which features actually matter to the model: monitoring only the important ones helps you avoid too many false alarms.
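One common way to do this is to restrict drift checks to the top-k most important features. A hypothetical sketch, where the importance scores would come from your model (e.g. tree feature importances) and the feature names are made up:

```python
def features_to_monitor(importances, top_k=2):
    """Keep only the most important features for drift monitoring.

    `importances` maps feature name -> importance score from the model.
    """
    ranked = sorted(importances, key=importances.get, reverse=True)
    return ranked[:top_k]

# Illustrative importances, e.g. from a gradient-boosted model.
importances = {"age": 0.45, "income": 0.30, "zip_code": 0.05, "browser": 0.01}
features_to_monitor(importances)  # → ['age', 'income']
```

Drift in `zip_code` or `browser` would then never page anyone, since the model barely uses them.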



