
Honestly that's worse than I thought. I work in the field, particularly in relation to accountable AI, and it's not OK to have models that tell you whether to check on people to make sure they're not dying unless a human is also checking every case, which I hope is what's going on. How would you like to be different from the training data and deemed "no risk, 100% confidence" when you actually have a life-threatening problem?



In a hospital setting, nurses and doctors round regularly. No one is talking about using AI as a replacement for that, because no one has anything approaching that much trust in predictive models.

Predictive models are most often used as either an alerting mechanism or an additional data point on a dashboard. You need to be careful of alert fatigue, where too many false positives cause humans to disregard all alerts from the model. And even if people don't start ignoring alerts, you can waste a lot of their time and energy by constantly having them run to check on someone who is actually fine.
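The threshold tradeoff behind alert fatigue can be made concrete with a small sketch. This is purely illustrative: the risk scores, outcomes, and the `alert_stats` helper are made up, not from any real clinical system.

```python
def alert_stats(scores, labels, threshold):
    """Count alerts fired, true positives caught, false alarms,
    and missed cases for a given risk-score threshold."""
    alerts = [(s, y) for s, y in zip(scores, labels) if s >= threshold]
    fired = len(alerts)
    true_pos = sum(1 for _, y in alerts if y == 1)
    return {
        "fired": fired,
        "true_pos": true_pos,
        "false_alarms": fired - true_pos,
        "missed": sum(labels) - true_pos,  # real cases below threshold
    }

# Toy data: model's predicted risk and actual outcome (1 = deteriorated)
scores = [0.95, 0.80, 0.70, 0.60, 0.55, 0.40, 0.30, 0.20, 0.10, 0.05]
labels = [1,    1,    0,    0,    1,    0,    0,    0,    0,    0]

# A low threshold misses nothing but floods staff with false alarms;
# a high threshold quiets the pager but misses a real deterioration.
print(alert_stats(scores, labels, 0.25))
print(alert_stats(scores, labels, 0.75))
```

On this toy data, the 0.25 threshold fires 7 alerts to catch 3 real cases (4 false alarms), while the 0.75 threshold fires only 2 clean alerts but misses one real deterioration. Neither end is safe, which is exactly why these models end up as one signal among many rather than a replacement for rounding.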




