
Yeah, unfortunately it often ends up being code for adjusting ML models to support certain world views or political biases.

It's too bad we haven't been able to separate the data science questions about how we feel about the training data from the operational questions of (a) whether it's appropriate to make a determination algorithmically at all, and (b) whether the specific model is suited to that decision. Instead we get vague statements about harms and biases.
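
To make that concrete, here's a minimal sketch of what a measurable data science question can look like, as opposed to a vague statement about bias: checking whether a binary decision model's approval rate differs across groups (a demographic parity gap). All names and data below are illustrative, not any particular system's API; whether this particular metric is the right one is exactly the operational question left open above.

    # Minimal sketch: turn a vague "bias" concern into a measurable
    # question -- do approval rates differ across groups?
    # All data and names here are hypothetical, for illustration only.

    from collections import defaultdict

    def approval_rates(decisions, groups):
        """Per-group rate of positive decisions (1 = approve, 0 = deny)."""
        totals = defaultdict(int)
        positives = defaultdict(int)
        for d, g in zip(decisions, groups):
            totals[g] += 1
            positives[g] += d
        return {g: positives[g] / totals[g] for g in totals}

    def parity_gap(decisions, groups):
        """Largest pairwise difference in approval rates across groups."""
        rates = approval_rates(decisions, groups)
        return max(rates.values()) - min(rates.values())

    # Hypothetical model outputs and group labels.
    decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
    groups    = ["a", "a", "a", "b", "b", "b", "b", "a", "a", "b"]

    print(approval_rates(decisions, groups))  # {'a': 0.6, 'b': 0.4}
    print(parity_gap(decisions, groups))      # 0.2 (a 20-point gap)

A number like that is something you can argue about on the merits; "this model is harmful" without such a measurement isn't.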



