
There are ways to account for that. Fit a model to race first, then train a second model that only predicts "on top" of race (i.e., the residuals). You use that second model, which is independent of race.
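A minimal sketch of that residualization idea, on hypothetical synthetic data (the variable names `group`, `skill`, and the coefficients are illustrative assumptions, not anything from this thread):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical synthetic data: a binary sensitive attribute and one feature.
group = rng.integers(0, 2, n)      # sensitive attribute (stand-in for race)
skill = rng.normal(0, 1, n)        # legitimate predictor
y = 2.0 * skill + 1.5 * group + rng.normal(0, 0.5, n)

# Step 1: fit the outcome on the sensitive attribute alone
# (here just per-group means) and take residuals.
group_means = np.array([y[group == g].mean() for g in (0, 1)])
resid = y - group_means[group]

# Step 2: fit the residuals on the remaining features with ordinary
# least squares. This second-stage model never sees the attribute.
X = np.column_stack([np.ones(n), skill])
beta, *_ = np.linalg.lstsq(X, resid, rcond=None)

pred = X @ beta
print(beta[1])  # recovered slope on skill
```

The two-stage fit removes the group-level offset before the main model ever trains, so the second model's coefficients can't encode the sensitive attribute directly.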



This only works if you're aware that race is even a factor. If you're not aware of the problematic factors, then you can't correct for them.


How do you know you have the correct model, and that it isn't making the system more racist instead of less?

Machine learning is also very opaque.


But if the racial factors aren't all the same, then that creates an incentive for people to lie about their race.

If you verify the race field, then now you're in the business of enforcing racial definitions.



