
Yes. Machine learning models learn from the data they are fed, so they inherit whatever biases are embedded in that data. There is no "natural" fix for this, because we are naturally biased, and worse, we don't all agree on a single set of moral values.
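A minimal sketch of that mechanism, using entirely hypothetical data (the group names, outcomes, and frequencies below are invented for illustration): a model that learns purely from observed frequencies will faithfully reproduce whatever skew its training data contains, with no built-in notion that the skew is undesirable.

```python
from collections import Counter

# Toy "training corpus": historical outcomes skewed between two groups.
# (Hypothetical data; the skew is deliberate.)
training_data = [
    ("group_a", "hired"), ("group_a", "hired"), ("group_a", "rejected"),
    ("group_b", "hired"), ("group_b", "rejected"), ("group_b", "rejected"),
]

def train(data):
    """Estimate P(hired | group) purely from observed frequencies."""
    totals, hired = Counter(), Counter()
    for group, outcome in data:
        totals[group] += 1
        hired[group] += (outcome == "hired")
    return {g: hired[g] / totals[g] for g in totals}

model = train(training_data)
# The learned model mirrors the skew in its inputs: group_a scores
# higher than group_b simply because the historical data said so.
```

Nothing in the training step can distinguish "signal" from "bias"; any correction has to be imposed from outside, which is exactly where the hand-coded value judgments come in.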

Thus, any technique aiming to eliminate bias must take the form of hard-coded definitions of what its author considers the correct set of morals. Current methods may be too specific, but there will never be a perfect system, because humans cannot fully define every edge case of a set of moral values in the first place.



