
That's not how it works: it's a neural network, presumably trained with as little bias as possible. Try other statements about the two candidates and you'll find your claim is patently false.
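
To make that testable: a toy probe, where toxicity_score is a hypothetical stub standing in for the real black-box model and "alice"/"bob" are invented names. Score otherwise identical sentences that differ only in which candidate is named; if the model is neutral, the gap should be near zero.

    # Hypothetical stub standing in for the real model's scoring call.
    def toxicity_score(text):
        return 0.9 if "alice" in text else 0.1  # imagine a model call here

    templates = ["{} would be a terrible president",
                 "{} is unfit for office"]

    # Swap only the name; a neutral model should score each pair alike.
    for t in templates:
        a = toxicity_score(t.format("alice"))
        b = toxicity_score(t.format("bob"))
        print(t.format("<name>"), f"gap={abs(a - b):.2f}")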



On the contrary, it was almost certainly built with supervised learning: some set of people had to select and label the training data, and their biases are cooked directly into the resulting software.
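
A minimal sketch of how that happens, using scikit-learn and invented sentences and labels: the classifier never sees "truth", only the annotators' labels, so any systematic preference in those labels becomes the model's training signal.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    # Invented annotations: the labelers flagged statements about "alice"
    # as toxic while passing identical statements about "bob".
    texts  = ["alice is great", "alice is awful",
              "bob is great",   "bob is awful"]
    labels = [1, 1, 0, 0]   # 1 = toxic, per the annotators

    vec = CountVectorizer()
    clf = LogisticRegression().fit(vec.fit_transform(texts), labels)

    # The model reproduces the labelers' preference on unseen text.
    print(clf.predict(vec.transform(["alice spoke today"])))  # likely [1]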


Actually, neural networks are notorious for having biases. It's ignorant to think that just because a machine is making the decision instead of a human, the decision is automatically fair. Google is researching the problem of biases in neural networks: https://research.google.com/bigpicture/attacking-discriminat...
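
One simple check in the spirit of that line of research, with invented verdicts rather than real model output: compare how often the model flags comments about each group. A large gap is a red flag before you even ask which individual verdicts were correct.

    # Invented model verdicts (1 = flagged) on comments about each candidate.
    flags_a = [1, 1, 1, 0]
    flags_b = [0, 1, 0, 0]

    def flag_rate(verdicts):
        return sum(verdicts) / len(verdicts)

    print(f"flag rate for A: {flag_rate(flags_a):.2f}")  # 0.75
    print(f"flag rate for B: {flag_rate(flags_b):.2f}")  # 0.25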


The word "presumably" is probably the point where our interpretations diverge. I don't trust Google's black box AI to be both intentionally and effectively trained in a neutral manner. Further, I don't even think neutrality can exist within subjective filtering as the concept of neutrality itself is perceptually relative.



