
There is no ultimate source of truth, so I don't see how AI/ML could make that distinction.



Yeah, I agree. This is another example of our own biases/errors being injected into the data, poisoning any models we try to build with it. On the surface it seems unlikely there's much of a way to compensate for that, and if there were, it would probably have to be tailored to each specific type of bias/error.
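To illustrate the "tailored to each specific type of bias" point: even the simplest correction only works when you know exactly which bias you're fixing. A minimal sketch (hypothetical helper, not from the thread) that compensates for one known bias, class imbalance from skewed sampling, by reweighting samples inversely to class frequency:

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Per-class sample weights inversely proportional to class frequency.

    This only corrects one specific, known bias (over-sampled classes);
    it does nothing for label noise, annotator bias, etc. -- which is
    exactly why each bias type needs its own tailored fix.
    """
    counts = Counter(labels)
    n_classes = len(counts)
    total = len(labels)
    # weight = total / (n_classes * count): a perfectly balanced
    # dataset would give every sample a weight of 1.0
    return [total / (n_classes * counts[y]) for y in labels]

# A biased sample: class "a" appears 3x as often as class "b".
labels = ["a", "a", "a", "b"]
weights = inverse_frequency_weights(labels)
print(weights)  # "a" samples down-weighted, the "b" sample up-weighted
```

Note the correction requires knowing the true class distribution (assumed uniform here); if that assumption is itself wrong, the "fix" injects a new bias.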




