Hacker News

There is one thing unique to human cognition: the allowance to be fallible. AI will never solve problems perfectly either, but whereas we are forgiven for that, it will not be, because we've relinquished our control to it ostensibly in exchange for perfection.



It may interest you to know that machine learning algorithms often have an element of randomness. So they are allowed to explore failures. Two copies of the same program, trained separately and seeded with different random numbers, may come up with different results.
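To make the seeding point concrete, here is a minimal sketch of a toy stochastic "trainer" (the function and its details are hypothetical, standing in for any learner with random initialization and noisy updates):

```python
import random

def train_tiny_model(seed, steps=100):
    """Toy 'training' loop: a single 1-D weight nudged by noisy
    gradient steps toward a target. Hypothetical illustration only."""
    rng = random.Random(seed)
    w = rng.uniform(-1.0, 1.0)  # random initialization depends on the seed
    for _ in range(steps):
        # noisy gradient toward the target 0.5, plus exploration noise
        grad = (w - 0.5) + rng.gauss(0.0, 0.1)
        w -= 0.1 * grad
    return w

# Same program, different seeds -> different learned weights
print(train_tiny_model(seed=1))
print(train_tiny_model(seed=2))
```

Two runs with the same seed are identical, while two runs with different seeds diverge, which is exactly the "two copies, different results" behavior described above.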

I'm not saying there will or won't be AI some day, I just thought that point was relevant to your comment.


Sorry, I found that a bit difficult to follow... it sounds like you think human programmers will beat AIs because we can make mistakes?

Well, here's my counter: mistakes either make sense to make or they don't. If they make sense to make, they aren't mistakes, and AIs will make the "mistakes" (e.g. inserting random behavior every so often just to see what happens--it's easy to program an AI to do this). If they don't make sense to make, making them will not be an advantage for humans.

At best you're saying that we'll hold humans of the future to a lower bar, which does not sound like much of an advantage.



