
Why? Is it really that bad of an analogy for an absolute beginner?



It's not terrible for an absolute beginner, but it's fairly harmful overall. People tend to use the analogy to conflate narrow AI with general AI and to argue for regulatory capture based on claims with no evidence behind them. The real brain is sparsely connected and has multiple activation networks that reuse nodes. We can also learn from a single example of something we've never seen before, so it seems unlikely that our brains operate exclusively by derivatives on error or other data-fitting techniques. Humans still seem more unreasonably effective than deep learning on many tasks, and that's despite facing a harder problem (humans have more tasks with unlabelled data, as far as I can tell).
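
To make "derivatives on error" concrete, here's a minimal sketch of what that kind of data fitting looks like, in plain NumPy. The toy model and everything in it are my own illustration, not from any real system: a single dense linear unit, every parameter nudged downhill on the error surface at every step, needing a hundred noisy examples to recover a rule a person would guess from two or three.

    # "Training by derivatives on error": gradient descent on mean squared
    # error for a toy linear model y = w*x + b. Illustrative only.
    import numpy as np

    rng = np.random.default_rng(0)

    # 100 noisy samples of the rule y = 3x + 1.
    x = rng.uniform(-1, 1, size=100)
    y = 3 * x + 1 + rng.normal(0, 0.1, size=100)

    w, b = 0.0, 0.0   # densely connected parameters, all updated every step
    lr = 0.1          # learning rate

    for step in range(500):
        err = (w * x + b) - y
        # Derivatives of mean squared error with respect to each parameter:
        grad_w = 2 * np.mean(err * x)
        grad_b = 2 * np.mean(err)
        w -= lr * grad_w
        b -= lr * grad_b

    print(f"w = {w:.2f}, b = {b:.2f}")  # converges toward w=3, b=1

Note there's no analogue of "see one example, generalize immediately" anywhere in that loop; everything comes from repeated small corrections against labelled data.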


I think so. Primarily because of Dijkstra's anti-anthropomorphic stance, which is very important here.

1. As the other poster noted, people are apt to conflate "strong AI" with what we're actually doing in TensorFlow, leading to very weird overreactions that aren't germane.

2. Just as importantly, developers who buy into this line of thinking are biased against a more accurate understanding of their code, which makes debugging much harder and holds back advances in the underlying technology.

"The implied abstraction ... is however beyond the computing scientist imbued with the operational approach that the anthropomorphic metaphor induces. In a very real and tragic sense he has a mental block: his anthropomorphic thinking erects an insurmountable barrier between him and the only effective way in which his work can be done well." — Dijkstra



