I'm not devishard, but I parsed his statement slightly differently. He's not saying that Erlang makes things parallel magically. Rather, he's saying that Erlang forces tasks that /could/ be parallel to be parallel by default. Thus, Erlang will tend to maximize the sections of your program that are run in parallel compared to other languages.
Not really. Think about every object you'd have in Java that's being passed around your system. Now imagine each of those objects is its own process, and you're passing around references to them.
Just in that one case, you've taken huge chunks of a linear execution pattern and parallelized them. Now make that your norm and apply it to everything. Then realize that message passing lets this mode of operation spread each part of the workload not just over more cores, but over more machines across the network.
And then realize that you can deploy updates to individual parts of this code while the other parts keep running, without taking down the whole system.
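To make the process-per-object idea concrete, here's a rough sketch in Python rather than Erlang (the counter actor, its message names, and the queues are all made up for illustration): the object becomes a process that owns its own state, and the rest of the system only ever sends it messages.

    # Rough analogue of "every object is a process with a mailbox".
    # In Erlang you'd spawn a process and keep its pid; here the actor is an
    # OS process owning its state, reachable only through its queue.
    from multiprocessing import Process, Queue

    def counter(mailbox, replies):
        count = 0
        while True:
            msg = mailbox.get()          # block until a message arrives
            if msg == "increment":
                count += 1
            elif msg == "get":
                replies.put(count)       # send the current state back
            elif msg == "stop":
                break

    if __name__ == "__main__":
        mailbox, replies = Queue(), Queue()
        actor = Process(target=counter, args=(mailbox, replies))
        actor.start()

        for _ in range(3):
            mailbox.put("increment")     # callers never touch the state directly
        mailbox.put("get")
        print(replies.get())             # -> 3
        mailbox.put("stop")
        actor.join()

In Python this pattern is heavyweight, so nobody structures a whole program this way; in Erlang processes are cheap and state is immutable, so it's the default, which is the point being made above.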
None of what you said eliminates the waiting inherent in the majority of numerical algorithms.
No one's disputing Erlang's prowess at parallelism. What the critic in this thread was saying is that you only get an Nx speedup on an N-core processor for a limited set of algorithms. Most parallel algorithms don't fall into this category. Amdahl's law is a general truth; it doesn't matter what your architecture or language is. There is nothing special about Erlang that will make an arbitrary parallel algorithm scale linearly with nodes.
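To put numbers on that: Amdahl's law says the speedup on N cores is 1 / ((1 - p) + p/N), where p is the fraction of the work that can run in parallel. A quick back-of-the-envelope calculation (the values of p below are just illustrative) shows how hard the serial remainder caps you, no matter the language:

    # Amdahl's law: speedup on n cores given parallel fraction p.
    def speedup(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    for p in (0.50, 0.90, 0.99):
        print(f"p={p:.2f}: 8 cores -> {speedup(p, 8):.1f}x, "
              f"64 cores -> {speedup(p, 64):.1f}x, "
              f"limit -> {1 / (1 - p):.0f}x")
    # p=0.50: 8 cores -> 1.8x, 64 cores -> 2.0x, limit -> 2x
    # p=0.90: 8 cores -> 4.7x, 64 cores -> 8.8x, limit -> 10x
    # p=0.99: 8 cores -> 7.5x, 64 cores -> 39.3x, limit -> 100x

Even a program that is 90% parallel can never exceed a 10x speedup, no matter how many cores or nodes you throw at it.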
I'm interpreting "embarrassingly parallel" to mean that it's obvious the task can be parallelized, and I'm saying that many tasks where this isn't obvious in a more serial language are obvious in Erlang.
No, I'm not claiming Erlang breaks Amdahl's Law. I'm claiming that Amdahl's Law applies less often than people think it does.