> CPython threads still can only run on one core of a processor.
The disillusionment caused by having so many options for non-parallel "concurrency" in Python is, I believe, feeding the high defection rate from Python to Go.
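The GIL limitation behind the quoted complaint is easy to demonstrate with CPU-bound work: dispatch the same job to a thread pool and a process pool and compare wall time. This is a minimal sketch, not anyone's production code; the `burn` and `timed` helpers are made up for illustration.

```python
import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def burn(n):
    """CPU-bound work: sum of squares below n."""
    return sum(i * i for i in range(n))

def timed(executor_cls, n=2_000_000, workers=4):
    """Run `burn` in `workers` parallel tasks; return (wall time, results)."""
    start = time.perf_counter()
    with executor_cls(max_workers=workers) as ex:
        results = list(ex.map(burn, [n] * workers))
    return time.perf_counter() - start, results

if __name__ == "__main__":
    t_threads, _ = timed(ThreadPoolExecutor)
    t_procs, _ = timed(ProcessPoolExecutor)
    # On a multi-core machine the process pool usually wins by a wide
    # margin, because the GIL serializes the threaded version.
    print(f"threads:   {t_threads:.2f}s")
    print(f"processes: {t_procs:.2f}s")
```

Note the `if __name__ == "__main__"` guard: `ProcessPoolExecutor` re-imports the module on platforms that spawn workers, which is exactly the kind of gotcha the rest of this thread complains about.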
We've had processes, threads and greenlets for a while now... if anything the problem isn't that there are no options, but that there are too many options that require understanding to choose and apply.
Many of the people complaining about this issue don't have a demonstrated problem and could try any simple approach first (unless the point was just to slam Python in favor of something else from the beginning).
This type of response is why I gave up on Python entirely. Not to pile on you, pekk (it isn't your fault), but it has that defensive-apologist tone: "first of all there is no problem, and if there was a problem... it is that Python is too awesome."
As someone who has had to ship stuff using multiprocess & gevent to actually meet real-world scaling needs, integrating them with C code and communicating with a C++ application via ZMQ (in-process, by sharing the object)... the sad fact is that once we started to tackle really hard problems in Python that aren't pre-solved via a nice library, all those early advantages fell away and we craved the blessed simplicity of C++ (note: sarcasm, C++ isn't simple, but it was far simpler than the Frankenstein's monster we built).
MetaCosm made the point quite eloquently, but let me juxtapose "too many options that require understanding to choose and apply" with Go, which has exactly one option, which requires no special understanding to choose and apply, and gives one exactly what one wants in basically any situation.
I am not really a Go proponent. I'm a Haskell user, personally, and Haskell, like Python, has three or four options that require understanding to choose and apply. The difference there being that in Haskell, each one of them actually gets you real parallelism, no fine print necessary. I bring it up to point out that the situation with Python is not a good example of what you might call "intrinsic complexity" (as you seem to be implying) or the Go solution would not be so much simpler, nor is it really an example of there being many better higher-level abstractions, or more of them would resemble Haskell's many high-level options. It is simply a bad situation that produces many poor kludges, and the mentality that everything is fine is (in my opinion) feeding a substantial defection rate to Go.
As one of those defectors... it isn't that there are so many options, it's that they all have piles of gotchas and corner cases that can take weeks (months!) to debug in complex environments.
Yes, you absolutely can use all your cores by combining multiprocess, gevent and custom C code. But debugging that stack is a level of hell I will never return to, ever.
And the numpy/scipy/ipython notebook/scikit-learn axis is driving a lot of Python adoption these days. Look in any blog about beginning data mining or machine learning; odds are it'll say to learn Python or R.
That's true, and not incompatible with what I'm saying. I'm saying that the majority of defection from Python I hear about is defection to Go. A lot of that seems to be taking the form of loud and proud blogging, so we hear about it here. The defection rate from Python is almost certainly much lower than its overall growth rate, or it would be dying (Perl). I'm just saying, it's not an accident that people are leaving Python for Go. They want cheap and easy parallelism and concurrency. Python is great at many things, maybe even most things, but the state of affairs with parallelism is very weak. Python can expect to continue to lose people to Go until this is addressed in a real, tangible way that doesn't sound like excuse-making. I don't really expect the situation to change, because Python doesn't need it to, but the excuse-making is annoying.