Hacker News
Python's Hardest Problem (2012) (jeffknupp.com)
97 points by hartleybrody on June 3, 2013 | 70 comments



This was FUD when it was posted and now it is age-old FUD which there's no point in re-posting.

Jython and IronPython do not have a GIL. Multiprocessing avoids the GIL. Blocking on I/O gives up the GIL. There are all kinds of techniques used instead of throwing threads naively at every problem. And, conveniently, none of this is mentioned in the article. Either the author was not aware of these basic facts, or suppressed them.

It is blatantly false that "no single issue has caused more frustration or curiosity for Python novices and experts alike than the Global Interpreter Lock." The author may consider it important, but this does not mean that author is speaking for everyone else.

Novices would have good reason to avoid shared-everything threading, which introduces piles of race conditions and difficulty controlling runaway threads, and should try simpler tools first and see whether they can get good results instead of prematurely optimizing with techniques they don't know how to use.

Experts will know that the GIL is often not a primary concern, and where it actually is a concern they'll be conversant with other tools like multiprocessing and task queues.

The people with the most to say about the GIL are mediocre programmers who want to show off that they are so good Python is limiting them, and people not very familiar with Python (possibly with background in languages which try to make threads the answer to everything) who have an axe to grind.

Instead of asking how to do what they want to do, they just assume that the problem is the GIL and there is no solution, then expect to be praised for their technical acumen. People with technical acumen just solve the problem in any of the available ways instead of bitching in public about how it's the tool's fault they can't solve the problem, when they have defined the problem incorrectly and insist on some arbitrary way of doing it.


I'm the original author (and didn't post this) but I fail to see how the article is FUD. In fact, the article makes the exact same point as your fourth paragraph ("Novices would..."). Controlling access to shared data often does lead to issues for many programmers, and just throwing threading at a problem is rarely a good idea.

I'm sorry you didn't find the article useful, but perhaps you're not in the article's target audience. I simply wanted to give Python novices some information and background about the GIL.

That said, the fact that other implementations do not have a GIL isn't relevant to the article; it specifically refers to the CPython implementation. And your observation that multiprocessing avoids the GIL is explicitly mentioned in the article. To say "blocking on I/O gives up the GIL" is true in a very narrow sense but not very interesting. Import any third-party package using C extensions and you now need to worry about how well the author manages the GIL.


There has been criticism about how negative comments on HN have become, and that is true. So I hope this doesn't add to it; I am trying to be honest here and say how I see things. First of all, thanks for putting the effort into writing that. There is good information in the article describing how the GIL works, how threads work, and things to watch out for.

But unfortunately I have to agree with the gp as well: a part of it is FUD. The FUD is not in what was said (most of what was said about the GIL is true); the FUD is in what was not said, which is that threads are there for a reason, and they do provide a good speedup in a large number of applications, namely those that are I/O bound.

I don't think that fact is too hard for novices to grasp, so it should be mentioned. The perception otherwise is that the creators simply went insane and decided to add threading to the language even though you should never use it because it doesn't work (so why didn't they just remove it, then?). Well, it does work for a large class of problems.

Think about nodejs. Like it or not it has become popular recently. It doesn't by default support or take advantage of multiple cores yet it is often found to be performant enough to handle a decent number of concurrent clients connecting. Granted it doesn't pretend to have threads in the first place but it is an example of a modern, useful ecosystem that does not take advantage of all the cores.

> To say "blocking on I/O gives up the GIL" is true in a very narrow sense

No, it is not. It is true in a very general and wide sense. Try it out! Spawn 50 threads and download 50 different pages; you'll notice a speedup relative to doing it all sequentially in a loop.
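A minimal sketch of that experiment. The network fetch is simulated with a short sleep so the timing is reproducible (a real version would call urllib's urlopen on 50 actual URLs; the example.com URLs here are placeholders), but the effect is the same because time.sleep, like a blocking socket read, releases the GIL:

```python
import threading
import time

def fetch(url):
    # Stand-in for a real download: a blocking wait that releases
    # the GIL, just as a socket read would.
    time.sleep(0.05)

urls = ["http://example.com/page%d" % i for i in range(50)]

# Sequential: 50 blocking waits, one after another (~2.5s total).
start = time.time()
for url in urls:
    fetch(url)
sequential = time.time() - start

# Threaded: all 50 blocking waits overlap (~0.05s total).
start = time.time()
threads = [threading.Thread(target=fetch, args=(url,)) for url in urls]
for t in threads:
    t.start()
for t in threads:
    t.join()
threaded = time.time() - start

print("sequential: %.2fs, threaded: %.2fs" % (sequential, threaded))
```

The same pattern with real urlopen calls shows a comparable speedup whenever the work is dominated by waiting on the network.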

> Import any third party package using C extensions and you now need to worry about how well the author manages the GIL.

That also misrepresents the facts a bit. The GIL actually is supposed to make it easier to write C extensions. If a C extension doesn't mess with the GIL, it is straightforward to write precisely because the GIL is there. If the GIL weren't there, calling into C extensions and returning, or having a callback into Python from C, would be a pretty complicated affair.

Now, you can also play with the GIL in C and release it to achieve parallel speedup, and I have done that, but only in a handful of cases over the years.


Ignore the negativity train on Hacker News as of late. I thought it was a very well written and interesting article.

I was once one of the very noobies that your article describes. In fact, I even made the same Stack Overflow post asking why threading made my program slow to a crawl. I did then (thanks to a friendly answer) learn about multiprocessing rather than threading, but I never actually got around to learning -- or even thinking about! -- what exactly the GIL is and why it makes threading terrible for CPU-bound tasks.

Point being, I enjoyed it.


"I never actually got around to learning -- or even thinking about!-- what exactly the GIL is"

David Beazley does a good job of explaining the GIL. There are multiple videos of his GIL talks on YouTube. This one, for instance: http://www.youtube.com/watch?v=Obt-vMVdM8s

He has a page about this at http://www.dabeaz.com/GIL/


As a newcomer to Python (and programming in general), your article was very informative.

Thank you for writing it.


This kind of makes it sound like you have a very limited variety of programming experience. Two things:

1) Race conditions exist with any sort of asynchronous or concurrent software development, even when there is no parallelism or threading. Have you ever written a moderately complex JavaScript app? There's no shortage of race conditions and no threads either.

2) There are in fact cases where forking isn't a reasonable solution. For one of my previous jobs I developed a multi-threaded server in Python (before I learned about the GIL). The problem with forking was that one particular variable needed to be shared between workers for processing. It happened to contain about 5GB of data, memory which I didn't want duplicated and which would have been a pain in the butt to keep synced between processes.

For most of these shared data structures I could do multiple processes and just use Redis as a shared data store. However, this piece of data was accessed far too frequently and that would have just been a major performance bottleneck.
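That is the case where threads still shine despite the GIL: every thread sees the same object, so nothing is copied, serialized, or synced. A toy sketch, with a dict standing in for the multi-gigabyte structure:

```python
import threading

# One shared structure. With threads there is exactly one copy in memory;
# fork-based workers would each end up with (or copy-on-write toward)
# their own.
shared = {i: i * i for i in range(100000)}

results = []
lock = threading.Lock()

def worker(keys):
    # Reads go straight at the shared dict: no serialization, no IPC.
    total = sum(shared[k] for k in keys)
    with lock:
        results.append(total)

# Partition the keys across four reader threads.
threads = [threading.Thread(target=worker, args=(range(i, 100000, 4),))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sum(results) == sum(shared.values()))
```

The GIL serializes the bytecode execution, but for a structure that is read far more often than it is computed on, avoiding the copy is exactly the point.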

To say the least, I was terribly frustrated when I learned about the GIL. I wish they would put a big warning up on their home page or at least in the docs relating to threading.


multiprocessing is just pretty broken in general. I've never been able to get it to work well on FreeBSD in particular. Essentially, Python calls all sorts of unsafe routines after the fork; here's an example bug report:

http://bugs.python.org/issue3770


You're conflating the engineer/scientist role here pretty severely.

The engineer routes around the problem, the scientist tries to understand the problem better. An engineer with a deadline will not be held up in any significant way by the GIL. The engineer will use one of the many ways you've already listed to solve the problem and move on.

The scientist, not bound as strictly by deadlines or by "just being done", will explore the issue more leisurely, focusing on the whys and what the problem means. The scientist will explore the implications.

I think only you are attempting to "blame" Python for this problem - the author's tone in no way suggests that the problem is in Python itself or even the CPython implementation. There's a distinct "here's the lay of the land on this issue" feel to this article, rather than the "Python is at fault for these reasons..." that you seem to be reading into it.


I'm an ex-scientist (>10 years) who is now a software engineer (>5 years). Everything below is anecdotal, but based on a pretty large sample of engineers and scientists, combined with a highly skilled anti-bias corrector.

While I agree superficially in many ways with your characterization of engineers on deadlines versus scientists who will explore certain issues more closely, I think there are a number of things wrong with your comment:

1) As a scientist I frequently had to deal with the GIL. A major part of my time was taking C libraries, making SWIG wrappers for them, and using the Python SWIG libraries to do productive science. Once SWIG-wrapped, a good C library is very quick to do prototyping but it requires close attention and a good understanding of CPython API threading details to ensure your SWIG libraries are safe for other scientists.

2) Nor are "engineers on deadlines" going to accept the GIL in CPython. I'm an engineer on deadlines, and after 18 years of programming in Python I've switched to Go because, frankly, I can't see Python having much of a future anymore.

3) Ultimately, engineers and scientists are the same thing. 24% of the time of every scientist is spent coming up with some engineering solution in a hurry so you can get your experiment to work, (the other 75% of the time is spent writing grant proposals begging for money for the engineering and writing papers so that your grant proposals get approved, and 1% actually feeling true scientific inspiration). The whole "leisurely" word you use doesn't apply to how scientists anywhere in the real world work (except perhaps somebody with awesome funding nearing retirement, or perhaps some wealthy citizen-scientist with a home lab).


Scientist in philosophy, not scientist in profession.

I would think of it like going to the moon. Curiosity took us there, but a lack of a financial motive to continue going there made future missions more difficult, to the point where we stopped going.

People have effectively gotten around Python's GIL for most engineering purposes. Those who remain aren't doing so because they have a problem to solve; they remain because the GIL is the problem to solve. An end, rather than a means. This makes it... well, you're right, not leisurely, but certainly a more pensive activity than trying to solve another problem and having to "overcome" the GIL. I doubt anyone these days would get paid to fully remove the GIL from CPython.

And yes I get it, you're old as fuck and know more than everyone younger than you. Neat.


Multiprocessing is a terrible solution to the GIL. You gain parallelism, but you are then required to serialize/deserialize and duplicate every shared object.

In Scala/Java, I might build a single immutable object (taking up e.g. 1kb) and transmit it to 10 actors. They use it as needed and let the GC deal with it when finished. In Python, I need to serialize it, transmit it 10 times and use 10kb of memory to store the copies.

The GIL is a flaw in the language. We should accept that. There are workarounds and hacks, but the GIL is still a flaw.

(Incidentally, my background is in python. My usage of Scala/Java is far more recent.)


Technically, the GIL is a "flaw" in the implementation. The language does not specify a GIL, it's just an implementation detail in cpython.

I don't necessarily agree that it's a flaw, but that's another discussion entirely.


And in CPython it's less that the GIL is the flaw and more the refcounted GC. Back in '96 there was a patch to remove the GIL; run-of-the-mill single-threaded Python code ran 2-6x slower (largely depending on the threading implementation used) due to all the locking overhead around the constant refcount updates. When you have a language that is already perceived as slow, making the vast majority of the typical scripts of the time that much slower so that multithreaded Python could be faster was going to be a very hard sell.


In this day and age, the inability to compute two things at once is a pretty major flaw, if you ask me.


Using Python for high performance computation is also a flaw, if you ask me.


> Multiprocessing is a terrible solution to the GIL. You gain parallelism, but you are then required to serialize/deserialize and duplicate every shared object.

"You" are not required to do the serialization. Multiprocessing does it automatically behind the scenes. The only time it becomes relevant is when something can't be serialised.

It is correct though that serialization consumes cpu and time while it is happening - something that doesn't happen when all actors are local to the process. However the moment you do serialisation you can also do it across machines, or nodes within a machine which gives far greater scope for parallelism assuming the ratio of processing work to size of serialised data is large.
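The "can't be serialised" failure mode is easy to hit: anything pickle refuses, such as a lambda or other locally defined callable, can't cross the process boundary. A quick illustration:

```python
import pickle

def module_level(x):
    return x + 1

# Plain module-level functions pickle by reference, so multiprocessing
# can ship them to workers without trouble.
pickle.loads(pickle.dumps(module_level))

# Lambdas do not, which is why handing one to multiprocessing.Pool.map
# blows up with a pickling error.
try:
    pickle.dumps(lambda x: x + 1)
except (pickle.PicklingError, AttributeError) as exc:
    print("cannot pickle:", exc)
```

The usual workaround is to move the callable (and anything it closes over) to module scope before handing it to multiprocessing.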


> However the moment you do serialisation you can also do it across machines, or nodes within a machine which gives far greater scope for parallelism assuming the ratio of processing work to size of serialised data is large.

You can also do that with Akka, for example.

It's true that you can't avoid serialization when you need to work across multiple boxes. That doesn't mean serialization and IPC should be forced upon you the minute you want to parallelize. There are a LOT of jobs that can be handled by 2-8 cores, provided your language/libraries give support for it.


Actual novices are more likely to get tripped up by such things as understanding mutable arguments to functions before they even try working with threads.
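One classic instance of that, sketched out: the mutable default argument, which surprises people long before threads enter the picture.

```python
def append_item(item, bucket=[]):
    # The default list is created once, at function definition time,
    # and shared across every call that doesn't pass its own bucket.
    bucket.append(item)
    return bucket

print(append_item(1))  # [1]
print(append_item(2))  # [1, 2] -- the "fresh" default remembers

def append_item_fixed(item, bucket=None):
    # The usual fix: use None as a sentinel and build a new list per call.
    if bucket is None:
        bucket = []
    bucket.append(item)
    return bucket

print(append_item_fixed(1))  # [1]
print(append_item_fixed(2))  # [2]
```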


I've written native C modules interacting with EVE Online, probably one of the biggest Python systems in deployment, and they run on Stackless Python with a bazillion threads for every little thing, while still needing to hit that 50fps window.

The GIL was not a problem.


The point of Stackless python is that it avoids the GIL by not using threading. It's more about asynchronous behavior.


...because they run on Stackless Python.


This is a common misconception about Stackless Python.

Stackless still has the GIL - it facilitates concurrent, not parallel programming. Stackless Python programs run on a single core, with cooperative task switching between microthreads.

See http://stackoverflow.com/questions/377254/stackless-python-a...
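The distinction is easy to see in plain Python: Stackless-style microthreads can be sketched with generators and a round-robin scheduler. The tasks interleave (concurrency), but only one ever runs at a time (no parallelism). Note this is a loose imitation for illustration, not Stackless's actual tasklet API:

```python
from collections import deque

log = []

def tasklet(name, steps):
    # Each yield is a cooperative switch point, in the spirit of
    # stackless.schedule(): the task voluntarily gives up control.
    for i in range(steps):
        log.append("%s%d" % (name, i))
        yield

def run(tasklets):
    # Round-robin scheduler: run one task until it yields, then the next.
    ready = deque(tasklets)
    while ready:
        task = ready.popleft()
        try:
            next(task)
            ready.append(task)
        except StopIteration:
            pass

run([tasklet("a", 2), tasklet("b", 2)])
print(log)  # ['a0', 'b0', 'a1', 'b1'] -- interleaved, never simultaneous
```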


It's not concurrent either. It's about asynchrony: tasks that don't finish before another one starts and runs.

Why would you want this? From the website: * Improved program structure. * More readable code. * Increased programmer productivity.


This is "concurrent" in the increasingly common Erlang sense of the word:

"The real world is 'concurrent'. It is made up of a bunch of things (or actors) that interact with each other in a loosely coupled way with limited knowledge of each other." -- Grant Olson, "Why Stackless", 2006

"Concurrent computing is a form of computing in which programs are designed as collections of interacting computational processes that may be executed in parallel." -- http://en.wikipedia.org/wiki/Concurrent_computing


> "And why has no one attempted something like this before?"

"However, if you've been around the Python community long enough, you might also know that the GIL was already removed once before--specifically, by Greg Stein who created a patch against Python 1.4 in 1996." (Also mentioned in the OP)

More info can be seen at http://dabeaz.blogspot.nl/2011/08/inside-look-at-gil-removal...


I don't know a lot about the history of Python and the GIL, but did they ever consider optionally removing the GIL or leaving it in depending upon some switch? Go has GOMAXPROCS which defaults to 1 to avoid the overhead of locking when only a single thread will be used for similar purposes.

I guess back in 1996 a system like that may have been considered overkill because multithreading was still pretty exotic.


Having a look at the source, you get this for free in Python. If you don't use any threads, the overhead is just checking one (non-atomic) variable in the main eval loop, which is at least as cheap as GOMAXPROCS=1 in Go. There is a small overhead from dropping/taking the GIL around I/O functions, but it isn't worth optimizing away the few nanoseconds that takes for the much slower I/O functions.

But if even this overhead is too much, you can compile Python without threading support.


Ok, the above is not really relevant, I misread the parent comment. Why the gil is not completely removed or made optional can be read at http://wiki.python.org/moin/GlobalInterpreterLock


The overhead of the GIL is probably insignificant for a single threaded python program, in part because the GIL is very coarse grained. Go uses (afaik) finer grained synchronization that therefore has more overhead. Also since python is slower anyway the relative cost of locking is much smaller than in the faster Go.


I believe it was considered and rejected because CPython wants to be simple, understandable code (there's a general policy of rejecting performance-improvement patches if they complicate the code too much). Other implementations exist without a GIL (e.g. IronPython).


>using multiple threads to increase performance is at best a difficult task.

This isn't completely true. If you are doing anything non-CPU-bound, using threads is trivial, as the GIL is released during I/O, so you can perform I/O in parallel.


Performing I/O already releases the GIL.

(Not to mention that Python has many event-driven I/O options available which are generally more efficient than threading)


> Due to the design of the Python interpreter, using multiple threads to increase performance is at best a difficult task. At worst, it will decrease (sometimes significantly) the speed of your program.

Nope. The writer sounds misinformed and is spreading FUD.

I have successfully used Python's threads to perform concurrent database fetches, http page getters, file uploads in parallel. Yes, there was almost linear speedup.

If you listen to this story it sounds like Guido and the other talented and smart Python contributors added threads to Python just to fuck with people's heads -- "threads don't work, but let's add them anyway! just to mess with them!" Nope, they added them because there are many cases where they work.

The answer is that if you handle concurrent I/O, Python's threads will give you a good speedup. Threads are real OS threads and come with nasty side effects if you use shared data structures, but make no mistake, you will get the speedup.

Your mileage may vary, and everyone is probably biased and has a different perspective, but where I am coming from, in the last 10+ years I have written mostly I/O-bound concurrent code. There were very few cases where I hoped to use extra CPU concurrency.

Now, I did have to do that a couple of times, and if you do have that issue, you'd most likely want to descend to C anyway, which is exactly what I did. Once in C you can release the lock so that Python and your C extension/driver can process in parallel.

Now wouldn't it be nice if Python had CPU level concurrency built in. Yes it would be great. But I don't think that is the #1 issue currently. We still don't have 16 cores on most machines.

   #define RANT
What worries me is library fragmentation and Python 3 adoption (or lack thereof), now coupled with the introduction of the new async I/O Future/Promise/Deferred framework. That will harm Python faster and worse than the GIL ever did. Adopting and standardizing a Twisted-like approach to async I/O will put the nail in Python's coffin, and Guido is certainly marching in that direction. This will fragment the existing (already rather fragmented) libraries. Now we'll have Twisted, Tornado, gevent, eventlet, asyncore, threads, and the new Promise/Future thingie (anyone know of more?) as ways of doing concurrent I/O, and every time you pick a library (unless you use threads, gevent, or eventlet + monkey patching) you will end up choosing a whole new _ecosystem_ of frameworks.

I remember scouring the web for years for a Twisted version of an already-existing library, because I had made the mistake of picking Twisted as the I/O concurrency framework. A regular library module is available, oh, but I need it to return a Deferred, of course, in order for me to use it.

   #undef RANT


The idea of including an async API (not an implementation) in the standard library is that it will enable compatibility by having twisted/tornado/gevent/etc. implement this interface, and async libraries to conform to this interface. It's trying to solve the same problem you identify. (Personally I'm not terribly hopeful that it will work, but what would you suggest?)


>Nope. The writer sounds misinformed and is spreading FUD.

>I have successfully used Python's threads to perform concurrent database fetches, http page getters, file uploads in parallel. Yes, there was almost linear speedup.

It's not quite that simple - especially for the novice. Not every I/O-oriented library is thread-safe (urllib2), and not every C extension remembers to give up the GIL, so no - threads do not always automatically get you increased I/O throughput.

Worse - other languages do set you up to expect to be able to interleave I/O- and CPU-bound code with threads, which frequently doesn't work well in Python. And finally - yes, I have written CPU-bound code, and while I think the techniques (processes, numerical libraries, Cython, alternative runtimes, etc.) are sufficient to meet my performance needs, it is a little annoying to feel that I'm giving up 3/4 of the horsepower in my 4-core box by default...


"I have successfully used Python's threads to perform concurrent database fetches, http page getters, file uploads in parallel. Yes, there was almost linear speedup"

Yes, for that, the Python threading model works fine.

The problem is if you're doing a lot of processing on the threads and/or passing data between them.

For CPU concurrency, go with multiprocessing; it works like a charm (maybe using ZeroMQ between the processes)


Having ioloop in the stdlib could make the cooperation between different async libraries easier.


This is a great overview - nicely done! It would have been nice to also mention the other implementations of Python, like Jython, that _don't_ have the GIL, and how they managed to do it.

As for why it hasn't been solved yet... the API for threads and processes is pretty much identical. Since you're just as well off using a process in the majority of cases, that's what we go with.


Author here.

Actually, the reason it hasn't been "solved" yet has much more to do with the CPython implementation than with the fact that we can just use multiprocessing. There is a ton of globally shared data in the CPython implementation. Retrofitting a locking scheme granular enough to obviate the need for the GIL while at the same time not negatively impacting single-threaded performance is decidedly non-trivial.

The PyPy guys are making decent progress by attacking the problem from another angle: using software transactional memory to automatically resolve data conflicts that arise from multiple threads mutating data simultaneously.


This article is written as if shared memory is the only parallel programming paradigm. While it is true that threads are a very useful construct for writing high-performance parallel software, distributed-memory programming is also a valid approach.

If your problem can map to distributed memory techniques, then you have multiple advantages over shared memory programming. Most importantly you can parallelize over multiple machines. Other advantages include decoupling of each parallel task from each other (fewer race conditions and other hard to debug problems).

There are several ways to achieve distributed memory parallelism in Python: multiprocessing, zeromq, raw tcp/ip sockets and mpi4py. Which approach makes sense to use will depend on your problem.


Neat discussion on this. It's interesting to look at languages where threading was front and center (Go, Java, etc.) vs. where it was not (Python, Perl, etc.). The arguments are something you should at least get an introduction to with a CS degree, and one of the things people without that explicit teaching develop by feel.

Concurrency is the 'tricky bit' of the 'algorithms' pillar [1].

[1] The four arbitrary pillars of computing (algorithms, languages, systems, and data structures)


Aren't algorithms and data structure one and the same? (If you squint just right..)


I'm glad I came across this article. I'm learning Python and was given that advice to use multiprocessing rather than threading, but hadn't researched why. Very informative, thanks for sharing.


Well hold on now; there are a lot of times when using threading is easier and faster than using multiprocessing. It depends on what you're doing.

Threading creates new OS-level threads, but whenever your code is being run by the bytecode interpreter, Python holds the global interpreter lock. This is released during I/O operations and a lot of the built-in functions, and you can release it in any C or Cython extension code you write. If you're running into Python speed bottlenecks, you can usually get significant speedups with very little effort by moving the bottlenecky code to Cython and maybe adding a few type declarations.

Multiprocessing spawns a pool of worker processes and doles out tasks to them by serializing the data and using local sockets for IPC. This naturally has a lot of overhead, and there's some subtlety with the data serialization. So, be aware of that. The nice part, though, is that you don't have the GIL, which can sometimes speed things up.


What about Google's Unladen Swallow project? I'm aware their attempt to remove the GIL was aborted. Did any enhancements from that project find their way into mainline CPython 3?


Is there really that much Python code out there that is not I/O bound (which makes the GIL moot)? Scientific computing is the only area that comes to mind where I can imagine problems.


I've written a fair amount of CPU-bound Python code. It quickly turned into CPU-bound Cython code. The most CPU-heavy parts release the GIL. It hasn't really been a problem.


I _love_ to use the Python parallel map function.


Disagree; Python's hardest problem is packaging, followed closely by bikeshedding over networking libraries and an ever-growing dichotomy between the Python 3 and Python 2 universes of code.

You can skip all conversation about the GIL and threads neatly by simply preferring a different concurrency model. There are plenty of ways to do this (see above re: bikesheds and their colors), but being permanently tied to CPython and the threading module is increasingly uncommon for professional Python, and it isn't as unavoidable as things like networking or even which language you're going to use.

Edit: I see that the author's in this thread. Nicely written article, but a tad hyperbolic.


There isn't a growing dichotomy; there is a migration from 2 to 3. I feel we have passed critical library mass in Python 3 space, and now it is just a matter of waiting for the world to catch up and use 3 already.


The transition was always planned to take several years and I can't think of any major project which decided not to port to python 3. http://py3ksupport.appspot.com/ and https://python3wos.appspot.com/ both list package compatibility.

What issues do you see with python2 to 3 transition?


To be fair, the Python 3 transition clock shouldn't have started ticking until the more recent versions of Py3. This won't be true for all Python communities (e.g. mathematics/research), but until the saner implementation of bytes/strings/unicode, the transition for most web frameworks or protocols was a mess.

However, from the latter list, there are some really damning packages. For one, we're going to have to transition off Twisted, which is much more than just an event loop. Py3 might not have enough advantages to compel us to make that transition.


http://twistedmatrix.com/trac/milestone/Python-3.x

So, just wait to migrate until twisted is ready?


Honestly, I didn't check all of the dozen-plus packages pending updates for us, and then there's someone else cutting their teeth on it for a while, but Twisted was clearly marked as not being updated for Py3. But yes, clearly we're waiting; apparently Guido also wants to take a stab at the async batteries, which could help or hurt if the Twisted contributors think they will be affected.

To the point of the comment you first responded to, there are a lot of issues not being "solved" with respect to networking libs, their upgrading to Py3, and Py3 looking to step into that arena. In other words, there is no horizon for the networking libs updating, and they may feel even less motivated if the Py3 crowd is saying, "don't worry about it, because we are going to standardize our way." Does the Py3 transition clock only start then?


I'm not really sure what your issue is. You use twisted. Twisted is currently being updated to support python3. Gevent has a branch which passes tests for python 3.3: https://travis-ci.org/fantix/gevent/builds/3588293 Is there some major library you are using that has no plans for python 3?

I'm not actually seeing an issue with python 2 -> 3 transition which is what my original comment was asking about. I see a lot of negativity surrounding python 3 and talks of fracture, but no actual evidence of it.


> I can't think of any major project which decided not to port to python 3

Most major projects have not actually switched to Python 3; they just support it using 2to3.


Actually, the trend is away from using 2to3 - it's now seen as preferable (and quite possible) to have one codebase which works on both 2 and 3. That's the route Django went for, and what Flask is doing. Numpy also switched from using 2to3 to supporting both from a common codebase.
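The single-codebase style typically amounts to a thin compatibility layer at the top of each module. The helpers below are the usual idioms of the era, not any particular project's API:

```python
from __future__ import print_function  # no-op on 3, fixes print on 2

import sys

PY2 = sys.version_info[0] == 2

try:
    # Python 3 moved this...
    from urllib.request import urlopen
except ImportError:
    # ...so fall back to the Python 2 location.
    from urllib2 import urlopen

if PY2:
    text_type = unicode  # noqa: F821 -- only evaluated on Python 2
else:
    text_type = str

print(text_type("works on both"))
```

Helper packages like six bundle exactly these shims so each project doesn't have to maintain its own.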


So restricting yourself to a common subset of Python 2/3 and adding compatibility helpers in your code is the preferred approach nowadays?

I'm still not convinced the whole Python 2 to 3 conversion is worth it (aside from the fact that not porting means your project is seen as not actively maintained). It has led to added complexity in code so that it works on both versions, increased difficulty in packaging/distribution, having to test/debug on both versions, the obsoleting of vast amounts of code that will never be ported, confusion among new users, untold man-hours being spent, ...

I don't mean to sound so down on Python 3, but it has caused me nothing but additional work and frustration. The supposed benefits of Python 3 are still years away.


The common subset is most of the language, if you target 2.7 and 3.3. But yes, it has clearly caused extra work. But all the tools I use support Python 3, and I'm looking forward to when new users routinely go for 3.


Absolutely agree... in other words, the community. To that I would add the ever-growing dichotomy and fragmentation around improving performance among the people solving this (CPython vs. PyPy vs. Continuum). I'd take a Python that was 5-10x faster (and could use numpy/scipy) over one that didn't have a GIL.


I don't think it's quite accurate to frame the current state of innovation in High-Performance Python as being one in which the principal actors are fundamentally opposed to each other. Nothing that we do at Continuum detracts from the CPython ecosystem; in fact, our tools are designed to specifically provide maximal utility, as quickly as possible, to certain (significant) groups of Python users. We're not tackling anything nearly as ambitious as PyPy's meta-tracing JIT or generic STM. Much of our work builds on tools and lessons learned over the last decade, including work on Cython, Mython, LLVMpy, etc. etc.

As for the GIL, the OP is right that it's a CPython implementation problem, not a structural one that has to exist in Python. There is some exciting work that has been done in the direction of fixing this at the implementation level, and we are excited about seeing how far we can take it. Stay tuned...


Right. If you're spending a lot of time in the CPython interpreter, you probably don't care too much about performance anyway. The CPython interpreter is a hell of a lot slower than using Numba or Cython (which can operate without the GIL) or PyPy (which is a completely separate codebase from CPython, although it still has a GIL in its typical incarnation), or many alternative languages. As soon as you're out of the interpreter, the GIL ceases to be an issue (e.g. NumPy releases the GIL while performing array operations).


I've enjoyed the Python networking libraries I've used. A lot. Could you be more specific?

Also, I'm not quite clear on your issue with packaging. Can you be more specific? Compared to C, I find importing existing code to be nearly heavenly.

Regarding backwards compatibility, maybe it's an issue, maybe it's not. On the one hand it's frustrating and some people will make it an issue. On the other hand, I can't help but think of cases like Apple, where they were the first to abandon modems, serial ports, diskette drives, and CD drives—but probably made the right decision.


He's not talking about Python imports; he's talking about pulling Python packages through pip or one of the dozen utilities that do so from PyPI. Though I do feel pip has been standardized on enough to solve that problem.


There are two options for Python networking: Twisted, and things that are not Twisted. Guess which one most people pick.

What do you use to package? distutils, setuptools, distribute, distutils2, pkgutils...? How to choose?


Start with distutils, which is in the stdlib; if it doesn't provide enough features, use setuptools (distribute and setuptools have merged), which extends distutils functionality.


Packaging is fine unless you're ignoring the current state of tooling. Not sure what you mean by networking unless you're talking about gevent/twisted/tornado/diesel which are really proxies for the problem of concurrency (GIL) rather than expressly about networking.


Packaging is starting to come together, but still has a ways to go. The Setuptools-Distribute merge was just announced a couple months ago, we still have 6 ways of doing things that are mostly incompatible. Packaging is mostly solved if you're on Linux, sort of solved if you're on OS X, and a horrible mess if you're on Windows. Perhaps if you can live in the world of pure Python it's good, but once you have C-based dependencies, it quickly becomes a mess. Compared to something like CRAN and the R community, it's clear Python could be improved in this area.


> bikeshedding

Not just networking libraries; how many CLI modules are there now?



