Hacker News

codedivine has a point, and the posting itself admits it: "we expect the initial performance to be abysmally bad (maybe 10x slower); however, with successive improvements to the locking mechanism, to the global program transformation inserting the locks, to the garbage collector (GC), and to the Just-in-Time (JIT) compiler, we believe that it should be possible to get a roughly reasonable performance (up to maybe 2x slower)."

It will likely be difficult to beat expertly written code using explicit locks. But most people aren't experts in concurrency and will either get it wrong or write slow implementations. And if transactional memory catches on, we may even see hardware assistance for it in future CPUs.
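To make the tradeoff concrete, here is a minimal sketch (plain CPython threading, no PyPy-specific APIs) of the explicit-lock style that STM aims to replace. The counter and lock names are illustrative; the point is that correctness depends on every caller remembering to take the lock, which is exactly the discipline non-experts get wrong.

```python
import threading

# Shared state protected by an explicit lock. With STM, the body of the
# loop could instead run inside an atomic transaction, and the runtime
# would detect and retry conflicting updates automatically.
counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:          # forget this line, and updates can be lost
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000 with the lock; possibly less without it
```

The STM version would look like the same code with the lock replaced by an atomic block, so the simple, sequential-looking style survives while the runtime handles the synchronization.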

(S)TM is definitely worth exploring further, and even a 2x-slower implementation (as envisioned by the PyPy team) could cover most concurrency needs, which would make it a success in most people's eyes.




Do they mean 2x slower than CPython or 2x slower than PyPy? If they wind up with something 2x slower than current PyPy (which is much faster than CPython in many cases), that's still a version of Python much faster than CPython that escapes the limitations of the GIL and can thread across cores, and that would be a huge win.





