Hacker News

Python with JIT is faster than Python without JIT.

Having a Python with JIT, in many cases it will be fast enough for most cases.

Data science running CUDA workloads isn't the only use case for Python.




I think Python without a JIT in many cases is already fast enough for most cases.

I don't do data science.


Sure, for UNIX scripting; for everything else it is painfully slow.

I have known Python since version 1.6, and it is my scripting language in UNIX-like environments. During my time at CERN, I was one of the build engineers for the CMT build infrastructure on the ATLAS team.

It has never been the language I would reach for when not doing OS scripting, and usually when a GNU/Linux GUI application happens to be slow as molasses, it has been written in Python.


My teams deploy Python web APIs and yes, it is slow compared to other languages and runtimes.

But on the whole, machines are cheaper than other engineering approaches to scaling.

For us, and many others, fast enough is fast enough.


There's a lot of Django going on in the world.

shrug. If we're talking personal experience, I've been using Python since 1.4. It's been my primary development language since the late 1990s, with, of course, speed-critical portions in C or C++ when needed - and I know a lot of people who also primarily develop in Python.

And there's a bunch of Python development at CERN for tasks other than OS scripting. ("The ease of use and a very low learning curve makes Python a perfect programming language for many physicists and other people without the computer science background. CERN does not only produce large amounts of data. The interesting bits of data have to be stored, analyzed, shared and published. Work of many scientists across various research facilities around the world has to be synchronized. This is the area where Python flourishes" - https://cds.cern.ch/record/2274794)

I simply don't see how a Python JIT is going to make that much of a difference. We already have PyPy for those needing pure Python performance, and Numba for certain types of numeric needs.

PyPy's experience shows we'll not be expecting a 5x boost any time soon from this new JIT framework, while C/C++/Fortran/Rust are significantly faster.


> There's a lot of Django going on in the world.

Unfortunately.

> And there's a bunch of Python development at CERN for tasks other than OS scripting

Of course there is, CMT was a build tool, not OS scripting.

No need to give me CERN links to show me Python bindings to ROOT, or Jupyter notebooks.

> PyPy's experience shows we'll not be expecting a 5x boost any time soon from this new JIT framework, while C/C++/Fortran/Rust are significantly faster.

I really don't get the attitude that if it doesn't 100% fix all the world's problems, then it isn't worth it.


The link wasn't for you - the link was for other HN users who might look at your mention of your use at CERN and mistakenly assume it was a more widespread viewpoint there.

> I really don't get the attitude that if it doesn't 100% fix all the world's problems, then it isn't worth it.

Then it's a good thing I'm not making that argument, but rather that "Having a Python with JIT, in many cases it will be fast enough for most cases." has very little information content, because Python without a JIT already meets the consequent.


A Python web service my team maintains, running at a higher request rate and with lower CPU and RAM requirements than most of the Java services I see around us, would like a word with you.


I guess those Java developers really aren't.


How many requests per second are we talking, ballpark, and what's the workload?


~5k requests/second for the Python service, we tend to go for small instances for redundancy so that's across a few dozen nodes. The workload comparison is unfair to the Java service, if I'm honest :). But we're running Python on single vCPU containers with 2G RAM, and the Java service instances are a lot larger than that.

Flask, gunicorn, low single digit millisecond latency. Definitely optimised for latency over throughput, but not so much that we've replatformed it onto something that's actually designed for low latency :P. Callers all cache heavily with a fairly high hit ratio for interactive callers and a relatively low hit ratio for batch callers.
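For readers curious what such a setup looks like, here's a minimal sketch of a Flask app of the kind described, served by gunicorn. The endpoint name and payload are hypothetical, not from the parent comment; the point is only that ~5k req/s spread across a few dozen single-vCPU nodes works out to very roughly 100-200 req/s per node, which a simple handler like this can sustain.

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/price/<item_id>")  # hypothetical endpoint
def price(item_id):
    # A real handler would consult a backing store; as described above,
    # callers cache heavily, so many requests never reach this service.
    return jsonify({"item": item_id, "price": 42})

# Run under gunicorn on a small container, e.g.:
#   gunicorn -w 2 -b 0.0.0.0:8000 app:app
```

The small-instances-for-redundancy choice means each node only needs to handle a modest slice of the total rate, which is why single-vCPU, 2G containers are enough.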


I really wouldn't mind Python being faster than it is, and I really didn't mind at all getting a practically free ~30% performance increase just by updating to 3.11. There are tons of applications which just passively benefit from these optimizations. Sure, you might argue "but you shouldn't have written that parser or that UI handling a couple thousand items in Python", but lots of people do and did just that.
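One way to see that kind of passive win for yourself: run the same micro-benchmark under two interpreter versions. The toy "parser" workload below is hypothetical, and absolute numbers vary by machine, but running the identical script under 3.10 and then 3.11 shows the interpreter-level speedup with zero code changes.

```python
import timeit

LINE = "12,34,56,78,90"

def parse(line=LINE):
    # Toy parser: split a CSV-ish line and convert each field to int.
    return [int(f) for f in line.split(",")]

# Same code under both interpreters; only the Python version changes.
elapsed = timeit.timeit(parse, number=100_000)
print(f"100k parses: {elapsed:.3f}s")
```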


I wouldn't mind either.

Do you agree with me that Python is already fast enough for most cases, even without a JIT?

If not, how would a 30% boost improve things enough to change the balance?





