
Very bizarre that there is no discussion of Numba here. It has been around and widely used for many years, achieves faster speedups than this, and emits LLVM IR that is likely a much better starting point for a "universal" scientific computing IR than yet another system that further complicates things with fairly needless involvement of Rust.

https://numba.pydata.org/
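For context, here is a minimal sketch of how Numba is typically used (with a plain-Python fallback so the snippet also runs where Numba isn't installed; the function name is made up for illustration):

```python
import numpy as np

try:
    from numba import njit
except ImportError:
    # Fallback: run as plain Python if Numba is not available.
    def njit(func):
        return func

@njit
def dot(a, b):
    # An explicit loop: slow in plain Python, but Numba compiles it
    # to machine code via LLVM on the first call.
    total = 0.0
    for i in range(a.shape[0]):
        total += a[i] * b[i]
    return total

a = np.arange(4, dtype=np.float64)  # [0., 1., 2., 3.]
b = np.ones(4)
result = dot(a, b)                  # 0 + 1 + 2 + 3 == 6.0
```

The decorated function compiles lazily on first call, so the speedup only shows up on repeated invocations or large inputs.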




I'm one of the developers of Weld -- Numba is indeed very cool and is a great way to compile numerical Python code. Weld performs some additional optimizations specific to data science that Numba doesn't really target right now (e.g., fusing parallel loops across independently written functions, parallelizing hash table operations, etc.). We're also working on adding the ability to call Python functions from within Weld, which will allow a data science program expressed in Weld to call out to other optimized functions (e.g., ones compiled by Numba). We additionally have a system called split annotations under development that can schedule chains of such optimized functions in a more efficient way without an IR, by keeping datasets processed by successive function calls in the CPU caches (check it out here: https://github.com/weld-project/split-annotations).
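To illustrate the kind of cross-function fusion described above, here is a toy pure-Python sketch (not Weld's actual IR or API): two independently written passes over the data get fused into a single traversal, eliminating the intermediate allocation.

```python
# Two independently written "library" functions, each a full pass:
def square_all(xs):
    return [x * x for x in xs]                 # pass 1, materializes a list

def sum_evens(xs):
    return sum(v for v in xs if v % 2 == 0)    # pass 2

def pipeline_unfused(xs):
    return sum_evens(square_all(xs))

# What a fusing optimizer could emit instead: one pass, no intermediate.
def pipeline_fused(xs):
    total = 0
    for x in xs:
        v = x * x
        if v % 2 == 0:
            total += v
    return total
```

Both compute the same result, but the fused version traverses the data once and keeps it in cache, which is the effect Weld aims for across separately authored functions.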

Overall, we think that accelerating the kinds of data science apps Weld and Numba target will not only involve tricks such as compilation that make user-defined code faster, but also systems that can schedule and call code that people have already hand-optimized in a more efficient and transparent way (e.g., by pipelining data).


Although, to be fair, there is no reason why Numba couldn't gain those capabilities; it just hasn't been a focus of the project. It should be possible to build a lightweight modular staging system in Python/Numba similar to Scala's LMS (https://scala-lms.github.io/) or Lua's Terra (http://terralang.org/).
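As a toy illustration of what such a staging layer might look like in plain Python (a sketch only; `stage_power` is a made-up name, not a Numba or LMS API): stage 1 generates source code specialized to a known value, and stage 2 compiles it into a real function.

```python
# Toy multi-stage programming sketch: specialize x**n into
# straight-line multiplications, then compile the generated code.
def stage_power(n):
    # Stage 1: build source text specialized to this particular n.
    expr = " * ".join(["x"] * n) if n > 0 else "1"
    src = f"def specialized(x):\n    return {expr}\n"
    namespace = {}
    # Stage 2: compile the generated source into a callable.
    exec(src, namespace)
    return namespace["specialized"]

pow5 = stage_power(5)   # behaves like lambda x: x * x * x * x * x
```

Real staging systems like LMS build typed IR rather than strings, but the two-stage generate-then-compile structure is the same idea.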


Did you read the article? If you know how Numba works, you know it can't just pick up arbitrary functions from sklearn and scipy and do interprocedural optimization (IPO). For Numba to do that, all the functions involved would need to be written in Numba's @jit style, whereas Weld works directly on the pre-existing functions.

Rust is just an IPO driver of sorts here.

I'm not criticizing Numba, btw; I use it regularly. But your comment seems a little off here, considering that Weld has a different goal in mind.


I don’t agree that the purpose of the article is misaligned with my criticism. This is based on reading the article.


Hi, I am the interviewer. I think I saw Numba once but forgot about it. I will check it out and probably ask to interview them too. We are also preparing interviews about RAPIDS and other similar projects.


Numba is the option used in Lectures in Quantitative Economics with Python, posted and highly upvoted here yesterday: https://news.ycombinator.com/item?id=21022620.


Numba is amazing. +1 for numba


But does it really speed up numerical libraries like NumPy and pandas? I thought it only worked on pure Python code.




