Hacker News

I think you're overestimating how many Python developers are willing to write and compile C. Debugging gets more difficult across two languages, and correctly shipping binaries with a Python package can be a hassle too (what about -march=native performance?).



That is not too hard if one uses the Pybind11 C++ library, which allows creating native-code Python modules (shared libraries) in a simple and declarative manner. Another library that can be used with Pybind11 is Eigen, a linear-algebra library that can perform loop fusion in numerical computation; for instance, it can add multiple vectors of the same size in a single loop.


Numba compiles Python via an LLVM backend; why is this even a discussion?


Because numba (and jax), although wonderful pieces of engineering, work on a fairly limited subset of Python, do not permit the same level of introspection as plain Python, and do not play nicely with other Python libraries. Julia does not make that sacrifice.


For Jax I believe this is false.

Jax is composable. In fact it’s a core design goal. Jax arrays implement the Numpy API. I routinely drop Jax arrays into other python libraries designed for Numpy. It works quite well. It’s not effortless 100% of the time but no library interop is (including Julia multiple dispatch).

I can introspect Jax. Until you wrap your function with jit(foo) it’s as introspectable as any other Python code, at least if I’m understanding what you mean by introspection.

Jax has implemented most of the Numpy functions, certainly most of the ones anyone needs on a regular basis. I rarely find anything missing. And if something is missing, I can write it myself, in Python, and have it work seamlessly with the rest of Jax (autodiff, jit, etc.).
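To make that last point concrete, here is a minimal sketch (assuming jax is installed; softplus is just a stand-in for some "missing" function you write yourself): a plain Python function written against jax.numpy composes with grad and jit unchanged.

```python
# Sketch, assuming `jax` is installed. `softplus` stands in for a
# "missing" function written by hand; it composes with the rest of
# Jax (autodiff, jit) with no extra work.
import jax
import jax.numpy as jnp

def softplus(x):
    return jnp.log1p(jnp.exp(x))

grad_softplus = jax.grad(softplus)  # autodiff of the hand-written function
fast_softplus = jax.jit(softplus)   # jit compilation of the same function

# d/dx softplus(x) = sigmoid(x), so the gradient at 0 is 0.5
print(float(grad_softplus(0.0)))
```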


Jax is awesome. But supporting most of numpy isn't enough, because numpy isn't composable. You want to add banded-block-banded matrices to numpy? Then you need to fork numpy (or in this case, fork Jax); in Julia this is a package, and it works with everything. You want to add names to your array dimensions like PyTorch recently did? Then, like PyTorch, you need to fork numpy; again, in Julia this is a package. You want to do both? You have to merge those two forks into each other. In Julia this isn't even a package; it's just using the two aforementioned packages together.

You want to work with units, or track measurement error (or both)? Basically the same story, except better in some ways and worse in others. Better, because you don't have to fork numpy: it is extensible enough to allow that, and packages exist that use that extensibility for exactly this. Worse, because those are scalar types: why are you having to write code to deal with array support at all? Again, these are two Julia packages, and they don't even mention arrays internally.
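As a toy illustration of the scalar-type point (a hypothetical Measurement class, not any real package): error propagation takes a few lines for scalars, and the array support being complained about is exactly the code this class does not give you.

```python
import math

# Toy scalar type (hypothetical, not any real package): propagating
# independent errors in quadrature is a few lines for scalars...
class Measurement:
    def __init__(self, value, err):
        self.value, self.err = value, err

    def __add__(self, other):
        return Measurement(self.value + other.value,
                           math.hypot(self.err, other.err))

    def __repr__(self):
        return f"{self.value} +/- {self.err:g}"

a = Measurement(10.0, 0.3)
b = Measurement(2.0, 0.4)
print(a + b)  # 12.0 +/- 0.5

# ...but nothing here makes numpy arrays of Measurements fast or
# broadcastable; that array support is the extra code in question.
```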

The problem's not Jax. The problem is numpy. Or rather, the problem is that this level of composability is really hard most of the time in most languages (including the Python + C combo; especially there, even).

It's true that this is not always trivial 100% of the time with Julia's multiple dispatch, but it is truer there than anywhere else I have seen.


> do not play nice with other python libraries.

Julia hasn't been around long enough to build an ecosystem with multiple commercial giants building competing products.

> Julia does not make that sacrifice.

Sure, Julia sacrifices your sanity on the altar of stack traces, method resolution, and time to first plot, among others.

> fairly limited subset of python

Fantastic comment, always brought up, except that this same subset of Python (really, how to use the CPU efficiently) is about the same subset that anyone writing about Julia performance is preaching.


I think a really good example of how Julian tools are composable and work with the whole of the language, while that is not the case with Pythonic tools, is autodifferentiation. The many different Julian autodiff libraries have little trouble performing autodiff through special functions, weird data structures, and third-party libraries. In Python, none of the autodiff tools play nicely with scipy or with mpmath or with sympy (or qutip, which is something I need). This is not about commercial competitors either: all of the libraries I mentioned are non-commercial.
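A minimal sketch of that failure mode (assuming jax is installed), using the stdlib's math.erf as a stand-in for a scipy/mpmath special function: jax.grad cannot trace through it, because it only accepts plain Python floats.

```python
# Sketch of the composability failure (assumes `jax` is installed):
# math.erf only accepts real Python floats, so jax.grad cannot trace
# through it; a scipy or mpmath special function fails the same way.
import math
import jax

def uses_stdlib_erf(x):
    return math.erf(x)          # opaque to Jax's tracer

try:
    jax.grad(uses_stdlib_erf)(0.5)
except Exception as exc:
    print("autodiff failed:", type(exc).__name__)

# The workaround is a Jax-native rewrite, which then differentiates fine:
grad_erf = jax.grad(lambda x: jax.scipy.special.erf(x))
print(float(grad_erf(0.0)))     # 2/sqrt(pi), about 1.128
```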

Julia provides the interoperability between such libraries, including custom datatypes, frequently with fast zero-cost abstractions.


I agree, but I think Julia had the benefit of watching Python and MATLAB suck at autodiff and doing better as a community. That said, PyTorch and Jax are great.

> fast zero-cost abstraction

Until you get a compiler error and the stack trace takes up a few screens of text. Who maintains that abstraction, by the way? Nothing is free.


> Fantastic comment, always brought up, except that this same subset of Python (really how to use CPU efficiently) is about the same that anyone writing about Julia performance is preaching.

This is completely and demonstrably false. Julia allows fast programming with macros, multiple dispatch, abstract types and more.


All those features have a cost that, in a performance-oriented setting, requires restricting the code to a subset of Julia that matches what the hardware is fast at.


Where are you getting this from? None of those features have a performance cost, if used according to some simple guidelines.

In fact, multiple dispatch is essential to getting good performance. Abstract types have a cost if explicitly used inside struct definitions, but that's why you use parametric types; and abstract types do not have a cost in function signatures.

Macros are expanded at compile time and have no runtime cost; in fact, they are often used to improve performance.

I think there's some fundamental misunderstanding going on here, but I'm not sure what it is. Or do you mean to say that if it is possible to write slow code in a language, then only a subset of that language has good performance? If that is the case, I don't know how to respond.


No, that's not true. I specifically mentioned features that can easily be made zero cost on both CPU and GPU targets.

In fact, those aren't special features, and there is no other subset. Those are the core abstractions on which everything in Julia, down to primitive types and wrappers for LLVM intrinsics, is built. Without them you wouldn't have Julia.

Julia's GPU ecosystem, with just one or two people maintaining it, wouldn't be where it is without being able to reuse those features to plug into existing machinery.


I beg your pardon. What parts of Julia are not supported by its jit compiler?


Numba isn't as slam-dunk a solution as it seems, at least when your code needs to run on something besides x86. I recently witnessed someone getting numba-based code running on an ARM-based system, and they fought LLVM packaging and performance issues for weeks. The code in question was generally working and seemed pretty good on x86, but took up multiple CPU cores at 100% on the ARM platform. Doing a straightforward port of the algorithm to C++ and wrapping it with pybind11 resulted in the same algorithm using 14% of one CPU core. Perhaps the numba implementation could have been made to work eventually, but it seemed pretty difficult to debug due to all the layers involved.


A subset of Python, for one, and debugging stays awkward with gdb.

Makes me think of this little pun: https://github.com/FluxML/Flux.jl/blob/master/src/utils.jl#L...


The subset of Python compatible with Julia's whole performance story (mostly; structs are the missing piece), yes, sure.


You mean the type system and multiple dispatch? I guess those are fairly minor points, yes.


One performance feature I find really nice in Julia is that you can write your own function (cost function, function to integrate, whatever) and pass it into a solver/integrator/etc. It will then be inlined and compiled into that external library.

Does this work with numba? Are your jit-compiled functions compiled into external modules?


Even if structs worked, they wouldn't be part of a good type system; they don't work with abstract interfaces/inheritance, traits, multiple dispatch, fast union types, zero-allocation immutables, or any of a myriad of other things.



