
Complexity matters. Python originally had success because it was easy to use. It was one of the first languages I learned after C++ to just get stuff done.

People are not leaving Python for Haskell, Rust, Swift, or Scala, because those languages are too complex to deal with.

However, what we see time and again is that languages which are easy to use and offer real advantages do gain adoption. The rising popularity of Go is a good example.

This is why I believe Julia has a real chance against Python. It checks all the right boxes: it is easy to learn, has simple tooling, and is highly productive while delivering great performance.

Sure, Python is not suddenly going to be knocked off its throne. But the fact is that Julia has far more growth potential. Packages are built faster because there is no need for C/C++. Packages also combine far more easily, thanks to multiple dispatch and the lack of C/C++ dependencies. This is a hard one to explain in a short text, but Julia has a unique ability in how packages can be combined. Hence with a couple of Julia packages you can squeeze out the same functionality as a dozen Python packages.
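A toy sketch of one aspect of that composability (the types and functions are made up, not from any particular packages): generic functions are open for extension, so a type defined in one package can plug into generic code from another without either knowing about the other.

    abstract type Shape end

    struct Circle <: Shape
        r::Float64
    end

    struct Square <: Shape
        side::Float64
    end

    # Each "package" adds its own method to the same generic function;
    # dispatch picks the right one based on the argument types.
    area(c::Circle) = pi * c.r^2
    area(s::Square) = s.side^2

    # Code written only against the generic function works with both.
    total_area(shapes) = sum(area, shapes)

    total_area([Circle(1.0), Square(2.0)])  # ~7.14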




I am a researcher at an algo trading shop. We are moving our core libraries from Rust + Python to Julia.

It really is amazingly powerful and has great interop with Python (e.g. seamless zero-copy array sharing).

In the past one had to muck about with Cython/C/the NumPy API to speed things up; now one can just write the functionality in Julia and make it available in both ecosystems.
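As a rough sketch of what that looks like from the Julia side (assuming PyCall.jl and NumPy are installed; the `ewma!` function here is just an illustrative example, not our actual code), PyCall's PyArray type wraps a NumPy buffer as a Julia array without copying:

    using PyCall

    np = pyimport("numpy")

    # Get the result as a PyObject and wrap its buffer without copying.
    py_x = pycall(np.random.rand, PyObject, 1_000_000)
    x = PyArray(py_x)          # Julia AbstractArray view of the NumPy data

    # Ordinary Julia loop operating directly on the shared buffer.
    function ewma!(out, x, alpha)
        out[1] = x[1]
        for i in 2:length(x)
            out[i] = alpha * x[i] + (1 - alpha) * out[i-1]
        end
        return out
    end

    out = Vector{Float64}(undef, length(x))
    ewma!(out, x, 0.1)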

Python will likely remain the premier language to do research in, but more and more work can move to Julia, which is much better at anything not immediately vectorizable.


Great news for me! I actually do Julia training. My colleagues do Python training. But we don't seem to be quite at the critical mass yet to get a lot of training requests for Julia. But I am an optimist. There seems to be a shift going on. 2020 could be a breakout year for Julia.

A lot of my job, the way I see it at the moment, is to learn about people's experiences switching to Julia so that I can explain to others why it would benefit them.

How did you guys actually end up switching to Julia? Was it a careful analysis ordered from the top, or was it more that some Julia evangelists bugged people until they tried it and realized it was actually quite useful?

I used to work in the oil industry and tried to convince people to try Julia. It would have been a huge advantage over our Python interface in terms of performance, but it was a very hard sell. People are conservative. They are very reluctant to try new things. So I am always curious how other people pull off making a change happen.


Julia was brought in largely organically. We often need performant, non-vectorizable code. Numba is actually a real pain to work with if the code is not purely numeric. It works well for optimizing a hot loop here and there, but anything more complex becomes an exercise in frustration.
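A toy illustration (not our actual code) of the kind of thing I mean: a stateful, inherently sequential loop that mixes floating-point work with a Dict keyed by strings is just ordinary Julia, whereas pushing something like this through a numeric-only JIT is where the frustration tends to start.

    # Running maximum per symbol: sequential and not numeric-only.
    function rolling_max_by_symbol(symbols::Vector{String}, prices::Vector{Float64})
        best = Dict{String,Float64}()
        out = similar(prices)
        for i in eachindex(prices)
            s = symbols[i]
            best[s] = max(get(best, s, -Inf), prices[i])
            out[i] = best[s]
        end
        return out
    end

    rolling_max_by_symbol(["A", "B", "A"], [1.0, 5.0, 3.0])  # [1.0, 5.0, 3.0]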

So we started experimenting with Julia, and it really was the best of both worlds. Compile times can still be a struggle sometimes, but we are happy to eat a few seconds of startup cost when doing research, as most of the time we'd just have an IPython/Julia REPL going and keep the session open for hours or days.

Most people (me included) weren't prepared to pay the mental tax of writing Rust for potentially throwaway experimental code, and as we worked we realized that the core Rust libs could easily be replaced with much simpler Julia code without any loss in performance.

While the PyO3 library is awesome, it was actually quite difficult to reconcile the safety of Rust with the dynamism of Python, and the friction can really be felt in any code dealing with the interop. This is mainly on the Rust side, as it ended up being littered with casts and type checks when communicating with Python. Because Rust is compiled ahead of time, a lot of the power of its generics goes out the window in the interop layer, purely because it is impossible to know at compile time the types of objects coming from Python. This has negative implications for performance, because the Rust code can't be neatly specialized either; it has to resort to dynamic dispatch and trait objects. Julia wins here because its JIT compiler auto-specializes code at run time once the types are known.
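A small sketch of that last point: the method below has no type annotations, yet Julia compiles a separate, fully specialized native version for each concrete argument type it actually sees at run time, which is exactly what an AOT-compiled interop layer cannot do for objects arriving from Python.

    function mysum(xs)
        s = zero(eltype(xs))
        for x in xs
            s += x
        end
        return s
    end

    mysum([1, 2, 3])          # triggers a specialization for Vector{Int}
    mysum([1.0, 2.0, 3.0])    # triggers a separate one for Vector{Float64}

    # @code_typed mysum([1, 2, 3]) shows the Int-specialized, statically
    # typed code the JIT actually runs.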


I really appreciate folks like the parent here who take the time to highlight specific details like these that arose from their experience with different languages and paradigms.

These are the details that matter.


> This is why I believe Julia has a real chance against Python. It checks all the right boxes: it is easy to learn, has simple tooling, and is highly productive while delivering great performance.

The only thing that makes me anxious about Julia is Dan Luu's complaint that testing is not a big part of the culture. Generally, for code folks are going to rely on to be correct, I want some mechanism for assuring correctness. Types, tests, proofs, whatever, but I want to see them.

Admittedly this is true of a lot of scientific computing, so I may be misplacing my nervousness.


In 2016, yes. But this has very much been addressed. At this point, not only is Julia well-tested, it's so well-tested that it has to carry around its own patched LLVM and its own math library in order for its full test suite to pass, because, for example, it requires functions like sin to be correct to 1 ulp, which isn't actually true of a lot of implementations! Then when you go to packages, there's a huge culture of testing. Compare the test suite of OrdinaryDiffEq.jl (just the ODE part of DifferentialEquations.jl):

https://travis-ci.org/JuliaDiffEq/OrdinaryDiffEq.jl

which does convergence tests on every algorithm, along with regression tests on every interpolation and analytical-value unit tests on each feature, etc. Meanwhile, scipy.integrate has some of its own algorithms, but only a few regression tests, and most tests just check that things run: https://github.com/scipy/scipy/tree/master/scipy/integrate/t... . Same with other libraries like torchdiffeq: https://github.com/rtqichen/torchdiffeq/tree/master/tests . So Julia's testing culture is now strict enough that things commonly accepted in other programming languages would be a rejected PR due to lack of testing! And for good reason: these tests catch what would otherwise be "performance regressions" (i.e. regressions where the algorithm no longer hits its theoretical properties) all the time!
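For readers who haven't seen one, here is a hedged sketch (not taken from OrdinaryDiffEq's actual suite) of what a convergence test looks like with the standard Test library: check that forward Euler on y' = y really is first order, i.e. the error roughly halves each time the step size is halved.

    using Test

    function euler(f, y0, tspan, n)
        t0, t1 = tspan
        h = (t1 - t0) / n
        y = y0
        for _ in 1:n
            y += h * f(y)
        end
        return y
    end

    # Error at t = 1 for y' = y, y(0) = 1 (exact answer is e), at three step sizes.
    errs = [abs(euler(y -> y, 1.0, (0.0, 1.0), n) - exp(1.0)) for n in (100, 200, 400)]

    # Observed order of convergence between successive refinements.
    orders = [log2(errs[i] / errs[i + 1]) for i in 1:length(errs)-1]

    @testset "forward Euler is first order" begin
        @test all(o -> isapprox(o, 1.0; atol = 0.1), orders)
    end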


Thank you, this is very heartening.


My counterpoint to this would be that everything you need for proper testing already exists in Julia.

Also, I would argue that Julia is a much easier language to write correct code in than Python. Python suffers from the general OOP problem where it becomes hard to isolate problems due to all the mutation (imperative coding).

While Julia is not a purely functional language, you end up programming in a far more functional style in Julia because it supports that style much better.

My own experience working with both languages suggests that I am able to write more pure functions in Julia than in Python. I can crank out small isolated functions which I quickly test in my REPL environment as I go.

It is a different way of working. My Python friends write more formalized tests than I do as they code. Julia is perhaps more in the Lisp tradition: you are continuously writing and testing as you go in the REPL. Some of those tests make their way into my test suite, but not all.
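A trivial example of what that workflow looks like (the function and values are made up): write a small pure function, poke at it in the REPL, and promote the checks worth keeping into the test suite.

    using Test

    # A small pure function: easy to reason about, easy to test.
    midprice(bid, ask) = (bid + ask) / 2

    # In the REPL you just eyeball a few calls as you go:
    midprice(99.0, 101.0)        # 100.0

    # ...and the checks worth keeping graduate into runtests.jl:
    @test midprice(99.0, 101.0) == 100.0
    @test midprice(100.0, 100.0) == 100.0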

Because we are generally not writing server code in Julia, testing is less important. If the program crashes, so what? What matters is the correctness of the numerical output. Yes, you need tests for that.

But I would speculate that fewer tests are needed to verify the correctness of numerical calculations than to secure the uptime of some server service.



