Julia 1.6 addresses latency issues (lwn.net)
177 points by leephillips on May 25, 2021 | 157 comments



So I used to be a big proponent of Julia, and in some ways, I still am. But I very recently tried to write a high performance production system in it, and was sorely disappointed. The tooling is just so buggy and it's clear that the community isn't really interested in using it for anything besides modeling/research in a Jupyter notebook.

Things that kind of suck about using Julia for production:

1. I never could get Revise to work, so I had to restart my REPL every time I changed any code. Even though Julia 1.6 was a lot faster than 1.5, it still took too long.

2. Couldn't find a static type checker that actually worked (I tried JET and StaticLint). I feel like static typing is just so important for a production system, but of course the community isn't really interested because of the research focus.

3. Editor tooling. The LSP server absolutely sucks. I first tried using it with emacs (both lsp-mode and eglot), but it would crash constantly. I then switched to VSCode (much to my chagrin), and that worked marginally better, though still very poorly. It was clear that the LSP server had no idea what was going on in my macro-heavy code. It couldn't jump to definitions or usages much of the time. It could never correctly determine whether a variable was unused or misspelled either. Coupled with the lack of static type checking, this was extremely frustrating.

4. Never felt like the community could answer any of my questions. If you have some research or stats question, they were great, but anything else, forget about it.

With all of that being said, I do still use Julia for research and I find it works really well. The language is very nicely designed.

All in all, I decided to ditch Julia and go with Rust (after some consideration of OCaml, but unfortunately the multi-core story still isn't there yet) and am a lot happier.


I agree with most of what you said and share your frustration with Julia's tooling. However, I am not sure how well I would expect a static checker to work for Julia. Julia, after all, is a dynamic language, and while you may disagree with that choice, dynamic typing permeates the language. There is no pattern matching and there are no language features to manipulate types, for example. Julia's AST and IR share a lot of basic data structure, thanks in large part to Julia's dynamic nature. Since adding static typing to Julia at the current stage of its development would be almost an upheaval, I am not counting on it.

This means there is an upper bound on how good the editing and refactoring tooling can be. I suspect the best will be a little below Python's tooling, because Python's classes help in disambiguating methods.

One feature of Julia I think could be improved is the `include` statement used to import code, similar to C's `#include` preprocessor directive. This makes finding functions within a package an absolute nightmare. I think it also restricts the LSP's ability to find definitions and usages, since it tends to make a Julia package one big source file as far as the compiler is concerned. I saw that in Julia's compiler code the core developers include C source files, not header files mind you, in other C source files, so perhaps the feature stems from their personal preference. Python's file-per-module may be too restrictive, but Julia's include is too loose.
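
To make that concrete, here is a minimal sketch (file names are hypothetical) of the include-based layout most packages use; each `include` textually pastes a file into the enclosing module, so the whole package ends up as one flat namespace from the compiler's point of view:

    module MyPackage
    include("types.jl")    # pastes the contents of types.jl right here
    include("solvers.jl")  # definitions from solvers.jl also land in MyPackage
    end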


Your concerns are absolutely valid. Julia tooling is lacking, and tooling quality can be more important than language quality. There is a reason why Java is so popular.

I feel that most of your points could be addressed if the compiler's "abstract interpreter" (static analysis) procedures were exposed and reliable/stable. Some packages (e.g. Mjolnir.jl [0]) attempt to re-implement it. Others (e.g. JET.jl [1]) try to hook into Julia's undocumented/private compiler internals (Core.Compiler). Rightfully so, only the most intrepid are willing to do this.

[0]: https://juliapackages.com/p/mjolnir

[1]: https://juliapackages.com/p/JET


It's sad to see that the Julia ecosystem does not address these issues at all. Maybe Julia people are in some kind of bubble of people who like Julia and generalize that to all potential users and contributors. What probably happens is that people with other workflows (e.g. non-REPL/notebook) and past experiences (esp. with more "general purpose" languages) just give up and are never heard from again.

I've ranted about very similar things for some time now, had some fruitful discussions, and even offered to help scratch some of the itches that I and others are suffering from. I've now mostly given up too.

It's a bit sad for me. I think Julia gets many things right and the core concepts are potentially revolutionary. But to me it seems that the Julia ecosystem is a bit too jealous of its discovery to let it free. Weird.


I'm a Julia user who has also given up as a Julia community member. The bubble is real. Too often, when I wanted to do something outside of that bubble and asked for help, I got unhelpful advice which amounted to "why do you want to do something outside of our bubble?"


Can you give some examples?


One example. Not a very important one, but illustrative.

https://discourse.julialang.org/t/why-no-base-iterators-map/...

The "bubble effect" here was assumption that everyone has internet access and dependencies to external ( community maintained ) packages are zero cost.

Happily 2 years later this was resolved to my satisfaction.


Actually, probably not that weird. It may be a sort of impedance mismatch between the academic and software scenes. It's very common in academic circles to be quite jealous of ideas, especially good ones. The academic game is often to get your name associated with some good idea, and this is what carries a researcher through their career. I find this rather counterproductive and try to avoid it myself.

I'm a researcher but I have a background in software development, and especially open source. Academia is way behind software in openness, which is rather bizarre as the institutional role of academia is usually seen as providing new ideas and challenging old thinking so the community can prosper. There's some kind of a bug in the current academic culture that causes it to freeze up when it should fulfill its promise to share its findings with the world.

Probably a good way to debug this thing is to follow the money, and to ask why on earth we have these weird hats and robes.


I agree. The Julia community is defensive. When you criticize Julia, often their first reaction is to say you are doing something wrong or to convince you Julia is perfect for everything. I was enthusiastic about Julia but not anymore.


It's not the first community with that attitude that I've witnessed. I first saw this with Ember.js (the JavaScript framework everybody has forgotten about), then later with the Swift programming language. It could be interesting to investigate why things like this happen and how a community needs to be managed in order not to run into the same problem.


I'm a moderator at Julia's discourse site. We've seen this happen and we're working on improving this, but I'm also (biased and) sympathetic to the "Julia community" writ large.

Julia's optimal sorts of workflows are different than many expect — especially for folks coming from static languages. Julia is a dynamic language. But also, Julia isn't Python. There are ways in which your workflows from other languages just aren't optimal in Julia. So when someone comes in "hot" and rants about how their workflow isn't working out how they expect, others jump in and — yes, defensively at times — point out alternatives that work for them.

I think some of this tension comes from an expectation mismatch. Some of the changes noted in these comments about how Julia is presented on julialang.org were made specifically due to this sort of expectation mismatch.

This doesn't mean that nobody in the community cares about improving workflows — in fact I know that's not true. But if you want to use Julia today, there are definitely some happy paths that we should guide folks towards.


> Julia's optimal sorts of workflows are different than many expect — especially for folks coming from static languages. Julia is a dynamic language. But also, Julia isn't Python. There are ways in which your workflows from other languages just aren't optimal in Julia. So when someone comes in "hot" and rants about how their workflow isn't working out how they expect, others jump in and — yes, defensively at times — point out alternatives that work for them.

And that's exactly what I've seen over and over again, in different communities, such as the ones I've mentioned, this attitude of "you're holding it wrong".

No language or tool can be everything to everybody, that's perfectly reasonable. If the goal of Julia is different from that of people that want to run applications in production—or at least those concerns aren't the most important ones—fair enough. But all too often (and I'm not necessarily talking about Julia here; I haven't spent enough time on the Julia forums, I'm mostly referring to some of the other examples I've cited) real concerns are just dismissed out of hand.

I also don't believe that enforcing certain very specific workflows at the exclusion of any others is necessarily a great way to become a language that lots of people want to use. People come from different backgrounds and have different preferences and needs (sometimes for very good reasons), successful languages like Python recognise this and let you work with a wide range of tools.

Finally, I'm just very skeptical about a certain kind of NIH syndrome that I've seen in some of these communities, something like "encapsulation? well, all these other languages might need it, but we actually don't, because XYZ" or similar things.


>And that's exactly what I've seen over and over again, in different communities, such as the ones I've mentioned, this attitude of "you're holding it wrong".

You're really reading this wrong. The issue is more about personal knowledge. I see this on the Discourse sometimes where someone comes blazing in saying "I want to use PyCharm for Julia and I run into ...", "Special Lisp-only IDE has a Julia plugin and it fails because ...", or "I have this special terminal setup and I need 0 latency and ...". The answer of course is, if you need help, use VS Code or Juno/Atom. Those have a larger userbase that has fixed exactly those issues and a helpful dev community on those sites who will get you up and running. That's not "you're holding it wrong", that's people trying to help you into an area where they know how to help you. You can shrug it all off and blame it on them, but at the end of the day, people are sharing what they know works, a reliable solution they built for exactly this problem, and then are getting flamed on the internet for having "NIH syndrome" for not fixing some random IDE plugin they've never used. I just don't see how you can expect every person to be an expert on every possible workflow you can imagine and start throwing flames when someone tries to lead you to water.

In the Julia community in particular you tend to have people who are more experts in their scientific domains. I am in that category: I can help you in anything ML, high performance computing, differential equations, etc. but I use Windows and the Juno IDE. If you open an issue about something weird happening on emacs, I wouldn't even know how to get it installed, sorry, that's not my expertise. The Julia community has a lot more people in probabilistic programming than it has Javascript devs, and so people share how they know how to fix your problem "can you run it in VS Code and show the profile?", not "here's how to run some daemon server thing" etc. And that's perfectly fine, it has its advantages and disadvantages.


I mean, you're totally free to blaze your own development workflow trail. Even better if you can help in the ongoing clearing and paving of the less-used trails. But when someone complains that a less-used trail is rocky, it's gonna be hard for everyone on the golden brick road to avoid suggesting taking their route... especially if it's leading to the same destination.

And yes, that destination does include production.


Yes you are, but then you have to wait a minute or so after any single change to any file if you do a script workflow. If you do a Revise workflow you end up doing mostly the same because it just breaks all the time and you have to reload the session. Or you can use some bizarre hack like DaemonMode.jl that also just breaks all the time.

Julia is hard to debug as is (and this is fully acceptable to me due to the architecture) and these hacks on hacks make it just impossible.

Julia suffers from the same "fancy calculator" approach as e.g. R and Matlab, and, like them, refuses to take note of decades of software development experience showing that this just leads to a total mess once you have anything more complicated than a one-off analysis. It's like insisting on editing files with ed because that's how it's always been done.


And this is precisely why this dynamic exists — because there are workflows with far less friction than what you've experienced. I and others successfully use them every day for things far more complicated than one-off analyses.

I think your view of Julia as a "fancy calculator" may be part and parcel of the difficulty. I do use it as a fancy calculator, but I also use local packages all the time — and that's where you'll find the best success in both tooling and code structure for sustainable (and modern) development. We need to do a better job of helping folks develop and work with Julia effectively before it becomes frustrating.


Friction in general, or friction specific to Julia?

My view is that the Julia (and R and Matlab etc.) ecosystem sees itself as a "fancy calculator". It's hard to integrate with anything else, and it almost actively tries to make e.g. a Unix-style workflow difficult (not much CLI, everything done in the REPL, e.g. package management). Not as bad as e.g. R, where it's almost impossible to just run a file of code instead of doing a "workflow session".

Perhaps there is some disconnect in how the different parties in this discussion see the whole thing. To me this sounds a lot like "you're holding it wrong". What background are you coming from and what languages/programming environments are you familiar with? I get the impression that you don't have much general purpose programming background and tend to assume that people are using "wrong workflows" because they are just stupid or something. That is how it comes off to me; I think you mean something else, but I don't know what that is.

My approach is: I want files with code and I want to run some of those with some kind of entry point. And preferably organize the code into files or something like that. This is the usual setup for general purpose programming languages. In general purpose programming you rarely stick solely to one language; any larger project is usually a mishmash of different stuff, e.g. glue code in Python, tight loops in C, maybe some bash thrown in to do more trivial chores.

This is the overall structure with practically all general purpose programming languages. It requires interfacing between these languages (often done in bash or something, in unixy systems at least) and tools that work regardless of language (e.g. version control, grepping, Make etc.). Julia (and R and Matlab etc.) just plain refuses to even consider this a valid approach. If you mention it somewhere, you tend to get bizarre instant putdowns that provide no real rationale, i.e. "you're just holding it wrong".

In programming I tend to think of the whole computer, its OS and so on, as the programming tool. My impression is that for Julia folks, Julia is something totally separate from everything else on the computer. But I don't understand why. Is it that switching syntax on the fly is felt to be too cognitively demanding or something? That is something one gets used to very fast, and it also tends to ease the IMHO weird hang-ups on syntax. In fact, I think Julia has made some (IMHO misguided) syntactic decisions just to do something differently from Python for the sake of being different. I find this totally senseless.

Us CLI folks are not even asking for much. Just a way to call a program with arguments, and a way to avoid compiling identical code every time you run something. That's it. Is that unreasonable? It wouldn't take anything away from people using REPLs and notebooks. We don't want to change the language; the language is fine and that's why we'd like to use it, but we just can't because of these paper cuts!

I've expanded on my rationale for my workflow and how Julia makes it really difficult e.g. here, where I admittedly started with too strong wordings out of frustration: https://news.ycombinator.com/item?id=26134970


>Perhaps there is some disconnect in how the different parties in this discussion see the whole thing. To me this sounds a lot like "you're holding it wrong". What background are you coming from and what languages/programming environments are you familiar with? I get the impression that you don't have much general purpose programming background and tend to assume that people are using "wrong workflows" because they are just stupid or something. That is how it comes off to me; I think you mean something else, but I don't know what that is.

No, you're not stupid. You've correctly identified current flaws in the tools, and the community tries to lead you to the tools that are known to work well in spite of the issues. We need to work a little bit more on the other tools to make them as good, but large parts of the community are just not the right people to work on that problem. It's not a design problem, it's fixable, but you just have more people who know how to make new differential equation solvers than you have people who can compile a binary, so the tooling ecosystem moves at a different pace from the scientific packages. This is changing with the help of some commercialization aspects though, but it is something we have to be cognizant of.


IMO, if Julia lacks the necessary tooling, don't over-advertise it. When I say the lack of static compilation is a design problem, I mean such a critical feature should be in v1.0. Without it, many common tasks in other languages become difficult in Julia. For language adoption, you often only have one chance. If you push someone away, it is much harder to win them back.


IMO, you are overestimating how big a deal static compilation is. Yes, there are usecases where it's important, but acting like a language shouldn't tag version 1.0 without static compilation just sounds ignorant and self centred.

The world is bigger than you.


This kind of comment just reinforces the defensive perception of the community. He clearly stated that it was just his opinion. Why not just point out that "common tasks" always refers to one's own bubble?

There is no need to insult people.


He says it's his opinion, but then states that the language shouldn't have tagged version 1.0 without it, and that people should not advertise Julia because it's missing such an important feature.

I totally get that there are workflows that Julia is inappropriate for, but when someone says that a tool shouldn't be released because it doesn't target their workflow, I really don't know what to call that other than blindingly self centred.


Why is static compilation such a major feature for large-scale differential equation solving? That's what I advertise Julia as good for. Probabilistic programming as well. Etc.


Static compilation is not needed for the numeric computing that Julia is great at. However, Julia is advertised as a general purpose language, which IMO is obvious in your "What is Julia?" section. You pose Julia as a mighty language for everything and your supporters think this way, too. I have had conversations with multiple Julia supporters who thought Julia ought to replace Python and even C. When people like jampekka pointed out that Julia lacks the necessary features to make this happen, these supporters got defensive and made absurd claims beyond their domain of knowledge.


Ah yes, Python, the famously statically compiled general-purpose programming language :p

More seriously though, just because some language may be considered "general purpose" doesn't mean it's the best language for everything, or should replace other languages outside of whichever particular applications where it excels.


>You pose Julia as a mighty language for everything

No I don't. It needs some work in some places.


> Us CLI folks are not even asking for much. Just a way to call a program with arguments, and a way to avoid compiling identical code every time you run something. That's it. Is that unreasonable?

Look, everyone wants static compilation, and it's almost certainly going to happen eventually. It's not unreasonable at all to want that. The only thing that would be unreasonable is to (apparently repeatedly!) imply or assert that this lack of static compilation (and/or better caching) is due to some bizarre perversion on the part of the community.

If you want it to happen faster, then stop complaining about it on HN, get familiar with the current attempts at true static compilation in e.g. https://github.com/tshort/StaticCompiler.jl/pull/46 and start contributing yourself.


I don't especially want a static compiler. I'm fairly happy with Python workflow even though it's totally dynamic and interpreted, but it's dog slow on numerical stuff and the design is showing its age.

I like the "AOT-JIT" approach of Julia, and I think a separate compilation step is quite a stupid idea in the first place. I see it as some historical relic from computers with extremely limited memory (by current standards) and outdated business models that want obfuscated binaries.

My main gripe is just that I have to do it every time I run anything if I'm not using a REPL/Notebook, which drive me insane with their inconsistent and opaque global state. This could be solved mostly by caching the compilation results and compiling new stuff incrementally. But I'm starting to lose any hope that this is gonna actually happen.

Some lesser stuff follows from lack of this, but they should be relatively easily fixed.


Couldn't agree more. What you describe is a design flaw that the Julia community keeps denying. The Julia community also behaves defensively when it comes to performance. They refuse to accept that Julia doesn't compete with C/Rust on many non-numeric tasks in the real world. In addition, the Julia community has its own circle and its own style of writing code. They will trash your code even if it is efficient and logically correct. I once compared implementations in different languages and found Julia was slower than the others. Several of them jumped on me, criticized my coding style and blamed me for their own fault. But in the end they couldn't improve the performance. On the Julia Discourse, I was alone in the argument. That was a really bad experience.


Can you please post a pointer to the thread in question?


Sorry, I would like to keep my online anonymity.


If you won't post a link publicly, could you at least contact the julia community stewards [1]? If there are community dynamics that are pushing people away, the stewards need to be told.

For what it's worth, I've seen some nasty (though well-meaning) pile-ons on the Julia Discourse, but for the most part it's a pretty nice place. I find, though, that the Zulip [2] and Slack communities are generally friendlier and more relaxed places.

[1] https://julialang.org/community/stewards/

[2] https://julialang.zulipchat.com/


I use Julia as my main driver these days and have shared some of this experience, but not all of it.

1) I use VScode and have had 0 problems with Revise. It Just Works when using the Julia extension + built in REPL. I actually prefer the Julia environment to Python in VScode, I have way fewer problems when doing a Notebook-like workflow where I’m writing library code at the same time

2) Agreed, I also wish there was a better story here. I’m constantly frustrated by how little static support there is. It’s getting better though, eg precompilation in 1.6.

3) I haven’t had any issues here outside of the occasional update to VScode Julia extension that botches things

4) I’ve had quite a lot of luck on discourse and GitHub issues, as well as slack for the occasional small question

Rust is great for use cases where you have specific resource management needs, although seems an odd choice for general purpose scientific computing.


> had 0 problems with Revise

Try changing a struct.


Yes, that is an important limitation of Revise that I hope goes away.

Until then, consider this work-around: https://github.com/BeastyBlacksmith/ProtoStructs.jl


That's definitely a problem. I work around it by naming all my structs with a `_1` suffix and running `%s/_1/_2/g` (Vim find-and-replace) every time a struct changes. Otherwise I find Revise works great.


Wrap it in a module, then you can change it without any problems. It's not ideal, but it works very well in practice.
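
A minimal sketch of the workaround (plain REPL, no Revise): re-evaluating the whole module swaps in the new struct, at the cost of a "replacing module" warning, and old instances keep the old type.

    module Shapes
        struct Point
            x::Float64
            y::Float64
        end
    end

    p = Shapes.Point(1.0, 2.0)

    # After editing, just re-evaluate the module with the extra field:
    module Shapes
        struct Point
            x::Float64
            y::Float64
            z::Float64
        end
    end

    q = Shapes.Point(1.0, 2.0, 3.0)   # p still refers to the old type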


Revise can’t change structs in modules either. The only sane way I’ve found to handle this is using a Pluto notebook. But that’s not for everybody.


I can definitely redefine a struct in a module.


Really? I’d love to know how to do that because I run against this issue every day. Are you in the package’s environment or using `includet`?


What you've pointed out might all be valid. But they seem to be issues with maturity. When Python was at a similar age, did it have all these issues worked out?

Not sure if you can generalize it to "Julia community isn't interested".

If we have more people liking Julia and using it for general purpose computing (which it is capable of), then more of these tools will mature over time.

Also, Julia is not in the same niche as Rust though, it's more like a Python, so not sure if the comparison is apt.


Problem for Julia is, the Python ecosystem gets better every year. Python might be getting better faster than Julia can catch up.


I highly doubt it, for one core feature: performance. For a little while it looked like Python 4 might fix this (by using type hints to JIT), but that was quickly given up on. IMO a big shame, as it is the one thing holding Python back.


I think Julia actually does a pretty good job of showing that the problem isn't types. The problem is semantics. Julia's restriction of eval to the global scope is a perfect example of this. It has a pretty minor effect on code, but a massive effect on performance. Python has a ton of things like this where a slightly different (and totally breaking) semantic change prevents optimization.
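
A tiny illustration (the function name is made up): `eval` always runs in the module's global scope, so the compiler never has to assume that locals can be rebound from the outside, which is one of the things that blocks optimization in Python.

    x = 10                # a global x

    function f()
        x = 1             # a local x, invisible to eval
        eval(:(x + 1))    # evaluated in global scope, so it uses the global x
    end

    f()                   # returns 11, not 2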


My point wasn't about the type system per se, but the ability to compile code down to native execution without going through an interpreter. Whether you solve that through type inference or type hints I honestly don't care, but afaik Julia has an easier job by allowing you to make types explicit on the data structure side while staying flexible on the algorithm side. Semantics are certainly also a big part, but it would be nice to have first-party support for what Numba does: JITing a reduced set of semantics while keeping everything else compatible.


Julia runs fast enough to catch Python, because Python will never fix the main reason Julia was created in the first place: the community embracing a JIT compiler instead of rewriting working code in C.


I would also add:

5. The module system is very primitive.

6. The testing framework is extremely barebones.

I agree with your assessment, Julia is great for crunching numbers etc., but I wouldn't write a whole application in it.


Anything in specific you feel like is missing from the module system and testing framework? Knowing what features people want helps a lot with setting development priorities.


I could make a laundry list of features (for example: not being forced into a linear order of includes (or at least having idempotent "include"s), no namespace collision for test files, not having to list all the tests in "runtests.jl",[1] support for pending tests, support for parameterised tests, proper assertion libraries with more useful error output, etc. pp.), but I also feel like Julia developers could just look at any of the other major general purpose languages, almost all of which have better module systems and testing frameworks because these are things that matter to application developers.

[1]: I actually tried to automate some of this in a project: https://gitlab.com/pfrasa/morcrypto/-/blob/master/test/runte...


Some of these would definitely be good to have issues/pull requests for, to track progress on. That said, I'm not sure how useful parameterized tests are in a language with first-class functions. Also, for more useful error output, a PR was merged just a few days ago that improves it.
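
For what it's worth, the closest built-in idiom to parameterized tests is a for-loop `@testset`, which produces one labelled test set per parameter; a minimal sketch:

    using Test

    @testset "square of $x is non-negative" for x in (0, 1, -3, 2.5)
        @test x^2 >= 0
    end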

In general though, specific issues with use-cases (or even better PRs) are the best way to direct change in an open source language. It's much easier to prioritize development if we know that a specific feature is wanted.


It seemed to me that no one actually made projects that used the module system at all? Like no namespacing whatsoever, just a bunch of `include`s.


Every package is a module, and Julia is well known for not having monorepos, instead splitting packages into small modules. So I don't quite see how this follows: there's no PyTorch; instead there's Flux, NNlib, NNlibCUDA, NNlibAMDGPU, CUDA, GPUCompiler, KernelAbstractions, ... (it keeps going), all of which are documented packages in their own modules that make up the ML stack.


I feel like the DifferentialEquations package is a screamingly obvious counterexample to your claim. It has many sub-modules. Moreover, the whole point of multimethods is to live in the main semantic namespace; you do not need as many namespaces because there are no names to clash.


> you do not need as many namespaces because there are no names to clash.

You know, maybe you're right and I can't prove you wrong, but this is the kind of thing that IMHO application developers read and think "yeah no, been there, done that, never again". Because many people's experience is that names do clash, again, and again, and again, and I fail to see why multimethods should solve that.


Does this example help: there is a function defined in Base, e.g. `pop!`. In languages that do not use multimethods, when you create your own "FancyContainerLibrary" you need to create a new `pop` and make sure it does not clash with the existing one. In languages with multimethods you just extend `Base.pop!` to work on your new type. Julia can do both: it has perfectly normal support for namespaces, but frequently you will be extending Base methods instead of working in your own namespace (of course, all your private functions are usually unexported and available only from your namespace, so they do not pollute the global space).
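
A minimal sketch of that, with a made-up container type:

    struct FancyStack{T}
        items::Vector{T}
    end

    # Extend the existing generic functions instead of inventing new names:
    Base.push!(s::FancyStack, x) = (push!(s.items, x); s)
    Base.pop!(s::FancyStack) = pop!(s.items)

    s = FancyStack([1, 2, 3])
    pop!(s)    # callers keep using the pop! they already know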


I've only kicked the tires on Julia a bit, but it seems like while methods are less likely to clash, they still might if you happen to use the same name and argument types? Or maybe unexpected method resolution causes a bug?


While that might happen (and probably cause a method redefinition), there is an important convention that helps prevent it: your package must either own the function or at least one of the types used for arguments, otherwise you're practicing type piracy [1]. I've seen automated scripts that can detect type piracy, so hopefully it could be part of a linting toolset eventually, since not everyone might be aware of the rule; but at the very least popular packages shouldn't have it - or at least not in a way that may cause bugs (and if any package has it unintentionally it's probably worth creating an issue).

[1] https://docs.julialang.org/en/v1/manual/style-guide/#Avoid-t...
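
A small illustration of the convention (the types are made up):

    struct OurThing
        data::Vector{Int}
    end

    # Fine: we own OurThing, so extending Base.length is not piracy.
    Base.length(t::OurThing) = length(t.data)

    # Type piracy: we own neither `+` nor Int nor String, so this could
    # silently change behavior inside completely unrelated packages.
    # Base.:+(a::Int, b::String) = a + parse(Int, b)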



I'm kind of unclear what module signatures would be since Julia lets code define modules at runtime, so I don't think this could be meaningfully defined for Julia.



For most intents and purposes one can use a (potentially singleton) type passed as an argument to functions instead of a module containing functions. Doing that gets you all the tooling and power you could want (though probably in a different way than you want it), since multiple dispatch takes care of it.

You can compare and contrast MLDatasets.jl (uses a submodule per dataset) vs CorpusLoaders.jl (uses an extra type argument per dataset).
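
Roughly, the pattern looks like this (the dataset types here are placeholders, not the actual APIs of those packages):

    struct MNIST end
    struct CIFAR10 end

    # Dispatch on the type argument instead of namespacing per sub-module:
    load(::MNIST)   = "loading MNIST..."
    load(::CIFAR10) = "loading CIFAR10..."

    load(MNIST())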


I agree with you guys. I have been coding in Julia for 2 years and very little has happened in tooling. For example, the graphical profiler is buggy. I love the language though. Hoping things will change!


> after some consideration of OCaml, but unfortunately the multi-core story still isn't there yet

It is supposed to land in the release after 4.13, which is the next one.

Regarding scientific computation libraries, there is Owl[1][2], which now has an almost finished book[3].

[1] https://ocaml.xyz/

[2] https://github.com/owlbarn/owl

[3] https://ocaml.xyz/book/


I'm just curious. Do you do symbolic math in Rust? I found a crate [1] that is a wrapper for SymEngine. However, it is no longer maintained.

[1] https://github.com/podo-os/symengine.rs


How did you cope with the lack of REPL on Rust's side?


Rust REPLs do exist, though I don't need one very often because the static typing lets me know exactly what's going on in each line of code.


It is amazing and frustrating to me how much latency affects my productivity. I wish I could more effortlessly switch between tasks, or just meditate and relax while I wait for something I just did on the REPL to finish. But I don't. More often than not, a 30-second delay to e.g. plot something destroys my ability to stay in a productive zone.

I have been using Julia 1.6 since the release, and I'm so grateful not only that some computations run a bit faster, but that the interactivity is so improved.

Even seeing a progress bar can help me stay focused, because it can be fun to watch (parallel precompilation is especially fun). When a command just hangs, I feel left in the dark about how much boredom I'll have to endure.


Very interesting UX observations on interactive programming.

Kinda like mirrors in waiting areas, I wonder whether judicious logging messages about the compilation process (not a wall of text, but just enough) would serve to keep users engaged while also educating them about the compilation happening in the background. That would help users feel more agency, and also improve their mental models of how to structure their code/activity for faster compilation.


That actually helps. Speaking from years of experience doing Android development, where in the early days (and still on some projects) you would have 5+ minute rebuild times, and it's super annoying to check minor things. Having more logs actually helped it seem faster, even though it wouldn't be, but you could also see where you were stuck and why, which helps identify bottlenecks in the build process.

Sometimes it would be enough to just google what the long task does and realise "oh wait, I don't actually need that step for my daily development" (e.g. resource crunching, crash reporting initialization etc.), or be pointed in the direction of "why isn't this caching itself".

Quite a useful thing, and should be available as an option at least. In a REPL it might be context noise, but should still be there as a --verbose option.


This is the theory behind spinners over separate page loads.


This is why git was such a game changer. Making a commit to SVN took a considerable amount of time, especially back in the day on slow internet connections. Making a commit with git is generally instant and doesn't break flow. It changes the way you work. People used to avoid committing with SVN, sometimes doing it only once a day. That seems completely insane now.

I have very little tolerance for latency in my tools. It's one of the top deciding factors for me when choosing which tools to use. It's that important.


Well, one of the aims of Julia, as I understand it, was to be an alternative to Fortran.

You should consider yourself lucky: when I started, the compile-link process took a while, and that is assuming you had real-time access.

I recall one of my colleagues, who was running her code on an ICL at AWE (Underwater Weapons Establishment), coming into the terminal room, logging in, and sighing that there were 48 jobs in GEORGE ahead of hers.

A year later we bought a Pr1me super mini, which made things a lot faster.


I'm a big fan of Julia. It does live up to its speed claims. I've implemented the board game Go in Python, Rust, and Julia and Julia is definitely closer to Rust in speed. Same algorithms were used for all implementations.

Julia's time to first plot still has some problems. The Plots library can build animations, but the time to first animation on my computer is like 10 minutes, and the time to second animation is another 10 minutes. Probably just a bug, I haven't found any other case that takes so long.

I've also mentioned before that a reinforcement learning algorithm I ported from Python/PyTorch to Flux was faster, not because of training times, but because all the other stuff that goes on outside the core training loop (and RL has more "other stuff" than supervised learning) is so much faster.


Multiple dispatch and generic programming make Julia a productive language to work with. However, a given program may have unnecessary function specializations (which affect startup compile time) or unexpected dynamic dispatches (which affect runtime performance). These can be addressed with some important development patterns: checking for type stability via @code_warntype, using opaque structs or @nospecialize when appropriate, etc. I've found the Julia community to be very helpful with regard to performance on the forums, slack, and zulip.
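
For example, a quick type-stability check might look like this (a toy example, not from any particular package):

    using InteractiveUtils   # provides @code_warntype (already loaded in the REPL)

    struct Box
        x::Any               # abstractly typed field: a classic source of instability
    end

    unstable(b::Box) = b.x + 1
    stable(x::Int)   = x + 1

    @code_warntype unstable(Box(1))   # return type inferred as Any -- a red flag
    @code_warntype stable(1)          # fully inferred as Int64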


This is exactly the problem with Julia: to achieve those (so much vaunted) C-like speeds you need quite a few contortions.


I don't think those contortions are all that crazy compared to what you have to do in C anyway. And you can choose where to spend your effort in optimizing functions.

Sometimes, a function really is performance critical so you spend a ton of time fiddling with it to make it blazing fast, other times you write something non-optimal but straightforward. It's all the same language though, and unlike tools like Cython, you don't lose all your nice high level language features when you drop down to 'high performance julia code'.


> I've implemented the board game Go in Python, Rust, and Julia and Julia

Oof. I reread this several times consecutively as "I've implemented the board game in Go, Python, Rust ...".


Same, and I literally have a correspondence game going on in another tab


what's your ogs id? I challenge u


ID is Ada Countess of Lovelace. What's yours.


> I'm a big fan of Julia. It does live up to its speed claims. I've implemented the board game Go in Python, Rust, and Julia and Julia is definitely closer to Rust in speed. Same algorithms were used for all implementations.

"Closer to Rust than to Python" is a wide range. Almost any non-scripting language (e.g. Java, OCaml, Haskell, Dylan...) would qualify, and that's normally not enough to give them a reputation as "fast".


I’ve implemented some graph-traversal algos in Julia and they were 100x faster than Python. Really in the same ballpark as C++, but with much simpler code.


> 100x faster than Python. Really in the same ballpark as C++, but with much simpler code.

One could say the same for any of the languages I listed (maybe not "much simpler code" in the case of Java).


If you want a harder guarantee: well-written numerical code will be as fast as in any other language. If you find a counterexample, let us know (it's probably something that should be fixed).

For string processing and other GC-heavy code, Julia has further to go. Julia's GC is pretty basic and needs a lot of love. That said, for dataframes-like workloads, Julia usually manages to hang with the best (data.table), and it's rarely much slower.


My assumption is that there's nothing outstanding or remarkable about Julia's performance. Comparing to Python/Perl/Ruby/TCL is a good way to make any language seem fast, and comparing to C++ is a good way to make any language seem expressive. I'd be far more interested to hear about comparisons with alternatives that might actually be competitive - non-scripting languages with a reputation for being expressive, such as ML-like functional languages, or recent general-purpose languages like Swift/Kotlin/C#.


If you want performance benchmarks vs Fortran, https://benchmarks.sciml.ai/html/MultiLanguage/wrapper_packa... has benchmarks with Julia out-performing highly optimized Fortran DiffEq solvers, and https://github.com/JuliaLinearAlgebra/Octavian.jl shows that pure Julia BLAS implementations can compete with MKL and openBLAS, which are among the most heavily optimized pieces of code ever written. Furthermore, Julia has been used on some of the world's fastest super-computers (in the performance critical bits), which as far as I know isn't true of Swift/Kotlin/C#.

Expressiveness is hard to judge objectively, but in my opinion at least, Multiple Dispatch is a massive win for writing composable, re-usable code, and there really isn't anything that compares on that front to Julia.


> If you want performance benchmarks vs Fortran, https://benchmarks.sciml.ai/html/MultiLanguage/wrapper_packa... has benchmarks with Julia out-performing highly optimized Fortran DiffEq solvers, and https://github.com/JuliaLinearAlgebra/Octavian.jl shows that pure Julia BLAS implementations can compete with MKL and openBLAS, which are among the most heavily optimized pieces of code ever written.

That seems to be very Julia-specific comparisons, which I'm sure will be oriented towards the use cases Julia has been designed for. I'm more interested in "neutral" benchmarks and more general-purpose computing areas.

> Furthermore, Julia has been used on some of the world's fastest super-computers (in the performance critical bits), which as far as I know isn't true of Swift/Kotlin/C#.

That's more a reflection of culture than performance though. Back when I worked with a bunch of data scientists they would happily run Python or R on our Spark cluster, consuming oodles of resources to do not very much, but balked at writing Scala, even though their code would have run much faster.


That's fair. That said, the fact that writing a BLAS implementation in a high level language seems like a favorable benchmark does kind of show my point that for numerical computing, Julia is really fast (especially for a high level language).

As a general purpose language, I'd probably categorize Julia as fast, but not crazy fast. Numerical computing is definitely where most of the man hours have gone into development. As a general purpose language, Julia probably ends up at around the same place as C# (not as fast as C++, but still pretty respectable). That said, this is an area where the main thing Julia needs here is more optimization on some of the libraries that already exist, and some new libraries to extend the language's reach further. I don't think there's anything fundamental holding it back from getting closer to C++ performance in general computing. It uses the same compiler (LLVM), and it's pretty clear that you can get Julia to output the same assembly code. It's just a matter of having enough devs pushing ecosystem forward.


For a non-numerical benchmark where Julia does really well, you should also check out https://h2oai.github.io/db-benchmark/. There's a bunch of work to be done to improve DataFrames.jl more, but it's already one of the fastest tools for what it does (despite having only reached version 1.0 a few weeks ago).


Ok, that's a little more interesting - a "neutral" benchmark of several tools, though still very much in a niche use case.

Is there any effort to include Julia in the TechEmpower Benchmarks? If it's trying to be a general-purpose language, that's the first place I'd look.


It's already on there for some of the benchmarks at least. It generally is roughly between 30th and 60th place depending on the benchmark (with outliers in both directions). That said, HTTP.jl is not a library that has had a ton of work in Julia, and it's probably not what you want to write a high performance webserver in at the moment. I don't think this is a fundamental limitation, but just a domain that needs a couple thousand dev hours to make competitive with some of the more mature frameworks. Another part of the problem is that Julia has some fairly well known garbage collection performance problems. GC performance hasn't been a major dev priority as of yet, since the language is much better than most high level languages at eliding allocations or stack allocating variables (so it doesn't make as much garbage for many workflows). That said, for GC heavy tasks (string stuff often is), it is a major problem that is starting to get a bunch of effort put into it. Hopefully in a year or 2, these issues will be solved. It's nothing too fundamental, just that the current GC only has 2 generations, and the heuristics for moving objects between generations are not tuned especially well.


Do you have any documentation / Github repos where you build those Go implementations? I am a huge fan of the game and would be curious to see how you built it, specifically in Python / Julia.



> but the time to first animation on my computer is like 10 minutes

Have you tried 1.6 already? I find it's substantially faster.


Yes, I was doing it just this week with Julia 1.6.


what's your ogs id? I challenge u


I use and love Julia, but I really wanted to see the general purpose language that is claimed. On one hand you see amazing scientific libs like DifferentialEquations.jl; on the other, things like the aforementioned PackageCompiler.jl just suck at generating binaries for daily use.


Isn't "generating binaries" just as bad for other interpreted (interpeted-ish) languages? If you generate a "python binary", you need to package python with your binary. Same for perl/ruby. It just seems weird that people expect julia to be able to do that. It is cute that PackageCompiler.jl exists and it is cute that more AOT compilation work is being currently done, but it seems crazy to expect Julia to be good at making binaries (and I would say that about python and perl too).

And by extension, it seems weird to me to complain that Julia is not a general purpose language because it can not generate binaries. What stops me from making the same statement about python, which is definitely general purpose?


The reason Julia should be able to do this is that it uses LLVM to generate machine code "just ahead of time". As such (at least for type-stable code), it should be possible to save the code we generate. The main place where static AOT matters for Julia isn't full applications, but libraries. Being able to generate static libraries would allow Julia to replace C++ and Fortran much more fully in places like Python libraries. Furthermore, this capability is likely crucial for major further improvements in time to first plot. Currently `@time using DifferentialEquations` takes about 11 seconds on my computer, but if more of the code could be statically compiled at precompile time, that could be reduced dramatically.
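
The closest thing available today is baking packages into a custom system image with PackageCompiler.jl; a rough sketch (the precompile script name is hypothetical):

    using PackageCompiler

    # Compile Plots, plus whatever the precompile script exercises,
    # into a reusable system image.
    create_sysimage(["Plots"];
                    sysimage_path = "sys_plots.so",
                    precompile_execution_file = "precompile_plots.jl")

    # Then start Julia with:  julia --sysimage sys_plots.so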


This is true for many functions, but afaik the LLVM code is only generated for a function paired with the types of the arguments it was called with. Since Julia functions are for the most part 'generic' and work with a wide range of argument types, you would have to restrict the compiled binary or library to a specific set of argument types. Some functions also have type instabilities and can't be turned into pure LLVM code.
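
You can see this per-signature specialization directly (a toy example):

    using InteractiveUtils

    add(a, b) = a + b

    @code_llvm add(1, 2)        # LLVM IR specialized for (Int64, Int64)
    @code_llvm add(1.0, 2.0)    # a separate specialization for (Float64, Float64)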


This is the first time I see my confusion so clearly addressed! Thanks, this makes total sense now!


> The main place where static AOT matters for Julia isn't full applications, but libraries.

That depends on the use case. With improvements in static compilation, julia could probably be a good application language. Game development would be an interesting market.


> Isn't "generating binaries" just as bad for other interpreted (interpeted-ish) languages?

I think this is the case for at least the most popular JIT'd languages: Java, C#, JS, and PHP. Also for the most popular interpreted languages: Python, Ruby and also PHP. I don't know about Visual Basic and R though.

I know that an exception is Dart, which combines a JIT and an AOT. I think Emacs Lisp can now also be compiled, but I don't know if it works with all code and is just free performance, or something more limited.

Edit: as pointed at by pjmlp, Java and C# already combine an AOT and a JIT. What I meant by the comment on Dart is that it can either be run with a VM or compiled to produce binaries.


Java and C# have also combined JIT and AOT since their inception, .NET more so.

Other examples are Lisp and Scheme variants, Eiffel, OCaml, Haskell, Prolog.


The main SDKs and programming paradigms for Java and C# don't mesh well with AOT, though: reflection, and heavy reflection-based frameworks.

Not that many places use Java/C# AOT compilation, except for games/iOS apps.

Almost every place I've seen using Java/C# was using JIT.


Android uses a mix of JIT/AOT, just as most Java embedded development.

As for not everything being supported, well that is no different from having C++ code with RTTI and exceptions disabled, or being forced into a specific linking model due to possible problems with a third party dependency.


And that is why they failed to provide proper toolkits for Android smartwatches and were forced to call Samsung for help.


Better watch the Wear talks; the only thing they are getting from Samsung is the Faces designer and the commitment to drop Tizen for Wear.

It is still plain old Java/Kotlin/C++ as usual.

Press should be better informed, but that is asking too much in modern times.


> Isn't "generating binaries" just as bad for other interpreted (interpeted-ish) languages?

I might have a counterexample: Common Lisp (a compiled language) can be run from source as a script (like interpreted-ish languages) and we can build self-contained binaries. With SBCL they weigh about 20 MB minimum (proprietary implementations do tree shaking) and they start instantly.


> And by extension, it seems weird to me to complain that Julia is not a general purpose language because it can not generate binaries. What stops me from making the same statement about python, which is definitely general purpose?

I agree that generating binaries doesn't make a language general purpose; I just tried to give an example of an ad hoc non-scientific thing that is considered "important" by the community (it's an official project) yet is stuck. The obvious move would be to just list the web frameworks, but I don't think that's fair, simply because there is no interest in them (yet).


A non-scientific thing I've been doing for the last few months at the day job, with Julia.

1) querying a time series database of systems metrics at scale for (large) fleets. This is being done via a JSON API. Directly in Julia.

2) Creating data frames from these queries, and performing fleet wide analytics. Quickly. Millions to hundreds of millions of rows in the data frames, typically 4-20 columns. Directly in Julia, no appeal to a 2nd language.

3) leveraging the power of the language to post process these data sets before analysis, to remove an "optimization" that reduced data quality.

4) operate quickly on gigabytes of queried data, threading and sharding my requests, as the server can't handle large requests, but it can handle parallel ones. Poor design, but I can work around it ... trivially ... with Julia

5) creating jupyter lab notebooks for simple consumption of these more complex data sets by wider audiences, complete with plots, and other things.

No science done here ... well ... data science maybe ... and this is specifically in support of business analytics, process optimization, etc.

Julia is an excellent language for this, 10 out of 10, would recommend.


What's a fleet in this case?


> Isn't "generating binaries" just as bad for other interpreted (interpeted-ish) languages?

Python is famously bad at this. I hope Julia's proponents don't stop at "look we're only as bad as Python".


Julia claims to "solve the two language problem". i.e. prototype in python, rewrite in c++. The two language problem is not solved with Julia if you can't effectively generate binaries.


I have never really heard the name "two language problem" to refer to what you are describing. Whenever I have heard these words it has referred to "I want a high-productivity newbie-friendly introspective language like python, but I do not want to write C modules when I need fast inner loops". Julia seems to solve this already, without providing compact binaries.

A sibling comment made a point about "compiling down to shared libraries" which seems similar to what you are describing, but that seems like it has little to do with "the two language problem".


Right. It used to be referenced on the front page of julialang.org. Seems they don't really use that in the sales pitch anymore. Maybe that proves my point. It's easy to find references to Julia claiming to solve the two-language problem though. I am exactly the kind of person this two-language problem they speak of is about.

I love Julia. Which is why it's so painful that I have to rewrite all my elegant Julia prototype code in C++, so I can compile into a shared lib for the users. Every. Single. Time. Two languages.

Now that it isn't the main front and centre claim, I feel a bit less bitter about using it as a prototyping language.

Waiting another 5 years and maybe it really will solve the two-language problem.


I think the main reason they stopped referencing that claim is that "two-language problem" means too many different things to different people. But yes, real static compilation would be great.


I think this is correct. My understanding of the two-language problem was probably not the same as the one they claimed to solve. More likely they meant: write slow code in python and then optimize inner loops with c.

Anyways, I'm glad I did invest in learning Julia. Just disappointed it didn't save me from C++. On to Rust!


It could only solve the two language problem if the "users" were writing their program in Julia themselves. Otherwise your ideal solution is still using two languages. And if they are writing their program in Julia, there's no reason to compile your code into a shared library; you'd just share a Julia package with them.


It’s not at all bad for Common Lisp, which is a superlatively interactive language.


Could you elaborate or give examples? I guess all the complaints about binaries come from people used to something like Common Lisp, but while I have a general understanding of what a lisp is, this type of "provide a binary for your interpreted-ish language" is an incredibly foreign idea to me.


Common Lisp was designed to be interactive and have a REPL. You can redefine functions, classes, etc. on the fly with strictly-defined semantics. (You don’t have to guess what happens if you, say, re-name a field of your class.) This is insanely useful during development, where you absolutely want to avoid doing full recompiles every time you make a little change you want to test. Some people call this “interactive and incremental development”. (Of course, you always have the option to just re-compile everything from scratch if you so please.)

Common Lisp was also designed to be compiled. Most implementations these days compile to machine code. Compilation is incremental, but ahead-of-time. That means you can start running your program without having yet compiled all the features or libraries you want. You can—while you’re in the REPL or even while your program is running—compile and load extra libraries later. Compilation is cached across sessions, so you won’t ever have to recompile something that doesn’t change.

Despite Lisp being mega-interactive, incremental, and dynamic, just about every implementation of Lisp allows you to just write out a compiled binary. In the implementation called “Steel Bank Common Lisp” (SBCL), from the REPL, you just write:

    (sb-ext:save-lisp-and-die "mycoolprog" :executable t :toplevel #'main)
which will produce a statically linked executable binary called “mycoolprog” using the function called “main” as the entry point.

Unless you’ve specifically programmed it in, there will be no JIT lag, no runtime compilation, etc. It will all just be compiled machine code running at the speed of your processor. (It’s possible and even easy to invoke the compiler at run-time, even in your static binary, which people rarely do, and when they do, they know exactly what they’re doing and why.)

All of this is a complete non-issue in Lisp, and hasn’t been for about 35 years (or more).


I am hopeful that Julia will be able to get this cross-session caching of compiled code. It would make restarting the REPL (e.g. to add a field to a struct) much less frustrating.


Caching is easy. The hard part here is correctly invalidating the cache. Specifically, if a user is using different libraries between sessions, figuring out which methods have been overridden (or inline code that was overridden) becomes complicated.
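A tiny sketch of why that invalidation is tricky (hypothetical functions, purely to illustrate the semantics):

    f(x) = 2x
    g(x) = f(x) + 1    # g's compiled code is specialized against the current methods of f
    g(3)               # 7
    f(x::Int) = 10x    # a more specific method shows up later
    g(3)               # 31 -- the cached native code for g(::Int) had to be invalidated and recompiled

Now imagine the later, more specific method comes from a package that is only loaded in some sessions, and you have to decide which cached code is still valid.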


Yeah, restarting a Lisp REPL after you’ve compiled your code is essentially instantaneous and completely transparent, because everything is cached and checked for changes, and 99% of the time most of your code and nearly all of your dependencies aren’t changing hour to hour.


The way that Common Lisp does this, though, is pretty much by creating a core dump that you can execute, which isn't what most people expect from an executable. It's not a _bad_ way, it's just pretty unique to Common Lisp.


What exactly are people “expecting” from an executable? It is a piece of binary code on disk, full of machine code, that runs without a VM or external runtime libraries.

    ./mycoolprog
just works. From a user perspective, there’s no difference.

The binary itself isn’t structured like a typical one with C debugging symbols, etc. But it’s also not some “faux binary” like a bunch of .pyc bundled as data and unzipped when the program runs. It truly is machine code, just arranged differently than a bunch of C ABI functions.

I claim most people running binaries don’t care about the memory layout of the binary. I certainly am never thinking about that every time I run `grep`. You don’t debug Lisp programs with C’s tooling. You use Lisp’s tooling.

(Unless, of course, you use an implementation like Embeddable Common Lisp, which does compile your Lisp program as a C program, and does produce a non-image-based executable. That’s the beauty of Lisp being a standardized language with multiple conforming implementations.)


Common Lisp actually doesn't specify a mechanism for this. There are implementations that can compile to e.g. DLL:

http://www.lispworks.com/documentation/lw71/DV/html/delivery... https://franz.com/support/documentation/current/doc/dll.htm

Unfortunately none of the FOSS implementations have this ability (to my knowledge). There is nothing inherently in Common Lisp that mandates the "core dump" delivery model.


One correction to my post above - Corman Lisp is a Free implementation that does have the ability to produce DLLs, but it is limited to Windows:

https://github.com/sharplispers/cormanlisp/blob/master/docum...


How about ECL? That should be able to do that, too.


Speeding up compilation itself is one approach to the latency issue. And there's also the idea of blending compilation and interpretation using e.g. https://github.com/JuliaDebug/JuliaInterpreter.jl .
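For anyone curious, a minimal sketch of what that blending looks like today (the call being interpreted is arbitrary):

    using JuliaInterpreter

    # Run the call through the interpreter instead of compiling it first:
    # slower in steady state, but no up-front compilation cost.
    @interpret sum(rand(100))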


V8 reduced the start-up times of WebAssembly this way, but with a single-pass compiler instead of an interpreter. Here's the article: https://v8.dev/blog/liftoff


One difference is that WebAssembly is typed and is designed to make these compilers possible.

In the context of Javascript, V8 did the opposite. Originally they had a baseline compiler for Javascript but now they use an interpreter, which reduces the startup latency.


For me the issue manifested as a 10-second latency to format a Julia file using Format.jl.

Solved via flags to disable the JIT, which brought it down to a couple of seconds. A native binary would be much nicer.
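(For reference, the flags in question are presumably `--compile=min` and `-O0`; the script name below is just a placeholder:)

    # Minimal compilation and no optimization: slower execution, but far
    # less JIT warm-up for short, one-shot tasks like formatting a file.
    julia -O0 --compile=min --startup-file=no format_file.jl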


Two seconds to process a tiny text file enters well into the realm of "completely unusable" in my eyes.

It wouldn't be so bad if the Julia developers acknowledged that this is a valid concern (even if they are not dealing with it right now, for whatever reasons) and that the ecosystem will not be considered complete until this fundamental problem is solved. But this is infuriatingly not the case. Instead, they tell you that you are "holding it wrong", that this is not really a problem, that your usage is "niche", that the interpreter is alright as it is, and that the time to first plot is unlikely to ever go below a hundred milliseconds. I find it really depressing, because the language itself is incredibly beautiful. My only hope is that an independent, fast and Unix-friendly implementation of the language arises, freeing the reference implementation of the efficiency burden and allowing it to be simpler. Something like the Lua/LuaJIT split.


I promise the ecosystem will not be considered complete until this fundamental problem is solved. That said, you can have time-to-first-plot below a hundred milliseconds right now if you put Plots into your system image - that's always been an option. System images have workflow issues, which is why they're not used more.


Sounds great, thanks! That is certainly reassuring to hear. I'm very happy to see Julia evolving.

EDIT: also, sorry for the mis-characterization of Julia developers! I may have dealt until now with users and "fanboys", not actual devs.


I wonder where you got the impression that latency and precompilation performance are not valid concerns. This has been the _main_ focus area for the devs for a long time. It's pretty much all anyone has been talking about for over a year, and serious improvements have been made.

Here's a blog post that goes into some detail about the ongoing efforts to improve compiler latency: https://julialang.org/blog/2020/08/invalidations/


> I wonder where you got the impression that latency and precompilation performance are not valid concerns.

Now that you ask it, I realise it's been mostly through a few HN interactions! Every time I raised the issue in Julia posts over the last few years, I have been consistently ridiculed by purported Julia defenders. For example, in this very thread you can find a case of that.


I think that the discussions on HN tend to be quite combative, and members of the Julia community (including myself) can get a bit worked up over negative claims that we find overly sweeping. That could lead to some dismissiveness.

I do get frustrated by characterizations of the interests of the Julia community that are very broad, and are also in evidence here. E.g. somewhere upthread one person claimed Julians are only interested in Jupyter Notebook workflows, which is a totally alien statement to me. Jupyter doesn't seem to have a very large mindshare as far as I can tell, I've never even tried it myself.

Static compilation and analysis comes up regularly on Discourse, but among the scientist part of the community, there is naturally less emphasis on this. Please don't extrapolate too far based on some isolated interactions.


> the time to first plot is unlikely to ever go below a hundred milliseconds

How is that controversial or disappointing!? Why would anyone bother optimizing this? Matlab/Octave/Python aren't any faster either.

`python3 -c "import matplotlib.pyplot as plt; plt.plot([1,2,3]);"` takes 600ms on my (powerful) workstation and that does not even include creating the plot window.

To be clear, I do believe there is much more work to be done to decrease latency in Julia, but your targets are ridiculous. And as a regular on their bug tracker and forum, I can say the devs definitely acknowledge these issues and have said many times that it is one of their main priorities.

By the way, if you want streaming live updated plots, this latency to first frame is not a problem. It is already straightforward to make such fast live plots in Julia (although it does not fit my personal taste for how to do it).


> your targets are ridiculous

Only because you are not used to somewhat fast programs:

    $ /usr/bin/time -v gnuplot -e 'set term png; plot sin(x)' > sin.png
    ...
    User time (seconds): 0.02
    System time (seconds): 0.00


Come on, it is silly to compare a full general purpose language against a special-purpose tool. Yes, grep is also better than julia at searching for a string in a file.

Julia is a terrible replacement for gnuplot and gnuplot is a terrible replacement for julia.


Why is it such a ridiculous comparison? Gnuplot is still interpreting a language, loading plotting code, etc. If Julia folks wanted to, they could bundle pre-compiled plotting code that loads as fast as memory moves bytes. They don’t want to, of course, likely because it’s inelegant, but they could, and a general-purpose language doesn’t stop them from doing that.


You can already bundle pre-compiled plotting code in your Julia sys-image if you want. But Julia is not a plotting tool so it would be ridiculous to optimize it just for plotting. I want ODE solvers to have less latency, should I start expecting gnuplot to have built-in ODE solvers or the official installer of Julia to have the ODE libraries pre-compiled?

Maybe this example would make it clearer: why does your argument not apply to Python? Should we expect Python libraries to come pre-cached so that the first time I load `sympy` I do not need to wait for tens of seconds to have .pyc files created? Or what about Matlab?

Again, I am all on board with the idea that Julia needs lower latency, and if you look at what their devs say, they agree with that too. But expecting Julia to be super low-latency (lower latency than Python/C/Matlab) for your pet task is silly.


I gave a proof-of-concept argument as to why something doesn’t need to take as long straight out of the box with no customization. Python is doing it sub-1s. You can also include a non-optimizing interpreter. My point is that being a general purpose language doesn’t inherently limit you in any way; instead it’s one’s choice of implementation strategy.

Another strategy: when a user installs Julia, they select “fast-loading” libraries. You’d be surprised how small changes in UI/UX make huge perceived differences in quality and performance. I bet “Julia can already do this” too, but nobody does it because it’s not idiomatic and it’s not recommended up front.

At the end of the day, people don’t complain about Python or MATLAB as much because they feel nicer. If it feels nicer because of some other reason than absolute time, then they’re doing something about UX that Julia is not, because everybody really does feel Julia is extremely sluggish to use.


One thing that would help a lot is if julialang.org offered downloads with a bunch of common packages baked into a sysimage (modeled roughly off of Anaconda). My rough estimate of packages to include would be Plots, BandedMatrices, DifferentialEquations, CSV, DataFrames, Flux, PyCall (and probably some others). Having a distribution with a lot of the heavy hitters built in would make workflows using those packages much snappier.


Are there some lists of most-downloaded packages?


By github stars: https://juliahub.com/ui/Packages

There are better stats available (downloads), but I do not know of a good place to play with them publicly.


>Another strategy: when a user installs Julia, they select “fast-loading” libraries. You’d be surprised how small changes in UI/UX make huge perceived differences in quality and performance. I bet “Julia can already do this” too, but nobody does it because it’s not idiomatic and it’s not recommended up front.

Don't most people already use a bit of PackageCompiler.jl in their workflows? Maybe it just needs to be mentioned more in introduction videos, but it's literally one line of code to do this and I don't see how that isn't the solution already. Most users I know have been doing it for a few years now.

For reference, with Plots and DifferentialEquations the command is:

    using PackageCompiler; create_sysimage([:Plots, :DifferentialEquations])

It's simple enough to add to every tutorial. I think the issue is that intro videos just need to be updated.
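(For completeness, if I'm remembering the current PackageCompiler API right, you give the image a path and then point Julia at it on startup; the filename here is just a placeholder:)

    using PackageCompiler
    create_sysimage([:Plots, :DifferentialEquations];
                    sysimage_path="custom_sysimage.so")

    # then launch with that image:
    # julia --sysimage custom_sysimage.so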


I'm new to Julia. I had heard of PackageCompiler, but assumed that was only for creating redistributable binaries for others. I also assumed `]precompile` was the best I could do to speed up libraries.

My suggestions: 1. Please forget about the intro videos. Those are impossible to keep up to date, and are not a good reference resource. Sure, a future video should explain PackageCompiler, but that is not sufficient. 2. Please update docs.julialang.org with all such guidelines. This is likely the first place people look, so if there are important guidelines that everyone could benefit from, this is where they belong.


Sure! That's why I set a somewhat reasonable target of being just 5x or 10x slower than a special-purpose plotting tool. But even that is considered chimerical! (Or, in your words, "ridiculous".)


Now you are just putting words in my mouth. I would completely agree that 10x latency to first plot is a reasonable target (i.e. first plot in about a second, like you get in Python, and much faster than what you get in Matlab/Mathematica). And plenty of devs closer to the core of Julia and its plotting libraries would agree.

And to be clear, I do expect my second plot to be ready in tens of milliseconds.


I'm not saying you are wrong - but why 100ms and not 50ms or 200ms or 2000ms?


> they are not dealing with it right now for whatever reasons

Eh... it's open source; maybe they are just waiting for you to take care of it. Just joking. I suspect there just aren't enough people to work on it.


I am very confused by the claim in the first sentence of the article: "On March 24, version 1.6.0 of the Julia programming language was released. This is the first feature release since 1.0 came out in 2018". How is 1.6 a "feature release", but 1.1-1.5 are not!? Especially given the enormous new set of multi-threading features in 1.3.

Edit: ah, thanks for the response; it seems I just did not know the difference between a "feature" and a "timed" release.


That’s the terminology used in the development process. The releases > 1.0 and < 1.6 are called “timed releases”. It doesn’t mean they don’t contain any new features.



