Why Does Julia Work So Well? (ucidatascienceinitiative.github.io)
61 points by Tomte on Oct 21, 2020 | 83 comments



Hey, I guess OP here. These are part of the workshop notes from 2017 (with a few updates, but mostly left intact) on "A Deep Introduction to Julia for Data Science and Scientific Computing". It's a full day 8am-3pm workshop (with extended problem exercises to take home!) that can be found here:

http://ucidatascienceinitiative.github.io/IntroToJulia/

The section that was pulled out to be the heading link here was the portion that comes after first explaining the semantics of the language. The "Why Julia?" section is an explanation of why Julia is fast, but also of the engineering trade-offs that were made to get it done (and thus the drawbacks of the approach). That is the page that is linked, but if you're curious about the greater context of this page (and learning the language), do check out the rest of the notes. I will say that they are a bit old (one comment noted that a compiler specialization on literals now handles one of the cases I pointed out in the trade-offs) but for the most part they are still a good introduction to the Julia programming language. Surprising to have people tell me this ended up on Hacker News, but exciting to see!


> Type stability is the idea that there is only 1 possible type which can be outputted from a method.

> This right here is multiple-dispatch: the * operator calls a different method depending on the types that it sees.

> If you have type stability inside of a function (meaning, any function call within the function is also type-stable), then the compiler can know the types of the variables at every step.

> Type-stability is not the only necessity. You also need strict typing.
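
For instance (my own made-up functions, not from the article), a type-unstable function is one whose return type depends on a runtime value, while a stable one always returns the same type:

    unstable(x) = x > 0 ? 1 : 1.0    # returns Int or Float64 depending on the value of x
    stable(x)   = x > 0 ? 1.0 : 0.0  # always returns Float64, so the compiler can specialize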

Maybe it's time for me to embrace dynamic languages.


Julia is dynamic. I've described this before, but the type annotations are used for multiple dispatch, not for static type checking.

The quick example of why this is powerful is multiplication (assume you only have addition defined):

    # Quick and dirty baseline (in real code you'd `import Base: *` to extend the existing function)
    function *(l::T, r::T) where {T<:Integer}
        out = 0
        while l > 0
            out += r
            l -= 1
        end
        out
    end

    # Great, now let's say you have a data type, rational numbers:
    struct Rational{T<:Integer}
        numerator::T
        denominator::T
    end

    # And you want to define multiplication on rational numbers
    *(l::Rational, r::Rational) = Rational(l.numerator * r.numerator, l.denominator * r.denominator)

    # Cool, but multiplying Ints by rational number should also be defined
    *(l::Rational, r::Int) = Rational(l.numerator * r, l.denominator)
    *(l::Int, r::Rational) = *(r, l)
The reason this is so cool, and why, yes, it's still dynamic, is that I can call x * y on two variables, x and y, about which I know nothing. The most specific defined method which matches their types will be called to handle it. If there isn't one defined, that's obviously a method error, just like in your favorite other language!
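
To make it concrete, continuing the sketch above (and noting that if you paste this into a real session it may clash with Base's own Rational and *):

    x = Rational(1, 2)
    y = 3
    x * y    # Rational(3, 2), via *(::Rational, ::Int)
    y * x    # same result, via the *(::Int, ::Rational) forwarder
    x * x    # Rational(1, 4), via *(::Rational, ::Rational)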


I've seen a fair number of Julia posts on HN lately without any big recent developments in the language. It's a fine language, but there is a type of preachy, blinkered hype that comes across as arrogant and is extremely off-putting. Not quite at the level of Rust, but it's getting there.


It's rather unclear how to respond to this. Suppose I find (which I do) that:

* Julia is a joy to use and fits my use case and/or personal development style really well;

* Other tools have not been such a natural match to my needs;

* Julia has its fair share of limitations, and someone else with different priorities would quite reasonably not find it an appropriate tool;

* Julia is not widely known, and other people may benefit from discovering something about Julia, just as I did.

How should I share my enthusiasm (and, with luck, bring Julia to someone who would enjoy it) without promulgating preachy, blinkered hype? Do I have to wait for a big language release?

The parent comment hints at a knife's edge that must be walked when telling people about Julia. Anything like "I use Julia for X" or "in Julia you can do Y this way" is--understandably!--met with "okay, but why Julia at all; why not a more mainstream tool like Python?" But if you elaborate with "I use Julia for X because I find that in this case Julia > Python for reason Z," you risk being labeled an anti-Python zealot.

So, serious question to the parent comment author or others who feel similarly: what are some non-off-putting ways in which the Julia community could share its enthusiasm or experience with a broader audience?


Well, as someone who has been lightly interested in Julia lately, this is exactly the sort of content I'm interested in. It feels like there are many of us.

My own impression of the Julia community has been rather pleasant. I think they are actually almost too humble and comfortable with their niche. Rust is... the polar opposite?

YMMV.


There's no coordinated effort to do this.

These are all independent pieces that were written at different times by different people.

Where do you see the arrogance in the OP? It's showing exactly what's being claimed.


"Why does Julia work so well", "The unreasonable effectiveness of Julia" tbh read as pure fanboyism at best, but more realistically as an effort to drum up a language that could need some attention, or getting "web mentions" for some hidden investment agenda.


"The Unreasonable Effectiveness of X" is a reference to the well-known 'the unreasonable effectiveness of mathematics in the natural sciences'. It's turned into some sort of meme. Maybe you've seen 'the unreasonable effectiveness of recurrent neural networks' before.

What you were referring to is a talk by a core developer at JuliaCon 2019 titled 'The unreasonable effectiveness of multiple dispatch' [1], which has less of a fanboy-vibe imho.

Edit: Oh. I didn't notice the arstechnica article had this exact title. That's a bit cheap...

[1] https://www.youtube.com/watch?v=kc9HwsxE1OY


I suspect that the title of the Ars Technica article was a reference to the talk title, which was a reference to many other pieces with similar titles, which are all ultimately references to the original one.


As someone who’s actually used the language (sigh), the “hype” around Julia is more excitement about a modern language that’s hitting a broad sweet spot in a unique way.

Julia is a delightful combination of ease of use, clarity, and performance. It lets one concentrate much more on the problem at hand than language related boilerplate (I’m looking at you, Rust).

I’m hopeful that Julia will become one of the most used general purpose programming languages in the long run. It’s certainly got what it takes!


This was part of a tutorial series for a data science program

The "unreasonable effectiveness" article wasn't, to my knowledge, by a community member. It's normal for magazines to have enthusiastic headlines.

You're making a lot of assumptions here that aren't very kind or well supported.


But the article is literally about why it optimizes as well as it does, and what are the engineering trade-offs. In the first section:

>But what we will see in this example is that Julia does not always act like other scripting languages. There are some "lunches lost" that we will have to understand. Understanding how this design decision affects the way you must code is crucial to producing efficient Julia code.

This is more about how the compilation process works, both to help people understand how to write optimal code and to understand what optimizing compilers need in order to work. The engineering trade-offs, like slow performance of globals, errors in functions that would change types, etc., are all demonstrated. If you read a full description of the engineering trade-offs and still think it's a marketing tool, then I suppose you think it made the right trade-offs? I don't understand why that would be bad.


I was making a general comment about the community. I don't have a problem specifically with the original post (which is several years old now).


A "general comment" that is name calling without specifics, suggestions, or examples isn't very constructive.


We've had the exact same thing in the past with other hot new languages and frameworks. I guess lots of people are discovering Julia now, and that's also relevant to HN.


You could post a list of topics that are frequently posted on HN and mark each topic as positively commented on or negatively commented on. Maybe add a trending graph over time too.

Edit: I’m not being snarky. Such a list or view could be useful to get the pulse of the times on HN.


I’ve thought for a while that it would be fun to research which users post the most with respect to which subject. I think if you had a time series of HN hot topics and a graph of evangelists, then maybe some interesting patterns would appear. Covariance between posters would be especially interesting to me.


I think you are confusing hype and buzz. Those are different things.


I agree, and I think it's worse than Rust because Rust at least mostly lives up to the hype, and the main caveats (compile time, borrow checker complexity, etc.) are well known at this point.

How many people know that loading the graph plotting library takes 30 seconds, and that is working as intended?

https://github.com/GiovineItalia/Gadfly.jl/issues/251

I've tried to use Julia a number of times and the experience is always worse than Matlab.


"the", graph plotting library? There are several and gadfly isn't the most popular.

Compile times have dropped precipitously: around 5 seconds on master for Plots.jl, and less than one second if you use the package compiler.

This doesn't purport to be a review of Julia; it's one small part of a tutorial series, so I'm not sure why you are expecting a discussion of every trade-off.

Also, what part of Julia is being oversold here?


> and that is working as intended?

I cannot reconcile your claim with those quotations from the GitHub issue:

> It's frustrating, but I don't think it's a permanent situation. A lot of work has already been done on pre-compling Julia code, and when that's extended to packages, things should be much better.

> Making this stuff more convenient is definitely desirable and will happen.

> Agreed. But do keep in mind that the procedure I outlined allows you to avoid precisely this problem, with very little effort

> That would be nice. It will happen when someone makes it happen.


The issue is from 2014...

For "recent" development, there's a lot of work going into fixing latency issues and improving load times of packages generally: https://github.com/JuliaLang/julia/pulls?q=is%3Apr+label%3Al...


It's totally fair and honest to take an issue from 2014(!) and go "well, this is why X sucks".


According to GitHut, usage is declining. I wouldn’t read too much into it, but it is one empirical measure.

https://madnight.github.io/githut/#/pull_requests/2020/3

I assume the marketing is to try and revitalize things. I’m not clear on what the Julia developers’ profit model is, but I would assume they have incentives to promote public interest.


Download metrics show usage doubling every year.

As I said above, there's no coordinated effort to do this.

These are all independent pieces that were written at different times by different people.


That only indicates downloads, though. One user might download many times for various reasons (containers come to mind). Does the Julia compiler have telemetry like .NET Core that monitors its users?

I would also be quite shocked if whoever spent $5M in seed capital for JC would not expect or demand some amount of digital marketing.


What a weird metric to use. It's not growing as fast as Python and JavaScript, so the stats go down, and therefore the conclusion is that its usage is declining? I don't think it works that way.


The source data shows a decline in absolute numbers for pushes: 1,432 last quarter vs. a peak of 19,073 in Q3 2015. It seems to flatten out in 2018 and stay in the 1,500-3,000 range.


Most of the work moved to the package ecosystem as the standard library shrank, with many of the things in there moving to separate packages (for example, even the package manager is a separate package: https://github.com/JuliaLang/Pkg.jl). The result is that what you now see in JuliaLang/julia is mostly compiler work (and some tweaks to Base things like linear algebra overloads). With that said, it's quite an actively developed compiler!


JuliaCon 2019: 500 attendees. JuliaCon 2020: 20,000 attendees.

Doesn't seem to be declining.


Was JuliaCon 2020 remote? That probably has more to do with it. I can’t possibly imagine there are 40X more active users in one year. The niche it serves isn’t particularly large, either.


Yes, it was completely online. The comparison doesn't make sense.

Also, next time it's IRL the numbers would seem to indicate a huge setback, so I wouldn't use those :D


> ∇f(u) = α*u; ∇f(2)

> sin(2π)

Is that legal Julia? That's pretty awesome, but how do I type this? I think I need a new keyboard.


It's legal Julia.

Julia-aware editors, such as VS Code with the Julia plugin, support LaTeX commands and will autocomplete them and convert them to the Unicode character.

For example, if you write \nabla it will offer autocompletion and write ∇.

I think it's a neat feature and makes a lot of sense in scientific computing contexts, but I feel I wouldn't like a codebase that abuses it too much. Sprinkled here and there, it can help make code more readable, succinct and pleasant.


I think it's great for mathematicians, but I shudder to think what some Scala library builders might do if they get their hands on this.


> but I feel I wouldn't like a code-base that abuses it too much

This is why I think it was a poor design choice to include the feature and actively encourage its use. The benefit of potentially more succinct code is far outweighed by having to work with a codebase written entirely in Greek, and there's no way to enforce moderation.


It's impossible to impose moderation in any language and you don't need unicode characters to write stupid code. Java programmers were writing HelloWorldFactory classes way before Julia existed.


The style guide for the base language is to generally stick to ASCII, and many of the big packages do that too. It's really nice in examples, analysis scripts and the like, though.


The REPL (and most tooling I think) supports LaTeX-like abbreviations that autocomplete to the corresponding Unicode symbol, so no need for a special keyboard.

See here https://docs.julialang.org/en/v1/manual/unicode-input/


You can write \pi and then press tab. It works in the Julia REPL, and also in most editors that have Julia plugins.

Works for all greek letters, and other unicode characters.


Every Western-style input method should come with a Compose function enabled, and a simple tutorial on how to use it.

For example, on Windows I use Wincompose [1], that lets me type things like "⋄*g" for Greek gamma: 'γ', or "⋄ee" for 'ə', or even "⋄⋄plane" for a '' (and of course the list is customizable). The "⋄" symbol represents the Compose key, which I chose to map to the Caps Lock key on my keyboard.

[1] http://wincompose.info/

Edit: HN removed the airplane icon.


I've been thinking about a dynamic keyboard where each key is a tiny oled screen that shows what character you're going to type in this particular mode. And then you're free to switch from regular to math to Greek to Chinese if you want, and the keyboard simply adjusts.


Such as Optimus Maximus, for example?

https://www.youtube.com/watch?v=qj7GYU-wedo


Julia's design is like Haskell's monads. It requires a bit of effort to sort out exactly what Julia's value proposition is. That's perhaps why people have the urge to write a blog post to explain what a monad is, ahem, I mean to explain why Julia is a paradigm shift in programming languages and something worth checking out, once they've figured it out.


There's some truth in this. I think many of its enthusiasts aren't jaded multi-lingual programmers evaluating a new tool among many. They had some other problem they care about, for which Cython/Numba or whatever was obviously an awful kludge, and eventually they got tired of fighting it & looked around. And then discovered that this new tool they found was a beautiful thing in its own right, and they want to tell people.


Does anyone know why the text claims that "2^-5" is an error, but the output shows it returning 0.03125?


It used to error, but exponentiation with negative literal integers was made to work. Perhaps the notebook hasn't been updated since then.


I would be interested to read how they made it work, given that it seems to violate the principle of type-stability. Did they just implement the slow runtime type check, or did they do something clever?


I think it's parsed specially, but I could be wrong.

Definitely no runtime check


It goes to `literal_pow`; one way to check is to overload it:

    Base.literal_pow(::typeof(^), x::Int, ::Val{n}) where {n} = 
        (println("power $n"); float(x)^n)
    pow = -5
    2^-5    # yes
    2^pow   # no, hence an error


Yes indeed: these are workshop notes from 2017, so they're a little old. In fact, it's surprising to see them on Hacker News, haha.


Question - because I haven't looked into Julia yet:

What's the main selling point of Julia, over Python?


For me the main selling point is speed. I had coded a simulation in Python which was slow (the whole thing would have taken days to run). This version did not use Numpy. Then I recoded it in Python so that I could use Numpy; just to be able to use Numpy I had to use arrays, where they did not seem natural for the problem I was solving. The Numpy version gave me a 5x speedup over the original Python version. I then coded it in C, which was about 20 times faster than the Numpy version (and 100 times faster than the original). Then I coded it in Julia, which was much easier to write than either the C or the Numpy version. I could use loops where they felt more natural to me. It was essentially like the C code, but using many of the high-level functions that Julia provides. It was not the least bit more difficult than the original Python version; I would even say it was a bit easier. The Julia code ran just as fast as the C.

Other benefits (not over Python, but I believe worthwhile when choosing a language): very friendly community to beginners. You ask simple questions and get answers without any attitude. As I noted above, many high level functions that make dealing with data much easier.

If you are going to try it out, I would recommend using the long-term support (LTS) release rather than the current stable release. With the latter, I have had issues where some packages often don't work for me. With the LTS release, I have yet to experience that. Maybe I am doing something wrong, but for me the LTS release works great.


The power of Lisp but without parentheses, so Lispers can have something to complain about.

But seriously, for me it's something similar to Python vs Ruby (I used to prefer programming in Ruby before Python became the default ML language and I pretty much had to change). Like Ruby, Julia is a language that I can adapt to the problem instead of having to adapt the problem to the language. In Python there is only one obvious way of doing it (at the higher level) no matter what the problem is, usually some hard-to-read chain of pandas transformations followed by a not-much-better numpy/jax or pytorch method. In Julia I feel less restricted: if a problem is clean using loops and arrays I do it with loops and arrays, if it's clean with vectorization I do it with broadcasting (quick sketch at the end of this comment), and if someone enabled some macro that looks almost like a description of the problem I can use that as well.

And the result ends up not being what people fear, that having many ways to do something means the programmer will abuse it and write unreadable code golf. Having a more natural approach to each problem makes it easier for me to read after it's done; it's not language A twisted into language B (like those pandas operations), but something that is close to a direct representation of language A that I don't need to convert mentally, or even with a dictionary.
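
A tiny illustration of the loops-vs-broadcasting point (made-up function names):

    # Same computation two ways; both compile to fast native code,
    # so you pick whichever reads more naturally for the problem.
    function squared_plus_one_loop(x)
        y = similar(x)
        for i in eachindex(x)
            y[i] = x[i]^2 + 1
        end
        return y
    end

    squared_plus_one_bcast(x) = x.^2 .+ 1

    v = rand(5)
    squared_plus_one_loop(v) ≈ squared_plus_one_bcast(v)   # true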


The main selling point is the speed. Julia is high performance - in the same class as C, C++, Rust etc (perhaps 1.5x slower on average, but still pretty fast). This allows you to write all your code in Julia rather than just the surface level wrapper.

Then, because it's 20 years younger than Python, it has tonnes of ergonomics you would expect of a modern language: A nice, built-in package manager. Good code introspection. Easy ability to call C. An actually usable REPL. Built-in unit testing etc.
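
For the "easy ability to call C" point, the classic example from the manual is calling libc's clock with no wrapper code or build step (assuming a platform where the symbol resolves, as on typical Linux/macOS setups):

    t = ccall(:clock, Int32, ())   # directly call the C function clock()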


There are several: https://arstechnica.com/science/2020/10/the-unreasonable-eff...

Also comes with some, but relatively light, trade-offs for what's gained, IMO. The worst of the trade-offs, compile overhead, is mostly not intrinsic to Julia and is being reduced every release and by PackageCompiler.jl.


Of course it's intrinsic to Julia. What do you mean? Julia is a JIT-compiled language, and so gets the curse of JIT-compilation: Compilation latency. That's unavoidable*

* Okay, technically, someone could write an interpreter for Julia, or do streaming compilation, but I wouldn't hold my breath for this to actually happen.


Julia already has an interpreter, and the core team has plans for tiered compilation.

Also, the JIT compile time can be minimized to the point where it's a very minor thing. Smarter and slimmer static compilation is planned as well.


Speed. Multiple dispatch. A language actually designed for numerical work and data science and all that, instead of a language that accidentally found a niche there. A chance to look at the state of Python and fix its pain points.


S was designed for numerical work/data science, so R inherited lots of this.

To be fair, this was back in the seventies, so it's good we're getting another language designed for this problem space.


Native performance for things you run repeatedly.


It may sound silly, but one thing that's holding me back is the "end" keyword at the end of every construct, which seems unnecessarily verbose (and seems inconsistent, as it isn't paired with a corresponding "begin").


It's not so pretty, but what are the alternatives? Making indentation meaningful has downsides (copy-pasting can go very wrong), and using up ASCII characters just for this means they can't do other things (like the braces in Vector{Int} type parameters), and a closing brace still often takes up a line anyway.

You can avoid it quite a bit, e.g. f(x,y) = (z=x+y; sqrt(z)) defines a function, with two statements separated by ;.


I think the C/C++ convention with curly braces is ok, and doesn't make the parser much more complicated. Any closing brace has an obvious matching opening brace. And the meaning of any opening brace can be easily deduced from context, I suppose.


While I like do...end for symmetry, most of the time the end will have the same indentation as whatever opened it, and everything in between will be one indentation ahead (especially if you're using an automatic formatter), so I can follow easily without reading the content. And whatever opened it will also explicitly tell the context (if it's a method definition, if/else, for...).

In the end my code ends up like Python but with an extra end, which, while adding a possibly unnecessary line of code, at least makes it trivial to copy and paste something into the REPL (or another part of the code) without worrying about indentation.
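
A small sketch of what that looks like (hypothetical function; each end lines up with whatever opened it):

    function clamp01(x)
        if x < 0
            0.0
        elseif x > 1
            1.0
        else
            x
        end
    end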


Other things are OK too. I quite like that if/elseif/else/end only needs one closing, no other openings.

I don't know whether using {} for both types & blocks would be hard to parse, maybe it's possible. I have heard & can believe that parsing <> as a 4th ascii bracket is pretty tricky.


> I have heard & can believe that parsing <> as a 4th ascii bracket is pretty tricky.

That's probably because < can appear without >, e.g. in a<b.


Personally, I much prefer end to braces.

They also have the benefit of being usable in indexing and array expressions.
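
e.g. (quick sketch):

    v = collect(1:10)
    v[end]       # 10, the last element
    v[end-1]     # 9
    v[2:end]     # everything after the first element
    A = rand(3, 3)
    A[end, end]  # bottom-right entry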


Julia uses `end` because it is derived from MATLAB which also uses `end`, it's not to save ASCII characters.


AFAIK it is the opposite of what you have said. And although Julia takes a lot of inspiration from Matlab syntax, it is absolutely not derived from it. https://julialang.org/blog/2012/02/why-we-created-julia/


There are many fine languages that have used `end` to delimit blocks, including: Pascal — an elegant, classic language (and my first, personally); Ruby — another gem (get it). And yes, also Matlab, which is hardly unique in this respect.


Right, but Julia is not derived from Ruby or Pascal.


Julia isn't derived from Matlab either and was influenced as much by Ruby as by Matlab.


> derive: base a concept on an extension or modification of (another concept).

Obviously I didn't mean the implementation was based on MATLAB because that would be impossible.


Then why does MATLAB do it?


Initially I was wary of this as well, when I tried out Elixir, which is said to have syntax similar to Ruby's.

In the end it is a question of what kind of delimiters you want to use for your expressions. Some languages use end keywords, others parentheses of whatever kind, and yet others use indentation and whitespace instead, which perhaps could also be considered a form of delimiter. Having visible and matching delimiters, or support for recognizing which end delimiter matches which start delimiter, is essential for editing code efficiently, in my opinion.

I use a lot of Python, but the whitespace usage and non-visible delimiters are one of the downsides of the language syntax for me. Editing code when everything is an expression and is clearly delimited by matching parens is so much more efficient and fun.


This is unfortunately playing some semantic language games to market Julia. “Type stable” multiple dispatch just means statically typed multiple dispatch (aka implementation overloading, aka specialization). It does not matter if the vtable used to map a function call to the chosen type-specific overload occurs in an otherwise dynamically typed language or not, it’s just your grandpa’s same old static multiple dispatch.

This is fundamentally no different than using fused types in Cython. The main advantage Julia has is that the whole language can use this by default, and so something that is a restricted computation domain in Python (like numpy arrays or pandas DataFrames, which both heavily use C extensions and Cython) is just a more general “normal” data type in Julia.

Whether this matters is hotly debated and open to wide disagreement. For example, I’m of the opinion that layering in micro-optimizations solely when you prove a need, and not relying on them to be abstract enough to automatically work throughout any use of the language, is a good thing. I prefer that control & distinction. I don’t want numpy ndarray to be arbitrarily subclassable with lots of custom child behavior that inherits the C extension targeted optimization - both because inheritance itself is a universally poor design concept as opposed to composition, and because the targeted optimizations are just that, targeted, and not intended to work properly for any general subclassed use case.

Julia is a very impressive language, but it irks me to no end that so much written about it reads like a marketing brochure aimed to denigrate Python or MATLAB and cheerlead Julia.

There’s a nasty arrogance in some Julia proponents’ writings on the topic, seeming to come from a place where they “know better” about hard core language design, JIT compilers, multiple dispatch... but this stuff has been heavily researched and understood and used in practice in eg Python for decades. It’s really tone deaf to say “yeah but we have such a clever multiple dispatch design” — congratulations, welcome to the 90s.


> It does not matter if the vtable used to map a function call to the chosen type-specific overload occurs in an otherwise dynamically typed language or not, it’s just your grandpa’s same old static multiple dispatch.

Not sure what you mean. vtables only allow for single dispatch in C++. For double dispatch you need the visitor pattern. For multiple dispatch, I don't know.

The closest thing C++ has to what Julia offers is Bjarne's Open Multi-Methods article [1], but it was not implemented.

Another interesting quote from Bjarne's more recent How Can You Be So Certain article [2] is this:

> Unified function call: The notational distinction between x.f(y) and f(x,y) comes from the flawed OO notion that there always is a single most important object for an operation. I made a mistake adopting that. It was a shallow understanding at the time (but extremely fashionable). Even then, I pointed to sqrt(2) and x+y as examples of problems caused by that view.

This "mistake" is something Julia gets right.

[1] https://www.stroustrup.com/multimethods.pdf

[2] http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2019/p196...


You can use a tree for multiple dispatch (the visitor pattern is not required; not sure why you mention that), or you can use a plain vtable approach where you take all the function operands and place them in a tuple. The type of that tuple (as a single object) then determines the multi-operand dispatch in the vtable approach. Some languages do make it harder to construct these tuple types, but in principle a vtable is fine for multiple dispatch even in e.g. C++.


Sure, you can implement it yourself. But even if you create an amazing tree/hashmap-based type-signature-tuple inheritance-aware multiple dispatch object, you still have to think about how to make other packages use it.

I would also be interested in how you would deal with the dynamic case of type erasure in this setup. Suppose I have a base class Base and derived classes A, B, and C. There's a set of functions

    f(A*, B*, A*)
    f(C*, A*, A*)
for some combinations of pointers to A, B and C. Given a

    std::vector<std::tuple<Base*,Base*,Base*>>
how do you apply the correct f to all its values?
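
For comparison, a hypothetical Julia rendering of that exact setup (names made up) is just:

    abstract type Thing end
    struct A <: Thing end
    struct B <: Thing end
    struct C <: Thing end

    f(::A, ::B, ::A) = "A,B,A method"
    f(::C, ::A, ::A) = "C,A,A method"

    # element types are "erased" to the abstract Thing, like the Base* tuples above
    xs = Tuple{Thing,Thing,Thing}[(A(), B(), A()), (C(), A(), A())]
    [f(x...) for x in xs]   # each call dispatches on the runtime types of all three arguments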


When you accuse a lot of people of "nasty arrogance", I think it's a good idea to read your own posts and try to weed out examples of the same.

You use some pretty harsh and condescending terms as you dismiss a topic quite a few people are enthusiastic about. Is that really productive? What do you get out of it?

You also make some strange technical remarks. Julia does not allow "arbitrary subclassing" (or subtyping). It doesn't allow subtyping at all, except of abstract types. In fact, composition is the preferred approach in Julia, and is frequently held up as preferable to inheritance in general.

As for your preference that it's better to have good performance for only some selected parts of the language, instead of good performance in general, that seems so idiosyncratic that I'd like a better explanation. It's hard to tell if you are serious.


> it’s just your grandpa’s same old static multiple dispatch.

This is very much not the case. People are often confused about this — probably because the difference between static and dynamic dispatch in general can be confusing, never mind multiple dispatch, but also because they look so similar syntactically. They are, however, very different beasts. It's particularly worth noting that statically overloading a function on argument types is entirely equivalent to putting the argument types in the function name — it can be done completely mechanically (that's actually how C++ does it internally, via name mangling) and adds no additional expressiveness over C, which has no built-in dispatch of any kind.

Here's a discussion where someone was convinced that Julia was not actually doing dynamic dispatch, which has many posts that may help clarify the issue for you (or anyone else): https://discourse.julialang.org/t/julia-isnt-multiple-dispat.... There are a lot of posts expressing and addressing various aspects of this confusion, so it's hard to know which specific one to point to, but the bottom line is that despite the superficial similarity, static overloading and dynamic multiple dispatch are very different: the former is a purely syntactic convenience, while the latter is an entirely different language paradigm.
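
A minimal sketch of the distinction (hypothetical types, not from the linked thread): with static overloading the compiler must resolve the call from the static types at the call site, whereas here the method is selected from the runtime types of all arguments:

    struct Cat end
    struct Dog end
    meets(::Cat, ::Cat) = "ignore"
    meets(::Cat, ::Dog) = "hiss"
    meets(::Dog, ::Cat) = "chase"
    meets(::Dog, ::Dog) = "sniff"

    pets = Any[Cat(), Dog()]                 # the static element type is Any
    [meets(a, b) for a in pets, b in pets]   # 2x2 matrix; each call is dispatched at runtime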



