This is unfortunately playing some semantic language games to market for Julia. “type stable” multiple dispatch just means statically typed multiple dispatch (aka implementation overloading, aka specialization). It does not matter if the vtable used to map a function call to the chosen type-specific overload occurs in an otherwise dynamically typed language or not, it’s just your grandpa’s same old static multiple dispatch.
This is fundamentally no different than using fused typing in Cython. The main advantage Julia has is that the whole language can use this by default, and so something that is a restricted computation domain in Python (like numpy arrays or pandas DataFrames which both heavily use C extensions and Cython) is just a more general “normal” data type in Julia.
Whether this matters is hotly debated and open to wide disagreement. For example, I’m of the opinion that layering in micro-optimizations solely when you prove a need, and not relying on them to be abstract enough to automatically work throughout any use of the language, is a good thing. I prefer that control & distinction. I don’t want numpy ndarray to be arbitrarily subclassable with lots of custom child behavior that inherits the C extension targeted optimization - both because inheritance itself is a universally poor design concept as opposed to composition, and because the targeted optimizations are just that, targeted, and not intended to work properly for any general subclassed use case.
Julia is a very impressive language, but it irks me to no end that so much written about it reads like a marketing brochure aimed to denigrate Python or MATLAB and cheerlead Julia.
There’s a nasty arrogance in some Julia proponents’ writings on the topic, seeming to come from a place where they “know better” about hard core language design, JIT compilers, multiple dispatch... but this stuff has been heavily researched and understood and used in practice in eg Python for decades. It’s really tone deaf to say “yeah but we have such a clever multiple dispatch design” — congratulations, welcome to the 90s.
> It does not matter if the vtable used to map a function call to the chosen type-specific overload occurs in an otherwise dynamically typed language or not, it’s just your grandpa’s same old static multiple dispatch.
Not sure what you mean. vtables only allow for single dispatch in C++. For double dispatch you need the visitor pattern. For multiple dispatch, I don't know.
Closest to what Julia has in C++ is Bjarne's Open Multi-Methods article [1], but it was not implemented.
Another interesting quote from Bjarne's more recent How Can You Be So Certain article [2] is this:
> Unified function call: The notational distinction between x.f(y) and f(x,y) comes from the flawed OO notion that there always is a single most important object for an operation. I made a mistake adopting that. It was a shallow understanding at the time (but extremely fashionable). Even then, I pointed to sqrt(2) and x+y as examples of problems caused by that view.
You can use a tree for multiple dispatch (the visitor pattern is not required, not sure why you mention it), or you can use a plain vtable approach where you take all the function operands and place them in a tuple. The type of that tuple (as a single object) then determines the multi-operand dispatch in the vtable approach. Some languages do make it harder to construct these tuple types, but in principle a vtable is fine for multiple dispatch even in e.g. C++.
Sure, you can implement it yourself. But even if you create an amazing tree/hashmap-based type-signature-tuple inheritance-aware multiple dispatch object, you still have to think about how to make other packages use it.
I would also be interested how you would deal with the dynamic case of type erasure in this setup. Suppose I have a base class Base and derived classes A, B, and C. There's a set of functions
f(A*, B*, A*)
f(C*, A*, A*)
for some combinations of pointers to A, B and C. Given a
When you accuse a lot of people of "nasty arrogance", I think it's a good idea to read your own posts and try to weed out examples of the same.
You use some pretty harsh and condescending terms as you dismiss a topic quite a few people are enthusiastic about. Is that really productive? What do you get out of it?
You also make some strange technical remarks. Julia does not allow "arbitrary subclassing". It doesn't allow subtyping of concrete types at all; only abstract types can be subtyped. In fact, composition is the preferred approach in Julia, and is frequently held up as preferable to inheritance in general.
As for your preference for having good performance in only some selected parts of the language, instead of good performance in general, that seems so idiosyncratic that I'd like a better explanation. It's hard to tell if you are serious.
> it’s just your grandpa’s same old static multiple dispatch.
This is very much not the case. People are often confused about this — probably because the difference between static and dynamic dispatch in general can be confusing, never mind multiple dispatch, but also because they look so similar syntactically. They are, however, very different beasts. It's particularly worth noting that statically overloading functions on argument types is entirely equivalent to putting the argument types in the function name — it can be done completely mechanically (that's actually how C++ does it internally, via name mangling) and adds no additional expressiveness over C, which has no built-in dispatch of any kind.
Here's a discussion where someone was convinced that Julia was not actually doing dynamic dispatch, which has many posts that may help clarify the issue for you (or anyone else): https://discourse.julialang.org/t/julia-isnt-multiple-dispat.... There are a lot of posts expressing and addressing various aspects of this confusion, so it's hard to know which specific one to point to, but the bottom line is that despite the superficial similarity, static overloading and dynamic multiple dispatch are very different: the former is a purely syntactic convenience, while the latter is an entirely different language paradigm.