The litmus test for the simplicity of a programming language design is its compilation speed: if the language compiles fast, it is simple; if it compiles slowly, it is overly complex. Modern programming languages like Go and D have fast compilation, while C++ and Rust compile much slower. Go is a direct descendant of Wirth's languages, namely Modula and Oberon, while D is not, although some of its features, like nested functions, are taken directly from Pascal [1]. Interestingly, both were designed by authors with engineering backgrounds, and personally I think the simplicity is not a coincidence, since typical engineers loathe embracing any form of complexity.
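For anyone unfamiliar with the feature, a minimal sketch in Go: Go only has anonymous function literals rather than the named nested procedures of Pascal and D, but the capture behaviour is the same idea.

    package main

    import "fmt"

    // outer shows the nested-function idea: the inner function can
    // read and modify the locals of its enclosing function.
    func outer() int {
        total := 0
        add := func(n int) { total += n } // nested function capturing `total`
        add(3)
        add(4)
        return total
    }

    func main() { fmt.Println(outer()) } // prints 7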
I think that’s an oversimplification. By that measure any assembler would be simple, yet assembly is not simple at all. Most esoteric languages compile super fast, but are usually complicated by design.
Also, it’s not even a sound measure: There are C compilers that are extremely fast and some that are a lot slower but achieve better performance. Java compiles to byte code and performs compilation to machine code during execution. Interpreted languages sometimes only use an AST for execution and don’t compile to machine code at all.
Assembler is very simple as a language family. It is not simple to use.
The real challenge is combining the two.
I also think focusing on different compilers misses the point, which would perhaps be better expressed as: to what extent is a naive compiler implementation for the language likely to be fast?
E.g. Wirth's languages can be compiled in a single pass, and are fairly easy to compile without even building an AST. Some other languages can fit that too, but for many it's not that you need an AST and multiple passes just to generate good code, but that in some cases it gets increasingly impractical or impossible to compile them at all without much more machinery than Pascal or Oberon needs.
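To make "single pass, no AST" concrete, here is a toy sketch in Go (not Wirth's actual scheme, just the shape of it): a recursive-descent parser for arithmetic that emits stack-machine instructions the moment each construct is recognized, never building a tree.

    package main

    import "fmt"

    // compiler holds the source and a cursor; there is no tree,
    // only the parser's call stack.
    type compiler struct {
        src []byte
        pos int
    }

    func (c *compiler) peek() byte {
        if c.pos < len(c.src) {
            return c.src[c.pos]
        }
        return 0
    }

    // expr := term { ('+'|'-') term }
    func (c *compiler) expr() {
        c.term()
        for op := c.peek(); op == '+' || op == '-'; op = c.peek() {
            c.pos++
            c.term()
            fmt.Printf("  %c\n", op) // code is emitted here, mid-parse
        }
    }

    // term := digit { ('*'|'/') digit }
    func (c *compiler) term() {
        c.digit()
        for op := c.peek(); op == '*' || op == '/'; op = c.peek() {
            c.pos++
            c.digit()
            fmt.Printf("  %c\n", op)
        }
    }

    // digit assumes a single digit; a real compiler would scan and validate.
    func (c *compiler) digit() {
        fmt.Printf("  push %c\n", c.peek())
        c.pos++
    }

    func main() {
        (&compiler{src: []byte("1+2*3")}).expr()
        // Output (postfix instructions, produced in one left-to-right pass):
        //   push 1
        //   push 2
        //   push 3
        //   *
        //   +
    }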
Simple depends on the context. You may say programming in assembly language is simple, but it is only simple from the context of writing processor instructions; if you think high-level, like accessing fields in a struct, then programming in assembly complects (or weaves) field access with processor instructions, and it turns into a complex thing (from the point of view of accessing fields in a struct).
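To illustrate the weaving with a small Go sketch (the struct here is made up): the high-level expression p.Y hides the base-plus-offset arithmetic that an assembly programmer has to spell out in every load.

    package main

    import (
        "fmt"
        "unsafe"
    )

    // Point is a hypothetical struct, purely for illustration.
    type Point struct {
        X int32
        Y int32
    }

    func main() {
        p := Point{X: 1, Y: 2}
        // In assembly, "p.Y" must be written as a load from the base
        // address of p plus Y's byte offset; the compiler computes that
        // offset for you, which is what unsafe.Offsetof exposes here.
        fmt.Println(p.Y, "lives at byte offset", unsafe.Offsetof(p.Y)) // offset 4 on typical targets
    }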
> The litmus test for simplicity of a programming language design is its compilation speed, if the language compile fast it is simple but if the language compile slow it is overly complex
No. OCaml, for example, has a really fast compiler (like Go), but I would not call it simple. It does have PPX (PreProcessor Extensions), which are like macros, so you can't "blame" the lack of them either.
And everything that uses LLVM is _always_ going to be _slow_, no matter the language.
More to the point, OCaml belongs to the ecosystems that follow the golden route of offering multiple implementations, allowing you to pick the best one for each workflow.
If Rust had an interpreter in the box alongside the compilers, like OCaml does (there are multiple ones as well), it would already make a big difference in development workflows.
Is there any particular reason for LLVM being slow? Does it do a lot of complicated optimizations when generating code, or is it designed in a way that makes it slow?
I’ve heard it used to be lean and fast. Then new developers came in, new features were implemented, and it bloated over time. Thus it wasn’t designed in a way that makes it slow. It grew in a way that makes it slow.
Someone from the panel clearly mentions that clang and GCC have become slower as some big names have removed their support from the projects (most likely meaning Apple and Google).
A possible reason is the use of Static Single Assignment (SSA). While it makes many optimizations easier, the compiler has to translate its IR out of SSA again to generate code. This is very expensive, as it needs to compute dominance on the control-flow graph, insert copies, and much more. But it's just a guess.
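To make that guess concrete, here is a tiny Go function annotated with the SSA form a compiler would construct for it; the x0/x1/x2 names are illustrative, not any compiler's real output.

    package main

    import "fmt"

    // pick is a minimal function whose SSA form needs a phi node: the
    // value of x at the merge point depends on which branch executed.
    func pick(cond bool) int {
        x := 0 // SSA: x0 = 0
        if cond {
            x = 1 // SSA: x1 = 1
        }
        // SSA: x2 = phi(x0, x1) -- before emitting machine code the
        // compiler must eliminate this phi by inserting copies on the
        // incoming edges, which is where the dominance computation and
        // copy insertion mentioned above come in.
        return x
    }

    func main() { fmt.Println(pick(true), pick(false)) } // prints: 1 0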
> The litmus test for simplicity of a programming language design is its compilation speed
Simple for programmer to understand and use (small, orthogonal, consistent) isn't always going to be the same as simple for the compiler writer to implement.
> Go is a direct descendent of Wirth's languages namely Modula and Oberon
This assumption comes up again and again, but the evidence is rather poor. There are very few and only marginal similarities. The most obvious is the receiver syntax of the bound procedures. But this can be found in Mössenböck's Oberon-2, not in Wirth's Oberon. Although Wirth was a co-author, he ignored any innovations in Oberon-2 in his subsequent work. Go has a completely different focus and is essentially a further development of Newsqueak; it's definitely not a "direct descendant" of Modula or Oberon.
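For anyone who hasn't seen it, the resemblance in question is roughly this; a minimal Go sketch, with the Oberon-2 line from memory and only approximate.

    package main

    import "fmt"

    // Oberon-2 declares a type-bound procedure with an explicit receiver,
    // roughly:  PROCEDURE (r: Rect) Area(): INTEGER;
    // Go puts the receiver in the same position, before the method name:
    type Rect struct{ W, H int }

    func (r Rect) Area() int { return r.W * r.H }

    func main() { fmt.Println(Rect{W: 3, H: 4}.Area()) } // prints 12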
Part of the reason it persistently comes up is that Robert Griesemer got his PhD under Mössenböck and Wirth, and people not paying very close attention would probably also see the Oberon-2 connection as a confirmation via an indirect step rather than as an argument against the claim.
Yes, Mössenböck was his PhD supervisor (Wirth was only co-examiner). Personally I consider Oberon-2 a better language, but there are hardly any applications of bound procedures; particularly not in the Oberon systems developed at ETH, and surprisingly few in Linz Oberon either. And Active Oberon followed the more conventional Object Pascal approach.
It would be nice to have actual benchmarks of compilation speed for equivalent programs in different languages, rather than just the runtime performance that is typical in language shootouts. To me, Go has a surprisingly high compile time for a simple hello-world program (about 200ms, and it generates a 2MB binary). But I suppose that is a fixed overhead, and perhaps it scales well.
Generally though, I'm disappointed if hello, world takes more than 20ms to compile -- which is of course true of pretty much every popular language.
The problem is that a hello world isn't sufficient to identify compilation speed; you'd need a program with thousands of lines that does barely anything. And then you're fighting IO as well, although that could be fixed by putting the program in /dev/shm first and running it.
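A hedged sketch of that setup in Go (the file name and function count are arbitrary): generate a large but trivial source file in /dev/shm, then time the compiler on it.

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // /dev/shm keeps the file in RAM (Linux-specific), per the
        // suggestion above, so disk IO doesn't pollute the timing.
        f, err := os.Create("/dev/shm/big.go")
        if err != nil {
            panic(err)
        }
        defer f.Close()
        fmt.Fprintln(f, "package main")
        for i := 0; i < 5000; i++ { // 5000 is arbitrary; scale as needed
            fmt.Fprintf(f, "func f%d(x int) int { return x + %d }\n", i, i)
        }
        fmt.Fprintln(f, "func main() { println(f0(1)) }")
    }

Then "time go build /dev/shm/big.go" (or the analogue for another compiler) gives a scaling number that hello world can't.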
> The litmus test for simplicity of a programming language design is its compilation speed
From a compiler's perspective, sure. Not being a compiler, I find that metric not very relevant.
Simplicity of a language is better gauged by how easy it is to express complex things in it, and how difficult it is for one person to comprehend what another wrote.
> From a compiler's perspective, sure. Not being a compiler, I find that metric not very relevant.
I'm afraid you're just showing a bit of survivorship bias. I can assure you that compilation speed is a critical trait of any programming language, and the only people who don't understand this are those who benefit from all the work invested by predecessors who did.
Think about it for a moment: what would your turnaround speed be if, every single time you had to build your project to test a change, you had to brace yourself for a wait of over an hour? After a week of enduring that workflow, how much would you be willing to pay to drive that build time down to 10 minutes, and how much more would you fork out to need only 1 minute for an end-to-end build?
Compilation speed can be an important trait of a programming language (or more precisely, a dev env / buildchain). I remember writing code in M68000 assembly; the compile step was lightning fast because you didn't need one. I do also remember going near cross-eyed tracing code flow down narrow columns of vaguely varied yet similar-looking statements -- hours upon hours!
If your daily task build is taking over an hour on modern hardware, it's likely you have organizational problems masquerading as tech debt. No language choice will prevent that; good technical leadership can help mitigate it.
Thankfully C++ modules are on the right path to improve the story on the C++ side.
Using the C++23 standard library, alongside modules, and conan/vcpkg binary caches for 3rd-party libs is quite fast.
Rust, well, until cargo offers the option to use binary libraries, it will always lag behind what C++ tooling is capable of. Maybe if sccache becomes part of it.
> Using the C++23 standard library, alongside modules, and conan/vcpkg binary caches for 3rd-party libs is quite fast.
I don't think your assertion makes sense. The only thing that conan/vcpkg bring to C++ is precompiled dependencies which you don't have to build yourself. You would get the same result if you built those libs yourself, packaged them in a zip file somewhere, and unpacked that zip file into your project tree whenever you had to bootstrap a build environment. The problems that conan/vcpkg solve are not build times or even C++; they are tied to the way you chose to structure your project.
With C++, you get a far greater speedup if you onboard a compiler-cache tool like ccache and organize your project around modules that don't needlessly cause others to recompile whenever they are touched.
[1] Nested function: https://en.wikipedia.org/wiki/Nested_function