Here is a historical note for those who might be interested. In 1979 I did a post-doc with Rod Burstall. I helped implement the programming language Hope, which was the first functional programming language to use pattern-matching in function definitions. The implementation was written in an older language called POP-2. Robin Milner and his group were at the time working on LCF and ML, and they eventually incorporated Burstall's pattern ideas. Eventually, I recall, there was a decision within the UK functional programming community to consolidate their efforts into what became Haskell. And of course we see how much Haskell has influenced the newest crop of imperative languages.
It is interesting (to me) to compare factorial in Rust with factorial in Hope:
Rust:
fn factorial(i: u64) -> u64 {
    match i {
        0 => 1,
        n => n * factorial(n - 1)
    }
}
Hope:
dec factorial : num -> num;
--- factorial 0 <= 1;
--- factorial n <= n * factorial(n-1);
Note that, in Hope (unlike its own inspiration, Prolog, and, I think, unlike Rust), the order of the rules does not matter: the most specific pattern takes precedence. Hope was primitive with respect to types, and did not use Milner's type inference ideas. I don't think Burstall ever intended it to be a "real" programming language. When I left Edinburgh, Don Sannella took over from me, so I was not involved in writing the paper about Hope, but my contribution is acknowledged.
I implemented the pattern matching code, and the idea of pattern compilation occurred to me. I remember showing it to Rod. He freaked out at first, but after about a five minute harangue, the penny dropped, and I remember him saying "clever Michael", "clever Michael" in his charming way.
I've never understood the infatuation with this Haskell-style syntax where you need to repeat the name of the function for each and every match case. It's so needlessly verbose.
Most languages which allow function head clauses also have case or switch statements, so you can use both. I like the pattern when there is a corner case that needs quite a bit of logic but isn't very important to how the code usually works, and you don't want an indent level on it. For example:
function(...) ->
    case Arg of
        SpecialCase ->
            ...
            handle_odd_special_case;
        Default ->
            ...
            main_business_logic
    end
Now imagine another level of case statements, and all of a sudden the main path of the algorithm is squashed to the right with only 40 columns left.
Each clause is categorically isolated, unlike a case statement (the alternative in Erlang, with which I’m most familiar) where code can precede the case statement and thus introduce bindings that can muck up the logic.
Yes but why repeat the name of the function every time?
It might make more sense if such clauses were scattered through the source, but they are almost always grouped together, so the name of the function doesn't need to be repeated every time.
For example, instead of
factorial 0 => 1;
factorial n => n * factorial(n-1);
you could write the function name once and list only the cases, as in the Rust match shown earlier.
A lot of this has its origin in Prolog, where having multiple full definitions is much more natural, given the nature of the language: each predicate can have any number of definitions, and then you have backtracking to find the right one. A trivial example:
parent(alice, brad).
parent(alice, cecilia).
parent(cecilia, david).
parent(david, emily).
parent(brad, felix).
parent(felix, ginny).
ancestor(A, B) :- parent(A, B).
ancestor(A, B) :- parent(A, M), ancestor(M, B).
The first section defines a small family: alice is the parent of brad and cecilia, cecilia is the parent of david, etc. (identifiers beginning with lower-case letters are atoms; the ones beginning with upper-case are variables). Note that for these first definitions without a body (called "facts" in Prolog), it's perfectly natural to have multiple "definitions": you can have as many parent-child relations as you want.
The second predicate ("ancestor") is more interesting: it has two definitions: if A and B are people, then A is the ancestor of B if A is the parent of B, or if A is the parent of M, and M is the ancestor of B.
Notice that there is no pattern matching happening (Prolog of course has pattern matching; it more or less invented the concept, but I'm not using it here). When you use "ancestor", Prolog has no idea which definition you want; the patterns are identical. It has to try the first one, and if it fails, it backtracks and tries the second one.
Some examples of using this in the REPL. First, "who are alice's children":
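?- parent(alice, Child).
Child = brad ;
Child = cecilia.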
The semi-colon means "or", so this means "Child is equal to brad, or Child is equal to cecilia". Second, "who are ginny's ancestors":
?- ancestor(Ancestor, ginny).
Ancestor = felix ;
Ancestor = alice ;
Ancestor = brad.
The central idea here is that there is no "one definition" of a predicate or fact; you can have as many as you want, and Prolog will search through them. Similarly, any query can have multiple values for the variables that make it true. Repeating the predicate name is the most natural way of expressing this idea. You don't have to, you could rewrite these so that they had only one definition, but that just makes what's going on less clear.
Repeating the name allows some optimizations, but you'd be unlikely to use or benefit from them unless you were writing functions that return functions and you could benefit from inlining some patterns and not others.
I think I've seen the pattern in some web framework code, byte libraries, etc.
Although semantically equivalent, the optimizer can treat the two forms differently.
No way is it clearer. Moving a pattern match to the top level removes the obvious intent that two (or more) functions overload each other, and removes the debug/trace point for figuring out why one body was chosen over another.
Might be wrong here, but if this is to work, will it work across different compilation units? E.g. what if there is a "rogue" version that matches some literal? In a way it would serve as AOP (aspect-oriented programming), but it would be more like monkey patching. It may be useful, but spooky. (I have very basic Rust/Haskell knowledge, just familiar with the core concepts.)
> Every function clause must be together. It is an error to put another item between two function clauses.
So, it would not. Then again, the proposal wasn't accepted so it's kind of a moot point. But you're totally right that without this it would be very tricky. Given that match arms have an ordering, I'm not sure you could come up with a semantic where it would work across compilation units.
I suspect that Rod Burstall was more interested in conceptual clarity than compactness. I often heard him say "Efficiency is the enemy of clarity". Of course he was probably referring to code, but he might have applied the same thought to language syntax.
I write Elixir (which uses this syntax) and Rust (which doesn't), and I think both are fine. I'm hoping Rust doesn't add support for this, mostly because I don't think this sugar solves any problems Rust has.
Lacking this is probably my biggest gripe when trying to pattern match recursive structures, mostly in closures. This saddens me; you could say my hopes have been dashed.
factorial(0) -> 1;
factorial(N) -> N * factorial(N-1).
At first I was suspicious of function head clauses but have really come to like them. They make the code much more readable and easier to maintain. You can add another clause and keep your change diff small. It could be a general catch-all clause at the end, or a more specific, restricted case at the front and so on.
Tracing is also much nicer. Fire up dbg or recon and it's easy to pick off a particular clause without much effort.
> It's been over 10 years since I last worked with C++ every day, and I'm nowhere near being a competent C++ programmer anymore. Part of that is because C++ has evolved, which is a very good thing. Part of it is because C++ is huge. From a decade away, it seems hard to be a competent part-time C++ programmer: you need to be fully immersed, or you'll never fit the whole thing in your head.
This is really true. I have worked with C++ for 15-20 years, reading and studying everything I could.
I've reached a good competence but every time I stopped for a while I immediately fell behind.
This is so true for almost everyone I’ve talked with. It’s almost like each release is its own language too. I’ve been programming in Rust recently, and didn’t realize how much I love it till I tried doing something in C++ again.
I will say I still really enjoy the simplicity of C, but the guarantees of Rust are really hard to beat.
Me with managed compiled languages. Still need to go back to C++ every now and then, unfortunately not all interesting libraries have bindings available.
C++11 you could say that about, but I don't think any other revision is anywhere close to needing that much work to get up to speed. Ownership semantics mean that many times there are much more elegant ways of doing almost everything. In later versions, not using the latest features is not such a big deal, you don't miss out on the same leap forward.
I am a competent programmer in a general sense (knowing how to architect software anywhere from enterprise stuff down to firmware). I also use C++, and the fact that I do not really know lots of things about it does not bother me in the slightest. I just use the subset that fulfills my particular needs and am OK with this. If I need some feature and do not know how (or whether) it is implemented in C++, Google will give me a solution in a minute.
You can get away with that in some languages; C# for instance has very little to trip you up. C++ on the other hand has dark corners that will cause problems. For instance, not knowing about the issues with virtual destructors could come back to bite you.
It will, but I happen to know about virtual destructors. I do some reading every once in a while to glance over things, so some interesting pieces get stuck in my secondary RAM ;) But yes, I know C++ can be a dangerous beast, but so far it's been kind to me.
I think to get good at a given systems language these days the effort is the same. For instance, concurrency still requires the same underlying ideas to be understood whether C++ or Rust. After all the machine underneath is the same.
Rust has some nice-to-haves of course, but C++ has been able to grow up and evolve because of the rich feature-set the template meta-programming language has provided it.
One can simply stick to C++11 or 14 and work with those features alone to make strides in development. Hell, people have used C++98 for decades.
Personally, I think what C++ offers is the same as Rust. Both need a robust testing and code-coverage tools for correctness, but the end result requires simply good development practices and love for the work being done.
> Personally, I think what C++ offers is the same as Rust.
I think this dismisses incidental complexity that can certainly exist in general, and certainly does exist in C++.
Just look at how complex move semantics are in C++:
* you have & and && references,
* there are all sorts of categories (e.g. glvalues),
* in a template && means something different (but not always),
* std::move by itself doesn't actually do anything (yes I know it's a cast but this is still confusing at least upfront),
* even if you do pass the result of std::move (or otherwise know you have an rvalue reference) to a constructor, it's still possible that the object will actually be copied with no error or even warning.
In Rust, the difference is enormous:
* When you move from a value, it's guaranteed that the old object's destructor will not be called, so you don't need a move constructor that hackily sets up an empty value so its destructor won't do anything;
* Move is always a bitwise copy so it's easier for the caller to understand and free for the implementor to implement (this is enabled by the previous point)
* Move is the default instead of copy, which makes a million times more sense: you can implement copy as a regular method that happens to return (move!) its result (whereas you could never implement move as something that returns its result by copying), and if you actually want to pass a copy of a value to a function you can just call the copy method and move the result into it.
This final one is really key: it's not possible in C++ because of its history and backwards-compatibility requirements, and that's why it had to come up with the crazy system it now has. Rust has many nice features, but really the move vs copy stuff by itself is a huge saving in complexity, and the fact that Rust manages it at all proves that it is not intrinsic to the problem.
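A minimal sketch of that difference in Rust (my toy example, not from the comment above):

fn main() {
    let s = String::from("hi");
    let t = s; // move is the default and is just a bitwise copy; `s` is invalidated
    // println!("{}", s); // compile error: borrow of moved value `s`
    let u = t.clone(); // copying is an ordinary method that returns (moves!) its result
    println!("{} {}", t, u);
}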
(Disclaimer: I have worked with C++ on and off full time for more than 15 years, whereas I have only tinkered with Rust.)
You forgot about SFINAE, CTAD, decltype vs auto, perfectly forwarding references, guaranteed RVO (but only in certain ISO versions), initialization semantics, ...
These are advanced template techniques for people who want to dive deep into writing template libraries that are as generic as possible. Insinuating that they are a necessity to learn is ridiculous.
decltype is easy and only necessary when trying to make something more generic.
> guaranteed RVO
Also something that programmers don't even need to know exists unless they want to dig into it.
Except that it is all nice and good until one comes up with the naive idea of using Boost and somehow needs to dive into its source code to track down a bug.
Or landing by parachute on a codebase that needs some work.
Or just having to integrate a C++ library written by another team into their managed-language framework.
Ridiculous is how many in ISO C++ seem to ignore Bjarne's plea to not overcomplicate the language.
Now you are conflating the language itself with examples of it being used terribly.
I pointed out that your criticisms are a huge stretch in reality, and instead of confronting that, you decided to ignore what I said and throw out more things that aren't even part of C++.
Boost is mostly terrible and isn't hard to avoid. People don't need to use it and they don't need to debug it. This is far removed from talking about the language itself.
Boost is very relevant to the language's standard library, as it is the research ground for features that eventually end up in ISO C++ standard library.
Should we go through the ISO C++ features that were born in Boost?
Boost is an external library, and it is known for its overly complex templates and long compile times. It isn't part of C++. The features that were adopted after being tested in Boost do not have the same dependency problems. Your initial criticisms didn't hold, and now you are trying to cast external libraries as a failing of the language. Maybe you just want to criticize C++ with anything you can think of, no matter how ridiculous.
Ironically, there is probably an endless stream of criticisms that could be lobbed at it, but everything you've mentioned are things that just aren't problems in practice.
This is a logical fallacy called appeal to authority. There is no resume that magically makes the stuff you are saying true. It's bizarre that you would say something like that, as if you don't want to explain or walk back what you are saying and think that you can just state that you are an authority.
I watched the Back to Basics C++ series on move semantics and it's not really all that bad. I did struggle with it beforehand, though. I've never worked in C++, tbh. It does take the two hours, but you have it when you're done. https://youtu.be/St0MNEU5b0o
Really? You can tell off the top of your head what the difference between an xvalue and a glvalue is? Which function has higher precedence in the overload list out of foo(string&) and foo(string&&)? (Note that's not a const string&!) Are these so obvious to you that there's no chance you would make a mistake about them when you're in a hurry trying to solve an actual problem that has its own complexities, rather than playing games with C++'s unnecessary complexity? The one where foo(std::move(x)) doesn't necessarily move x is a particular killer (although at least it only causes a performance problem rather than a noticeable change in behaviour).
I have worked with C++ for a hell of a lot more than 2 hours since C++11 came out (the first version with move semantics) and worked with many others that have too, and I can tell you that "once you have it you're done" is just not true. I fully understand all the concepts but I can still make mistakes.
Yeah we totally believe you, you just keep on digging your heels in on this one cause that's how you get people to take you seriously and think you're smart.
I agree about the looseness in C++. However, the details of move semantics are not complex. For instance, reading the first item of Effective Modern C++, "understanding type deduction", shows it's pretty much the same as the type deduction templates have used for decades. If one understands the aspects you mention above, good coding practices can easily be put in place and enforced in code reviews.
I have read Effective C++ and it's excellent, and was a step on my journey to understanding move semantics! But reading a single item from it didn't instantaneously teach me everything I needed to know. I can say more specifically on this topic but that digresses from my original point, which was really meant to be more general and was steamrollered a bit in the other replies.
My original point was not that you can never understand move semantics, or that you can never write decent C++ programs. I've written a few complex C++ programs used in production that are pretty nifty if I don't say so myself! I was also not trying to say that it's unreasonable to have to read books about programming languages, that's pretty silly in my view.
What I was actually saying is that move semantics - and other features, that was just an example - are harder than they need to be in C++. Even once you've understood the feature, it still imposes a constant mental burden - more to the point, it uses more mental resources than it potentially could - and we all have finite mental resources. I had actually quoted the wrong part of your original comment; I had meant to quote this:
> I think to get good at a given systems language these days the effort is the same.
This is what I really disagree with, especially if you include not just C++ and Rust but other current and even hypothetical future systems languages. I'm not even saying that Rust is necessarily a better language than C++; I've not used it enough to know that. Just that, for that one language feature at least, Rust shows that it is possible to create a language with move semantics that are both easier conceptually and more ergonomic to use - it's a total win-win. For other language features, perhaps other languages show how it could be done more simply. There is definitely scope for systems languages that require less effort to learn and less effort to use after you've mastered them. (Rust may or may not be one.)
One should read a book—or perhaps even many books—on mathematics, or any subject, to call oneself proficient in the subject matter. In fact, if you do not read books on subjects, you are part of the code quality problem to begin with.
I have a maths degree, and I don't think this is at all true for programming.
Lots of people read books on object-oriented programming and write absolutely awful Java Enterprise code. Or they read books on algorithms and write fantastically efficient code that's completely unreadable.
And what's better than a book of best-practices? A compiler that enforces them.
I've never met a person with a maths degree who could write professional quality code, unless they also had some sort of other training.
Reading books doesn't automatically fix that, but it can help, just by exposing one to practices that one might not have come across in the kind of work that passes for programming among mathematicians, data scientists, etc.
> I've never met a person with a maths degree who could write professional quality code,
That's probably fair! There was a programming module in my degree, and it was awful. People were literally rote-memorising whole scripts for the exam (the CS department at my uni didn't do exams, but of course the maths dept did for their programming module).
I actually started programming before the maths degree, and have over 10 years of experience at this point. I'm not saying my code is perfect, but I imagine I'm incorporating many of what you would consider best practices.
I guess reading books can help. I've just not actually met any developers who write high quality code who learnt it from a book. They all learnt it from 1. Practice and experience 2. Colleagues in a work environment 3. Places like HN, which is excellent at exposing you to things you might not otherwise have come across.
I guess my criticism of C++ is of all the little details that can trip you up if you don't have a comprehensive knowledge of the language (undefined behaviour, use after free, etc.). Most of them aren't too bad on their own, but there are just so many. Most other languages simply don't have these issues.
> Personally, I think what C++ offers is the same as Rust.
I see this pretty often and don't understand where this idea originates. We have huge amounts of evidence to the contrary[1], generated by developers who likely have more experience with parallelism and memory than anyone in this comments section ("likely"). If those with the ultimate pedigree don't get it right, keeping in mind that they never have (apart from NASA, who disallow threading and allocation), how can anyone believe it is at all possible?
It's the software equivalent of flat-earthing or climate change denial. We have abundant hard evidence of the inability of humans to reason about memory allocation, and absolutely zero evidence to the contrary.
Safe Rust eliminates entire categories of bugs. It doesn't eliminate all bugs (including vulnerabilities), so you do need testing, but it does a huge amount more than nothing (C++).
> I think to get good at a given systems language these days the effort is the same
Someone who has been proficient at something for a long time has a very difficult time getting into a mindset of a beginner. This is why I think you are completely wrong about this point.
> Rust has some nice-to-haves of course, but C++ has been able to grow up and evolve because of the rich feature-set the template meta-programming language has provided it.
C++ has not and can not hide all the hidden gotchas and complexities that it has accrued over the years.
>concurrency still requires the same underlying ideas to be understood whether C++ or Rust
But Rust will catch data race mistakes at compile time, which is very nice. That is a common mistake, and a mistake that can keep happening without you being aware for a long time.
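A minimal sketch of the kind of mistake that gets rejected (my toy example, not from the comment):

use std::thread;

fn main() {
    let mut data = vec![1, 2, 3];
    let handle = thread::spawn(move || {
        data.push(4); // `data` is moved into the thread, which now owns it
    });
    // data.push(5); // compile error: borrow of moved value `data`
    handle.join().unwrap();
}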
It only applies to threaded code; as soon as IPC or other kinds of external resources enter the picture, there is very little that Rust's type system can do to help.
There is zero misconception in the GP. Catching data races in threaded code is extremely valuable, regardless of whether it applies to IPC or not. Why? Because non-IPC multithreaded code is a thing. Libraries like rayon even make this sort of parallelism quite easy. With the help of the Rust compiler, it's all safe too.
Just because this doesn't extend to IPC (of course not) doesn't mean there is some kind of misconception here.
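For what it's worth, a minimal sketch of that rayon style (my toy example, assuming the rayon crate as a dependency):

use rayon::prelude::*;

fn main() {
    // A sum of squares computed across a thread pool, with no locks and no unsafe code.
    let total: u64 = (0..1_000u64).into_par_iter().map(|x| x * x).sum();
    println!("{}", total);
}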
The misconception is that IPC and external resources are always left out of the message, so anyone without experience in distributed computing assumes guarantees that Rust does not actually provide.
I don't think you can have shared IPC memory in Rust. You have to use message passing. Of course, you are completely on your own there, and there will be weird logic bugs when you mess it up. However, there won't be any memory corruption.
Languages face enormous pressure to add features, just like any other product. There's a culture of picking languages by counting checkboxes - "if your language doesn't have X, it's Blub", where X can be anything from sum types to lazy evaluation - and little awareness that, like any other product, features have cost that's difficult to measure.
I think there's space for a language that is less, and is "done". When it's announced on this website, the comments will be "so it's Rust without X, or Kotlin without Y", but those comments will be in the same vein as "why not use rsync" and "less space than a nomad".
Java became that beast. They released Java 1. Then nothing changed until Java 5, which was a huge release in terms of changes. Then again nothing changed until Java 8, and since Java 8 every release brings some major changes. So you could basically learn Java 5 and use Java 6 and Java 7 for 10 years with virtually no changes, but now there are changes every year.
I bet that many people won't like it and will stay on Java 7 forever.
While there are some changes every year, if you batch by LTS, the change window has shifted from 10 years down to 3. And I think it's a reasonable choice.
Not that many radical changes since Java 8 that I can think of. The language stays quite coherent so far. I find myself using “var” and switch with strings pretty often with Java 11.
I would expect Rust to do well in these sorts of areas involving systems programming and low level development including projects like Firecracker.
The features that are very compelling: installation is painless, static linking is encouraged and widely used, and its APIs aren't tied to any specific platform, so it is a true cross-platform language done right. It would be better to compare it to C++ rather than C, since they are closer in both interface and complexity.
While the language is mature, the author fails to mention that most of the crates ecosystem is immature and some crates are unsafe. Especially in the domain the author is working in, there is a degree of risk in importing some crates, which can compromise the safety of the project (use at your own risk).
Cross-compilation is there in Rust, but it requires downloading the toolchain for the specific platform, whereas Go has it truly built in, so I'll give them that one. Lastly, it may be possible to use a cross-platform GUI library like gtk-rs, but it isn't widely adopted, unlike Qt, Electron and Flutter. The question around that is whether Rust is suitable for that use case. As many ideas and crates for Rust GUI development are coming, for now I'd say: soon.
The author is certainly bullish on Rust in general and in low-level development and so am I.
> The author is certainly bullish on Rust in general and in low-level development and so am I.
While I share your sentiment, I've recently talked to about 5 people who write software for low-level, security relevant things in airplanes. Imho the best application for Rust one could think of. None of them had even heard of Rust. But this is highly anecdotal of course.
There's a number of reasons. Anything in a regulated industry like that has to have everything approved by regulators. The whole compiler toolchain, all libraries, blah blah. All have to be certified versions before you can use them. You can't just pick up github latest compiler and expect to ship a safety critical device with it. Coders in regulated industries may have never even heard of github, much less rust. Different mindset.
Second, it's typically not x86 architectures. They'll have a specific CPU or SOC that they use, from a specific vendor, and other specific vendors that provide the (certified) compiler and possibly RTOS that is used to target that CPU. Those vendors have decades of investment in their C code. Some small change in the asm rust produces vs c (and I'd expect the difference to be much more than a small change) could just break everything in a finely tuned RTOS.
Third (and last one I can think of offhand), there are tons of things like static analyzers and such that can be used against C code and have been developed over decades to find many of the things Rust has built in. They're not as good as Rust at some things but better in others.
Oh, fourth, these companies already have huge codebases and libraries they've already written and are used to. Rewrites / refactors are less common in regulated industries because of all the documentation they require.
Okay, fifth, and perhaps the biggest one, at least in my experience, we didn't ever malloc / new in the app code anyway, because of the potential for out of memory errors. We created a couple big buffers up front and used those exclusively. So rust's ownership model wouldn't even help there iiuc. I imagine most safety critical devices are similar?
None of this is to say that Rust will never be useful in a regulated context, but it has a lot of hurdles to jump.
Rust’s ownership model isn’t useful just for heap-allocated things; it’s useful for all kinds of other safety guarantees, such as preventing unnecessary mutability.
To elaborate on this, nothing about ownership or borrowing has anything directly to do with heap or stack allocation. Allocation fits into the ownership and borrowing rules, not the other way around.
Nice. Is it frequently used in contexts outside of allocation? I assume allocation is the primary use, or at least it's the most talked about, but I have never done anything complex in rust.
One example of where it's used: to enforce correct usage of mutexes. Rust's `Mutex<T>` owns the data it protects. When you lock the Mutex, you get a reference to the data inside it, but it's impossible (a compile-time error) to store a copy of the reference beyond the point where you release the lock.
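A minimal sketch of that guarantee (my toy example):

use std::sync::Mutex;

fn main() {
    let counter = Mutex::new(0u32); // the Mutex owns the data it protects
    {
        let mut guard = counter.lock().unwrap(); // access goes through the guard
        *guard += 1;
    } // guard dropped here: the lock is released; a reference kept past this point would not compile
    println!("{}", *counter.lock().unwrap());
}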
Cool. So the fifth point is largely nullified, which is a big one -- that the features of Rust would at least be useful in typical safety-critical code.
The most common AFAIK would be when using iterators. An iterator borrows (or mutably borrows, or takes ownership of, depending on how it's called; iter() vs iter_mut() vs into_iter()) the original collection, so the original collection can't be modified while being iterated. This means no "ConcurrentModificationException" or similar (or worse, silent corruption) can happen.
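As a toy illustration (my example, not the commenter's):

fn main() {
    let mut v = vec![1, 2, 3];
    for x in v.iter() { // iter() borrows `v` for the duration of the loop
        println!("{}", x);
        // v.push(4); // compile error: cannot borrow `v` as mutable while it is borrowed
    }
    v.push(4); // fine: the borrow ended with the loop
}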
"Linear type systems are the internal language of closed symmetric monoidal categories, much in the same way that simply typed lambda calculus is the language of Cartesian closed categories. More precisely, one may construct functors between the category of linear type systems and the category of closed symmetric monoidal categories." (Wikipedia)
So now the only question is where closed symmetric monoidal categories are applicable :P
I expect people who write low-level, security-relevant things are in rather short supply, and so they have a pretty hefty workload. Unfortunately, that leaves little room to explore and research new technologies. Moreover, very, very few people look to improve their efficiency by learning entirely new tech stacks.
> Moreover, very, very few people look to improve their efficiency by learning entirely new tech stacks.
I firmly believe (against the common HN sentiment) that 99% of the developers do exactly 0 exploration of new technology and they couldn't care less (which is their good right). Also, not everybody sits in a fancy Bay Area startup or even has the skills or time to adapt. Most developers sit in a big company that has been doing its stuff for 10+ years and are not very susceptible to change. As an anecdotal example, I recently talked to a company that could not figure out why they could not attract talent. They claimed "we are so innovative, we even adopted PostgreSQL". Before that they used CSV files as their "database".
Isn't your post a bit contradictory? You write that 99% of developers don't care about new technologies, and then that some company that adopted PostgreSQL can't attract talent.
Or they are not happy with those 99% of candidates?
Okay, when I reread it, I see your point. I used their wording; in fact they can't attract anyone, but they used the word "talent" for anyone who can code.
> I firmly believe (against the common HN sentiment) that 99% of the developers do exactly 0 exploration of new technology and they couldn't care less (which is their good right).
Is it though? It certainly is and should be your right to use whatever tools or language you want for your hobby projects on your own time, but in a professional setting or for production products, I think there's an argument to be made that software developers should have significant constraints on the sorts of tools and languages they should be permitted to use (domain-specific of course).
You may be interested in the 'Sealed Rust' initiative by Ferrous Systems GmbH, which does aim to introduce some version of Rust to traditional 'safety-critical' domains - see https://github.com/ferrous-systems/sealed-rust/tree/master/p... for details. A lot of hard work is involved.
That is, I assume, a heavily regulated and standards-compliant industry. You can't just pick any language you want. Rust is obviously not certified for such use, and I would assume they can only pick something that is.
Their never having heard of Rust does not really speak to anything.
These industrial projects tend to have a massive amount of inertia (which is probably a good thing overall, you don't want people to migrate your plane autopilot to React.js because it's fashionable) so it doesn't surprise me that experienced devs who only deal with these codebases and don't lurk on HN aren't exposed to Rust.
If you're used to web technology standards, Rust being stable since 2015 makes it a mature tool. Almost outdated, really. On the other hand, if at your work C99 is still new and shiny, Rust is basically brand-new tech that might be worth looking into in a few years. People who work on long-term, critical projects want boring, reliable tools.
Sorry, but Rust is NOT ready for embedded work, yet. The Rust Embedded guys are making great strides and doing great work, but trying to shove Rust into a critical infrastructure piece right now would be counterproductive, anger a lot of people and likely set Rust back.
As much as I think Rust is a good thing, the breakpoint is when the big game development companies start using it. That will tell everybody that Rust is "good enough" for actual production work.
At my place of work we used to use Ada for most projects. In more recent years a move to C/C++ has been made.
Now the trend is towards model-based design, where toolsets such as Simulink or SCADE provide certified code generators or code checkers to remove most of the manual element of generating code from design.
Would love to be able to use Rust, but will need to wait for a COTS tool chain that provides certification evidence for DO-178C Level A. Sealed Rust looks to be going that way, but it's no small task.
The best language for systems like that is Ada, and it's commonly used in avionics and related systems.
I like to think of Rust as "memory-safe C++ 2.0", but Ada was designed for general robustness. Types like integers can have a valid range assigned to them, etc...
To be a truly safe language for hard realtime or embedded purposes, Rust would need a lot of features added. Things like guaranteed maximum stack allocation for function call chains, a proper allocator system, dependent types, and integration with proof systems.
Some of that is partly there or being worked on, but there's a lot of gaps.
I also regularly see Rust releases with unsafety that can be triggered by safe code. The invariants are being checked (mostly) by hand and mistakes slip through.
Worse still, Rust has a tiny standard library, leaving the bulk of what's needed for real software up to the "community". Unfortunately, crates have highly variable quality (coughactixcough) and it's just not a good idea to build something like a jumbo jet's avionics based on L33tHax0r's AwesomeCrate, if you know what I mean.
This all goes back to an opinion I've had about programming language design for some years now: It has one of the highest returns on investment of any human endeavour imaginable. It's roughly comparable to the development of a new vaccine.
For every feature or quality improvement in a language and its standard library, tens of thousands of programmers benefit, and then billions of end-users benefit indirectly from their improved productivity and higher product quality. Conversely, low investment in the core language translates to enormous inefficiency as many developers are forced to reinvent the wheel. The users then suffer from the square wheels or the round wheels that sometimes fall off.
Rust is a low-investment language. Its standard library doesn't hold a candle to something like the JDK or the .NET Framework. For crying out loud, it doesn't natively do: dates, times, guids, decimal/money, TLS, HTTPS, XML, i18n, databases, or even half of what was included in .NET v1.0 back in 2002! They're working on async, which in a basic form was stable in .NET v1.1 (2003) and fully fledged in v4.5 (2012), three years before Rust v1.0!
All of those features are provided by crates. Some of which are maintained by one random guy. From Russia. Or China. Or wherever. Some of which have unnecessary unsafety. Some of which have poor design, or don't interop well. Or worse, there's several competing crates and now you have to pick. Or you pick the one that looks right and it's just a wrapper around the C++ library. (There was some guy who was just spamming these out and squatting on the "crate name", forcing the real Rust crates to use less discoverable names.)
I like Rust, I do. But answer me this simple question: How do I process XML with Rust?
Which of these many crates is the one that's equivalent to System.Xml in C#? Which one can handle encodings other than UTF-8? Which one can read and write? Which one can handle entities? Which one can validate? Against XSD and DTDs? Do they do XPath? XSLT? Yes? No? Maybe? Partial? Who the fuck knows?
Back in the year 2000 I was using Apache Xerces from C++ and then 2 years later with C# v1.0 I was using System.Xml and I could do everything. It's 2020 and I don't think half that list is available from Rust, with or without third party crates.
Even if I were to pick one that works now, who's to know that it wasn't written by some university student only to be dropped on the floor and become unmaintained when he gets a Real Job?
> Even if I were to pick one that works now, who's to know that it wasn't written by some university student only to be dropped on the floor and become unmaintained when he gets a Real Job?
You do the same thing you would do in any language where you depend on third party code: you do your due diligence. If no XML library meets your needs, then write your own. If that's not an option, then the Rust ecosystem isn't ready for your use case. Pretty simple. But it's ready for a lot of other use cases.
This is my point, and I know it sounds absurd, but think of it in terms of a business deal, or a product, or any "transaction".
If you have two "individual entities" making a roughly equal transaction, such as a business merger, then I would agree: if one of the two parties is concerned with something, it's up to them to do their own due diligence.
But if you have a 1-to-many deal going on, this then becomes unbalanced. Imagine the "1" is Visa, or Mastercard, and the many are their millions of credit card users.
Imagine for a second Visa or MC saying: "It's up to the credit card holder to verify the cryptographic security of any POS devices, as well as the security of the merchant's banking IT system."
That's insane, right?
It's an extreme example, but it's the same concept as a language's standard library. Not every developer is in a position to evaluate the quality of a bazillion crates, their transitive dependencies, and all possible future interactions between them.
If some developer starts using "xml-rs" or whatever, and 3 years later the diesel crate uses "rust-xml", then he's in for a rewrite through no fault of his own. There is literally nothing he could have done to protect himself from this eventuality.
Now take this a step further. If I'm a consumer of "some developer's" app, which I run on my PC, how do I as an end-user have any confidence at all that it's using secure, maintained crates?
What if I use multiple rust apps? Do I have to check every one, every month, and all their crates, and all their dependencies?
With something like .NET or Java, even in a sysadmin role, I can be reasonably confident that a) it's a secure framework, b) all parts of it are, c) I can patch it myself if it isn't, and d) I only have to check one framework for many apps.
This is like the most fundamental theory of the efficiency of transactions. M:N is inefficient, M:1:N is vastly more efficient.[1] That's why we have Uber, and governments, and Ebay, and Amazon. That's why we have banks. Because instead of standing on a city street corner asking random people for loans, there are institutions that take responsibility of verification for all parties and provides a guarantee of future safety.
With languages like Java or C# I have to trust only Oracle or Microsoft. Literally just two evaluations, and I'm done. I pass on Oracle and accept the risk of Microsoft, and then get on with my programming task. Easy as pie.
Languages like Rust and JavaScript are passing the buck. They're the BitCoins of banking, and about as popular or effective. For every transaction it's "you have to do your own research", but that's literally impossible for most users, so they get their wallets drained. Ooops. They should have known better, right?
Or they should have just used a bank.
[1] Consider that for popular languages, the ratios of the language designers : programmers : users is roughly 100:100K:5B, or 1:1K:50K, give or take an order of magnitude depending on the language. The "1" on the left passing the buck to the "1K" in the middle is one thousand times less efficient than having that work being done on the left. So every time I hear something like "Rust shouldn't include X, use a crate", it makes me grind my teeth a little.
You've spent a lot of words ranting, but you're not telling me anything I don't know. I specifically did not engage with most of your rant in your initial comment because it seemed like you weren't interested in discussing trade offs. It seems like you still aren't. For example, I don't see anywhere in your comments where you acknowledge the benefits of a small standard library. Instead, all you do is lament the costs. Which is fair, and is why I answered your question matter-of-factly.
> If some developer starts using "xml-rs" or whatever, and 3 years later the diesel crate uses "rust-xml", then he's in for a rewrite through no fault of his own. There is literally nothing he could have done to protect himself from this eventuality.
This kind of thing happens regardless of who builds the libraries.
> With languages like Java or C# I have to trust only Oracle or Microsoft. Literally just two evaluations, and I'm done. I pass on Oracle and accept the risk of Microsoft, and then get on with my programming task. Easy as pie.
Last time I checked, both Java and C# have rich open source ecosystems. So yeah, if you don't use anything from that ecosystem other than what is developed by Oracle or Microsoft themselves, then sure, you'll do a lot less due diligence there. You don't really need to write hundreds of words to make that point. If that's what's important to you, then sure, there's a lot of stuff in the open source ecosystem (including probably Rust, depending on what you're doing) that you just won't be able to take advantage of. Which is fine, not everyone has the same risk profile.
Technology like Rust doesn't have to be All Things to All People at All Times. In order to hit your use case, it probably needs a company or an established/respected organization to start taking responsibility for maintaining a lot of code. I don't see that happening any time soon, but it's not like it's fundamentally impossible for it to happen. I think it's more likely that others will figure out how to adapt their risk profiles, personally. But that's just speculation.
> acknowledge the benefits of a small standard library.
I don't though. The perceived benefits of a small standard library exist only in circumstances that then cause the language ecosystem to fail to meet my requirements. These are also the requirements of many similar people in similar shoes, and also include the requirements of specialist usage such as avionics and the like.
The advantage of a small standard library is that it keeps the workload for the language designers manageable when they don't have the manpower to keep a large library properly maintained. The failure of Python's batteries-included approach was IMHO caused by the Python team not having corporate backing and proper funding, not by an inherent issue with a large std lib per se.
Rust is a low-investment language like Python: they're not investing sufficiently on the "left hand side" of the transaction. The advantage of a lean std lib is only to them, a small number of people. This is not an advantage to me, the consumer of the language.
Now, I get it, Rust was born out of Mozilla, and they're not Apple or Microsoft or Google. But that's the problem. They need to be picked up by a mega corp like, say, Amazon's AWS team to get the funding they need to do things properly.
> Last time I checked, both Java and C# have rich open source ecosystems.
They didn't for the first decade or two of their existence, and they both owe their large, cohesive standard libraries to that era.
I lament the open sourcing of the .NET framework, because the quality and cohesiveness have plummeted. There are glaring inconsistencies and bugs being thrown over the fence that would never have made it past the kind of formal code review that occurs only at corporations.
> This kind of thing happens regardless of who builds the libraries.
That's just not true. If a central, organised, well-funded body writes the standard library, then there is a consistency and cohesion that can never be achieved with an open source community. This is literally the Cathedral vs The Bazaar.
If I use System.Data.SqlClient then I can be confident that it will use System.Xml to return XML fields stored in a database. It'll never use BobsXmlLib or something random like that. That's the benefit to me of an architecture versus an evolution.
> Technology like Rust doesn't have to be All Things to All People at All Times.
Right now, it's not much to not many people. Its popularity is growing, sure, but it has enormous gaps that normally would be filled by the core language team. Some gaps might be filled by the community, but it's going to take a long, long time. These gaps stop many people using the language for production development.
I'm not making up that XML library scenario as some sort of exercise. This is a real problem that I have, in Rust, right now. I wanted to translate a pure C# XLSX parser I wrote for PowerShell into a Rust library and a tool vaguely like "xsv". Something fun to do while on the coronavirus break.
I got bogged down just trying to find an XML library that can handle OOXML in a standards compliant fashion, and is popular enough to be maintained going forward, and is consumable downstream.
In C# this was trivial, even using the API that dates back to the v1.1 days. It would have been trivial in Java and C++ as well. In Rust... ugh. I'll revisit this next year, see if the XML crates have grown some features, performance, and stabilised a bit.
When step #1 is to ponder over comparison tables like this where the only full-featured crate is written in C, not Rust, I lose interest, and this is for something that's just a hobby: https://github.com/RazrFalcon/roxmltree#alternatives
PS: I see comments in crates like this all the time: "vkxml has been made for use with serde-xml-rs and, because of some quirky attributes required by serde-xml-rs, most likely will not work with any other serde xml parser." From: https://crates.io/crates/vkxml
> These gaps stop many people using the language for production development.
Every language has gaps. Rust had a lot more gaps two years ago than it does now. It's called progress. I personally don't see your particular use case being addressed any time soon. Oodles of people don't need everything developed by a single entity. The success of the Javascript ecosystem should at least make that clear. I freely recognize that some do though.
> I'm not making up that XML library scenario as some sort of exercise.
I didn't say you were? Your comments are so bloated and you keep rehashing every single point.
> I got bogged down just trying to find an XML library that can handle OOXML in a standards compliant fashion, and is popular enough to be maintained going forward, and is consumable downstream
We already covered this. Why do you keep repeating it? I literally addressed this in my very first comment to you. And you're still complaining about it. Why?
I already told you: Rust doesn't have to be All Things to All People at All Times. Maybe instead of focusing on responding to that with some quip you think is clever, you could recognize it for what it is: an acknowledgment that Rust may not be ready for your use case. Why in the world is that so hard for you to accept? Like, this isn't rocket science. If you need a high quality XML parser, and Rust doesn't have one and you can't write one yourself, then the answer is pretty simple. The thousands of words you've had to write to express this point---and then repeat over and over---is just absolutely baffling.
If instead you needed, say, a high quality JSON parser, then we wouldn't be having this conversation at all. You probably would have used `serde_json` and you would have been as happy as a pig in shit.
> The advantage of a lean std lib is only to them, a small number of people.
Definitely not true. In the Python ecosystem, the mindshare split between things like the standard library HTTP client and more popular third party projects like `requests` impacts everyone.
> This is not an advantage to me, the consumer of the language.
Of course it is. As a member of the Rust library team, I can tell you with absolute confidence that we had three choices given our timeline: 1) build a small high quality standard library that serves as a substrate on which high quality third party crates could be built, 2) build a large but low quality standard library that would likely have large pieces of it deprecated in the future as better third party options became available or 3) don't ship at all. Neither (2) nor (3) would be good for users.
Now we could reverse course at this point and start bringing more into the standard library. I personally don't see that happening because our current strategy is working pretty well (bloviating HN commenters not withstanding). But sure, it could happen. You may be bloviating, but as I already acknowledged, having a big standard library definitely has its own benefits. You're right about that. Being able to trust one (or fewer) entities is a worthwhile proposition. But at this point of time, if that's what you need, then Rust isn't a good fit. I said this before too, and I don't know why you're still giving me shit about it. Why can't we be two reasonable people that accept reality and understand trade offs? "Oh yeah, that makes sense. I'll check back on Rust in a couple years then" would be a great reply. But no, instead I get speculation, rehashing previous points, more complaints and snide back-handed quips. I don't know why. Maybe you just needed an outlet to rant. Well, I'm not your personal punching bag, buddy. Back off.
"Static linking and cross-compiling are built-in."
I found out that the above is not completely true. According to the official Rust documentation, only Rust dependencies are statically linked by default. But compiled Rust programs depend on more than Rust dependencies, as they also use shared C libraries. The capability is built-in, but it's definitely not straightforward.
"Pure-Rust dependencies are statically linked by default so you can use created binaries and libraries without installing Rust everywhere. By contrast, native libraries (e.g. libc and libm) are usually dynamically linked..."[1].
I did a small experiment to make a from-scratch Docker container with no dependencies and a single Rust binary and found out that I could not do that without jumping through some hoops[2]. I had to have it include a minimal implementation of libc called "musl". See my write-up here[3]. If anyone has found another way around this I would love to hear about it and make a correction in my write-up.
To be honest, you're overcomplicating the whole thing by involving Docker in the build process. Assuming you have Rust installed (which is only one `apt-get` or `curl` + `sh` invocation away) and your project does not depend on any external C libraries (except libc), building a fully static Linux executable is as simple as these two commands:
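rustup target add x86_64-unknown-linux-musl
cargo build --release --target x86_64-unknown-linux-musl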
There are no hoops to jump through here. I can totally understand using Docker for something which can be a pain to setup a toolchain for or install all of the dependencies for, but for the Rust compiler I see very little reason for running it inside of a container except in a very few very niche cases.
Hey, thank you for the feedback! If I had known about that `rustup target add` command for musl on linux it would have saved me a ton of time.
I agree with what you said about Docker. I mostly involved Docker as an academic experiment. Since I usually write in Python, I've never really had the joy of a statically linked binary with no outside dependencies. Now that I've been playing with Rust, I finally get to experience that, and to prove it, I wanted to throw the binary into a container starved of all other dependencies and see it work by itself.
As for incorporating the other Docker container that imported musl for me and added it to my binary during the build, that was all I could really find to get it done easily (lazily). Now I have your suggestion, which is much easier.
Anyway, thank you very much! It's cool talking to a Rust core developer.
At one point docker was the only way to do this, I think. So if you start searching the internet for how to do static linking with musl you're very likely to find links to the docker'd approach.
Edit: I mistook the child comment of my comment as one from steveklabnik. Sorry, was not sarcastically calling you a Rust core developer :-). He really is one, see his profile.
" I've also found that programs seem more likely to work on their first run, but haven't made any effort to quantify that." Had the same experience with Modula-2, once I got all the compile errors cleared.
I think there is a real need for the current generation of business programmers to start focusing more on safety and rigor as they transition into IoT. For instance, I don't see most programmers I've worked with being capable of, or willing to use, enough rigor to make autonomous cars safe. These programmers have generally been slinging Java for years, without much effort to expose themselves to other languages unless there is a business requirement to do so.
So the concern I have with Rust (as a newbie) is that it isn't consumable enough to have its value adopted by most business programmers out there, and I've wondered whether Ada would serve that role better, because it seems generally easier to grok, percentage-wise.
Thanks for sharing. Interesting stuff, especially since I consult for Ford Motor. I've always been a fan of DbC since it was introduced by Eiffel, and I wrote a clunky but workable DbC module in FPC.
Whatever bad things you see in this world, they're done by Java developers. Go, Scala, Kotlin, JavaScript and Python developers would never do anything as bad as Java developers do. I see a lot of people stick to that mantra.
Java is used on projects with more scale (business process complexity) than the other languages you list, though.
There are reasons for the things that are done in the Java community. We might not like the result, but there are reasons, and most of them have to do with scale and process at scale.
There are other ways to build out scale, but they may require different kinds of organizations to build them, with a different mix of people. Big Java projects tend to be structured to make efficient use of developers with 0-5 years of experience, because at scale those developers are reasonably easy to hire. Those developers need to colour inside the lines (=> framework, we'll call you, follow the patterns, don't invent abstractions), create code which can be tested in isolation (=> inversion of control), and have their output glued into position (=> dependency injection) in a much larger solution. Most of the structure of the Java ecosystem follows from this.
Not suggesting that it's the language. It's the language that is popular for large-organization commodity programming, and for agency people who want well-paying jobs but don't necessarily have a lot of interest in software development itself.
I’ve been immersed in C++ at the office for over a year. It is a big language, but like all big languages, one ends up finding the parts of it they need and getting really good at them.
The more languages I learn, the more I dislike language zealots. They’re great to leverage when learning, but they’re too blinded by their own opinions to see that they’re making mountains out of molehills.
I think part of the tension is that some people are in it for a better hammer, while others are in it as an end unto itself. PL is just plain interesting as a hobby interest.
I find it interesting that both Go and Rust, in practice, step on one of their respective foundational concepts. (2010: https://blog.golang.org/codelab-share for Go, and I suppose I don't need to cite "safe system code" for Rust.)
For example, the Go community could have stuck to their guns about building server-side concurrent code only using channels and value objects, performance impact be damned, and we would not have high-performance server code written in Go (using locks and shared memory). And as the OP points out, "system level" code written in Rust will likely have (possibly opaque) unsafe code segments.
Intent here isn't to rag on either language. It's more musing out loud about the impact of conceptual consistency in course of development on product success: is it a (practical) mistake to insist on it? Based on Go and Rust teams' decisions to date, it seems it pays to be pragmatic. (Or are we simply throwing the towel in too early?)
I'm not entirely sure. I think this line of inquiry is interesting, but I'm also not entirely sure that you're declaring Rust's original principles correctly. That is, the safe/unsafe dichotomy was always there. It's impossible to build real systems without it, and so I think that's why people gloss over it a bit.
(I ~think Go also always had facilities for sharing memory as well. Don't remember.)
The issue appears to be:
- "It's impossible to build real systems" according to pure conceptual CS models,
- or, our conceptual models are not sufficiently strong and expressive, and fall short of bridging the gap between the symbolic and physical realms, and these gaps manifest in code.
- or that it's impossible to practically realize more complex conceptual models, and that even if possible, they would not be accessible to the general programming community.
- or we're building on poor legacy foundations which will dog us until we finally rethink approaches from ground up.
Recently we've added support to curate Rust programming remote jobs at Remote Leaf[1]. We've seen a surge in Rust remote jobs in recent months. Maybe because it's being accepted by a wide variety of developers, and hiring companies have started using it?
I see a lot of people comparing Rust favorably against C++, largely due to C++'s complexity. And yet at the same time, the Rust dev team are changing and expanding the language very quickly.
Is there any plan to feature-freeze Rust?
Otherwise it'll just become C++ 2.0 - a giant mass of features that are progressively designed to replace each other, until the language becomes too complex for any one person to master.
Notably almost none of Rust's new features are replacing old ones. Most of them are either opening up new capabilities (e.g. async-await), or making existing mechanisms more general/flexible (e.g. const generics, GATs).
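As one illustration, here's a minimal sketch of the kind of thing const generics enable; the function and values are made up for this example, not from the comment above:

fn dot<const N: usize>(a: [f64; N], b: [f64; N]) -> f64 {
    // The two arrays are guaranteed at compile time to have the same length N,
    // so a mismatched call is a type error rather than a runtime check.
    a.iter().zip(b.iter()).map(|(x, y)| x * y).sum()
}

fn main() {
    println!("{}", dot([1.0, 2.0], [3.0, 4.0])); // N = 2 is inferred
}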
It might accumulate cruft eventually, but I think it's at an inherent advantage over C++ due to its heavy functional influence, which is all based on math / PL theory. On the other hand C has always been a quick-and-dirty language, and C++ inherited a lot of that legacy.
The macro system also helps a lot, as features can be prototyped as macros, and only stabilised once the design has been iterated and used. More niche features can stay as macros in 3rd party libraries.
It also has years of crucial hindsight in language-theory. OOP and FP are both fairly mature at this point, and Rust started out of the gate by elegantly weaving them into a single coherent model, compared with C++ which started with neither (C) and had to monkey-patch both of them on, over the very decades when the ideas behind them were most actively evolving.
Good to know. Worth pointing out that the C++ standards committee also almost never breaks backwards-compatibility. Instead, they just introduce The New Way (and then The Old Way becomes just one more trap).
If you're in a position to affect this for Rust, it's worth thinking about. Give the documentation and libraries a few years to catch up, so that we don't have to chase a moving target. Maybe focus more on compiler and toolchain optimizations, and less on language features.
It's a little rough to play in an ecosystem where the language itself effectively has a nightly/unstable variant.
That's very much been the theme of 2019 and 2020. 2019 really only saw the introduction of async/await as a new language feature. Almost all other additions were smaller things, like "this feature expanded a bit" or "this edge case is no longer an edge case." There was only one instance of something straight-up replacing the old thing: mem::uninitialized being superseded by MaybeUninit.
My take is that it's an S-curve, which we're approaching the end of. Rust had a lot of ground to cover to include all the features people expect in a modern language. I think it very nearly has all the important ones at this point.
I've had a much shorter time frame with Rust and I have to say I really enjoy the learning experience; that is to say, learning the language is a pleasurable experience. Other languages with larger, more established communities are often unable to replicate this same feeling of inclusion and value as a member of the ecosystem of contributors.
Hi, I'm the post author. As you guessed, I'm South African. I worked at Amazon in Cape Town in the early days of EC2, and live in Seattle now. A lot of core development on EC2 (and other AWS products) still happens in Cape Town.
Everyone always seems so positive about Rust. I'd love to try it for some personal projects. Are there any downsides beyond the niggle the author mentioned?
Are compilation speeds an issue for anyone?
Is there much that can be done to improve this? (Both in Rust itself and at a developer level; presumably a faster dev machine helps?)
I have a 50k+ lines of code project which usually recompiles after a change in ~2 seconds in release mode.
There are many tricks which can be used to improve compile times to the point that even on medium-sized projects the compile time is not an issue. But you need to keep a certain discipline to adhere to these.
1) Use LLD instead of the system linker.
2) Don't add dependencies willy-nilly. Especially for trivial stuff which you don't need pedal-to-the-metal optimized. (e.g. do you really need to add that 4000-line-long SIMD-optimized base64 encoder/decoder, or can you live with a naive 10-line version you can write yourself in a few minutes?)
3) Feature-flag gate dependencies/features not necessary during development. (e.g. do you actually need HTTPS support during development, or can you test your webapp on HTTP and only compile-in the TLS stack for production deployment?)
4) Avoid heavy dependencies. (e.g. there are some popular web frameworks for Rust which have over 100+ dependencies by default; if you pick such a framework then your compile times are obviously going to be very heavily affected)
5) Use dynamic dispatch (&dyn T and Box<dyn T>) instead of static dispatch (impl T) when accepting generic arguments in cases where you don't need pedal-to-the-metal performance.
6) If you absolutely need to use static dispatch purely for ergonomics (and not because of performance), then create two functions: a dynamically dispatched private one which accepts a &dyn T and contains the actual functionality, and a public one which accepts impl T and is a one-line wrapper around the private one (see the sketch after this list).
7) Don't use #[inline] annotations if you don't absolutely need them.
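To make point 6 concrete, here is a minimal sketch; the function names and the Display trait are illustrative choices, not from the original comment:

use std::fmt::Display;

// Private, dynamically dispatched: this body is compiled exactly once,
// no matter how many concrete types callers pass in.
fn log_inner(msg: &dyn Display) {
    println!("[log] {}", msg);
}

// Public, statically dispatched purely for ergonomics: only this thin
// wrapper is monomorphised per caller type.
pub fn log(msg: impl Display) {
    log_inner(&msg)
}

Callers can write log("hello") or log(42) as usual, but the real body behind them exists in the binary only once.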
> 6) If you absolutely need to use static dispatch purely for ergonomics (and not because of performance), then create two functions: a dynamically dispatched private one which accepts a &dyn T and contains the actual functionality, and a public one which accepts impl T and is a one-line wrapper around the private one.
Would you really recommend this as common advice? It seems like not a good idea to me. If all you cared about was binary size and compilation speed, maybe, but not otherwise. Same with blanketly recommending use of &dyn T instead of <T>. There are other problems with dynamic dispatch in rust, namely it's kind of a pain if you need to `+ OtherTrait` with it.
Yes I would. In general people tend to overuse static dispatch even when it's not really necessary. Of course the issue is a little bit more nuanced than "always use X unless Y" and there are tradeoffs in play here that need to be balanced.
For example, if your function is really small - yeah, it's probably fine to just use static dispatch. If you're writing a generic data structure - you most likely also want it to be a statically dispatched Struct<T>, but with a healthy dose of #[cold] annotated non-generic functions for the cold paths. However, let's say that you have a function that accepts a filesystem path and loads a PNG from it - you do not want the PNG loading code to be 1) duplicated in every compilation unit (compilation time bloat), and 2) monomorphised three times just because you passed an `&str` once to it, a `String` another time, and a `PathBuf` yet another time (compilation time and executable size bloat), so you definitely do want dynamic dispatch here (at least under the hood with the two function trick).
I do think the default should indeed be &dyn T and you should only go for impl T when you can actually clearly substantiate why you should use it, instead of the other way around which is the default now in the Rust ecosystem. (Which is how you end up with 20+ second edit-compile cycles one of the sibling comments mentioned.)
> you do not want the PNG loading code to be 1) duplicated in every compilation unit (compilation time bloat), and 2) monomorphised three times just because you passed an `&str` once to it, a `String` another time, and a `PathBuf` yet another time (compilation time and executable size bloat), so you definitely do want dynamic dispatch here (at least under the hood with the two function trick).
It appears that there is some work to (partially?) address this problem in PR #69749 [0], which, from what I can tell, aims to avoid instantiating functions on unused type parameters.
It's not a perfect solution for your case, but should be fairly easy to take advantage of.
Could a proc macro be written that uses dynamic dispatch in debug and static dispatch in release? That would be optimal for dev compilation speed and binary speed, at the cost of binary size. It seems like a pretty good tradeoff for most cases.
IIRC there was a crate with a procedural macro that did something kinda similar to this. However, currently that isn't really optimal for quite a few reasons: 1) having a procedural macro by itself pulls in extra dependencies; 2) it introduces extra work for the compiler, so it does negatively impact compile times (procedural macros don't operate directly on the AST, so they have to parse the token stream into an AST, process it, serialize it back into a raw token stream, and the compiler has to parse it again); 3) Rust's current procedural macro machinery doesn't yet support emitting proper error messages, so if you get an error or a warning from a piece of code generated by a procedural macro, it will just point to the #[name_of_the_macro] annotation instead of the actual location where the issue originated.
Your practice deviates significantly from what other people are doing in the ecosystem, but if you like it for your personal projects, more power to you.
> Your practice deviates significantly from what other people are doing in the ecosystem
Yeah, I know, that's what I said in the last paragraph! (: And that's also why I enjoy 2-second recompile times (I could probably go even lower with some more refactoring) in release mode (debug mode is too slow for me at runtime) for a medium-sized project, while other people are struggling with half-a-minute compile times on projects less than half the size of mine.
I'm not saying that what I wrote is the ultimate panacea, but it's certainly one way of tackling the problem of compile times and executable bloat, and it'd certainly be nice for more crates in the ecosystem to take this issue more seriously. (You can only do so much by making the compiler faster.)
I’m writing a largish (~15k lines) hobby program in Rust. I’ll focus on the negative points; there are many positives.
Compilation time is terrible: I need between 20 and 40 seconds for an edit-compile cycle (MacBook Pro 2016, i7, 16GB RAM), which kills productivity for me, especially since as a hobbyist I tend to have small windows of time. I should probably dedicate time to investigating how I can speed it up (split it into more crates than naturally required, refactor how tests are written, etc.) but that is a turn-off by itself.
Productivity is still low. Even after writing so many lines I still can’t feel productive; I still face problems writing code that compiles, and am forced to work around issues with partial borrowing of structures, or similar. This happens any time I need to do something “new” from an architectural point of view (writing a new “component”), or if I attempt something “smart” (eg, trying to refactor to reduce duplication). This matches what this article said: Go is far more productive for me. If the architecture is fixed and I just “write the code in the right place”, then I can get decent speed (modulo the compile time issue).
> Compilation time is terrible: I need between 20 and 40 seconds for an edit-compile cycle (MacBook Pro 2016, i7, 16GB RAM)
A trick available on newer releases of rustc is "cargo check", which stops the compilation before code generation (the slow part). If all you need at the moment is to see whether it would compile (no syntax errors, no wrong types, no borrow check errors), it can help a lot.
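For reference, the two commands side by side:

cargo check   # parse, type-check and borrow-check only; stops before code generation
cargo build   # full compilation, including code generation and linking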
I also work on a ~15kloc rust project. I use cargo check all the time. Still extremely slow, not to mention that my whole system can be locked up by rustc, which really kills development.
I'll probably find a way to cgroup rustc just to deal with this issue.
The funny thing is I paid about the same for my rig that people pay for a MacBook Pro. It is not really unaffordable and you get a great development machine.
Of course nowadays the combination of Emacs and rust-analyzer gives you a very fast and nice workflow. That and one terminal running
It's possible, but my laptop runs fairly cool - it's elevated on a stand and is 18 inches, with a lot of ventilation. Of course it doesn't compare to a desktop system, but even still, this thing is hardly weak. It's a bit absurd to require 16 cores and 32GB of RAM to be productive.
If you divide the project up into sub-crates using a "workspace", you can improve compile times (see the sketch below), if you're not using that feature already. Not saying that compilation speed isn't an issue, but that's one thing that can help.
Sibling comment mentioned `cargo check`, which is pretty fast too. If you use an editor with a decent Rust plugin, like vscode (using RLS or rust-analyzer), it will run `cargo check` by default for you.
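On the workspace point, a minimal sketch of what the root Cargo.toml could look like; the member names here are made up:

[workspace]
members = ["app", "core", "utils"]

With that layout, cargo rebuilds only the member crates whose sources actually changed, and can compile independent members in parallel, which is where the compile-time win comes from.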
If you're trying to do something very specific that doesn't align with Rust's ownership/memory model it can be quite challenging.
For example, I wanted to make a skiplist library with features not present in other crates. Trying to go the totally safe route was a cognitive overload, and trying to express complex invariants in that form was painful. Completely doable though, if you're willing to learn it all.
I eventually settled on a much more unsafe, but very granular approach that's probably not going to get a good rating with cargo-crev.
I want to echo this point. I tried to write an implementation of an automata-based algorithm in Rust, and my main takeaway was that Rust really doesn't like linked lists. There is currently a popular book [1] that focuses entirely on how one would write such a thing.
That's not to say that it can't be done - I believe there's a crate implementing the same algorithm I was going for. But my main point is: if you see "systems programming" and think "oh, like C", then you are going to have a difficult time.
Totally agree. Anything that touches multiple owner territory gets ugly pretty fast. You end up with a variety of Cell objects, Rc::weak pointers, etc. Again it's possible but not trivial to figure out.
Now I wish I had read that resource before writing the library, as I would have saved a considerable amount of time (~30h / 2k lines of rust). Just using raw pointers (NonNull) everywhere meant comments like this [0] but was far more elegant and understandable overall.
It was spooky seeing rust segfault before everything was ironed out + valgrind + miri tested. But it's also really nice to have the borrow checker figure out sketchy lifetime extensions / iterator invalidation for you.
If you're working on a project like that, how bad would it be (or would it even be possible) to just stick unsafe around almost everything, if the alternative is to use C++? I mean, unsafe Rust is still no worse than C/C++, right?
First, if the whole point of using Rust is "memory safety", then writing everything unsafe would be missing the entire point. I'd rather do it well at some time in the future, when I properly understand the language.
Which leads me to my second point: at this point, I'd rather do it in C (which I did). I haven't done much C lately, but I'm at least aware of its strengths, weaknesses, and best practices. Better to use an old tool well than a new one badly.
More expert Rust programmers can probably give a better answer.
Yes, I can hardly use my travel netbook for Rust hobby coding as pulling new dependencies will keep me waiting for 20 minutes on average.
My hobby coding in C++ is never so slow, thanks to binary dependencies. Even source-based package managers like vcpkg allow staging binary libraries, so you just have to compile the world once, and can share with the team, or whoever you feel like.
Then C++ compilers not only support incremental compilation (which Rust also supports), they also have incremental linking, even at function level.
So unless you are doing some crazy metaprogramming, even C++ can be faster to compile than Rust.
What they could do is have cargo support binary libraries, and also add some form of JIT/AOT integration, like many other toolchains allow.
Interpreted/JIT for development and AOT for release mode, for example.
Disclaimer: I'm a big fan of Rust and I'm very hopeful about its future. This is why I spend a lot of time studying it.
As someone who is learning and comparing the language there are things that I think definitely suck about Rust:
* Syntax is ugly, unclean and overloaded. There are things like the ? operator which seem to have no purpose other than sugar in the sense of "you write a bit less code" (see the sketch after this list), which I find useless and obscure, and which even had restrictive implications for async/await. Also the whole language suffers from being a weird mix of C-like statement syntax and FP-like expression syntax, which takes time to get used to. Function signatures are both verbose and arcane. I wish Rust had admitted early on that it needed a different means of expression and had worked from first principles instead of superficially trying to mimic other mainstream languages.
* Rust being a very error-driven language is a good thing, but it makes it less suitable for prototyping the happy path. This is fine, but a constraint that one should keep in mind.
* powerful pattern matching, dataless programming with traits, FP-like APIs on iterators and option/result etc. make it very expressive at a higher level, but more basic things like string operations and array indexing (possibly other things as well) are far less ergonomic than in other modern languages. Typically in other/older languages you'd struggle with abstraction but the basic stuff is easy; in Rust you struggle with the basic stuff and the abstractions are ergonomic.
* all around (a bit) too much emphasis on structs over hashmaps as the default data structure for associative grouping.
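To make the ? remark in the first point concrete, here is roughly the sugar involved; the function is an illustrative example, not from the comment above, and the real desugaring is what the second version sketches:

use std::fs;
use std::io;

// With the sugar:
fn read_config(path: &str) -> Result<String, io::Error> {
    let text = fs::read_to_string(path)?;
    Ok(text)
}

// Roughly what the `?` expands to:
fn read_config_desugared(path: &str) -> Result<String, io::Error> {
    let text = match fs::read_to_string(path) {
        Ok(v) => v,
        // `?` also converts the error type via From::from
        Err(e) => return Err(From::from(e)),
    };
    Ok(text)
}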
Things I like about Rust but are not for everyone:
* Rust is very error driven, you tend to write a lot of code around errors, options, results.
* pattern matching/destructuring is the main way to write branching logic.
* a lot of emphasis on enums (tagged unions) to express configuration, state, variants and things like that.
> Also the whole language suffers from being a weird mix of C-like statement syntax and FP-like expression syntax
I quite like this though. All the other expression-based languages, like the MLs and Haskell, use indentation, making them very hard to parse. To me, Rust's syntax allows very granular control. I wish more things in the language could become expressions, like let bindings and for/while loops.
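For what it's worth, a minimal sketch of what is already expression-based in Rust today; `for`/`while` only evaluate to `()` for now:

fn main() {
    // `if` is an expression...
    let parity = if 7 % 2 == 0 { "even" } else { "odd" };
    // ...and `loop` can yield a value through `break`.
    let answer = loop { break 42 };
    println!("{} {}", parity, answer);
}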
I agree that it is useful. But from the perspective of a learner it is a stumbling block. Rust's syntax seems to be a compromise to cater to C-likeness. Not a deal breaker at all, just something that I think is not as good as it could have been and will confuse adopters, at least in the beginning.
On the flip-side it might attract more developers who are not familiar with anything that is not c-like.
As a side note, Lisps are expression based and have a very clean syntax where you don’t need to mentally construct an AST, it is literally there in front of you already.
It may take a long time to get comfortable with how to program in a way that satisfies the borrow checker. How much effort this takes will depend on your brain, and the type of projects you tend to work on. There are a lot of programs you can make where you just don't tend to run into borrow checker issues and everything is easy! Others it can immediately be a huge pain, and very frustrating.
It took me a few attempts at Rust to start building good intuition for it.
It is still our #1 complaint, and something we're still working on.
I recently came across a crate that still maintains compatibility with Rust 1.13. It compiles 2.75x faster on 1.41.1. We've made a lot of progress but there's still a lot more work to do.
It really depends on what you're doing. An incremental change on a small project can debug-build in seconds on a beefy machine, syntax-check is even faster. If you're working on rust itself and do a full 2-stage rebuild to benchmark something and upstream changed the llvm revision in the meantime and you do that on a laptop it can take more than an hour.
> Are there any downsides beyond the niggle the author mentioned?
For me, using a GC is just a lot easier than making the borrow checker happy. Given that in practice it's rare that the GC is the bottleneck (and when it is I write around it after profiling), the trade-off just isn't worth it for me.
The biggest downside for me is that from time to time (rarely) the compiler really lets you down.
Right now I'm debugging the equivalent of #64650[1] but the cause appears to be distant from where the error is reported, triggered by monomorphization. I've seen other compiler behaviors in the context of async code that the maintainers could not explain.
That said: The compiler is a marvel, otherwise. Never have I had as much help from a compiler. I guess that's because most other languages have much less complex problems to communicate.
Compilation speeds are perfectly fast for small projects, especially without compiler optimizations, especially when re-compiling (it only re-compiles what changed). I certainly wouldn't call that one of the barriers to trying it out.
The downside is that the performance payoff is really small compared to easy-to-write languages like Java for most projects, and you will write your code more slowly.
> The biggest long-term issue in my mind is unsafe
I'd say that, but for different reasons. It's depressing how often you have to use unsafe. I would not mind if it were hidden in libraries tested within an inch of their life 99.9% of the time, but it didn't work out that way for me. Recursively defined data structures like trees were just a nightmare to do without unsafe.
I thought that was perhaps because I just sucked at Rust, but then I listened to a talk from one of the Mozilla core devs working on Servo. Their code had more unsafes than mine. The amount of parallelism they were trying to get was extreme, of course, so it wasn't really much of a comparison. It made me feel much better all the same.
Seeing as the prime (and arguably only essential ^1) use cases for C are 1. O/S kernels and drivers, 2. embedded code for tiny microcontrollers, 3. language interpreters/runtimes and JITters, 4. bootstrap compilers, 5. portable command-line utilities, 6. low-level/mixed asm routines for performance, and 7. the large body of legacy apps of course, are there any examples for successful implementations for these categories in Rust? I'd imagine developing a nontrivial language runtime using Rust's memory model could be hard or impossible to do performantly. But that is or was the mainstay of C programming on Unix - implementing higher-level languages such as shell or awk.
^1: This is of course opinionated, but my reasoning for not including application code, and in particular long-lived evented or multithreaded app servers, is that I personally think these are beyond C's memory management capabilities (even if you get malloc/free 100% right, there's still the problem of memory fragmentation) and are basically erratic outcomes of 1990s multithreading code originating from coroutining in GUI programs.
> are there any examples for successful implementations for these categories in Rust?
Depends on how you define "successful", I'll toss some things in. This won't be comprehensive.
1. O/S kernels and drivers,
Amazon has multiple projects in this space: Firecracker, Bottlerocket. So does Google: ChromeOS, Fuchsia. Kernel maintainers have said that they would accept non-essential drivers in Rust, but they want some examples first.
2. embedded code for tiny microcontrollers,
Tons of stuff here. Multiple RTOSes like Tock and RTFM. Lots of little projects. One of the first Rust-only consultancies does a lot of embedded contracts.
3. language interpreters/runtimes and JITters,
Most of the languages that have been written in Rust are bigger than toys, but don't have large user bases. Gluon and Dyon are two examples. JITs exist but aren't really "production"; Cranelift has a JIT, for example.
4. bootstrap compilers,
rustc is written in Rust (it does use LLVM though)
5. portable command-line utilities,
Tons and tons and tons. ripgrep is the most famous. On the "portable" end, I'm a Windows user, and most things I try Just Work, though they may print out funny paths sometimes. (if you escape a native\windows\path you'll get a native\\windows\\path since \ ends up getting escaped. Only a display issue though, things still work just fine.)
6. low-level/mixed asm routines for performance,
This isn't "for performance" but I like https://crates.io/crates/x86 as an example here. Inline asm isn't stable yet, but we're getting pretty close, it seems.
Exactly. That's why I don't think close-to-metal languages are the way to go for most apps (as in "C is not a general-purpose language"), except the ones I listed, and probably some others. I admire the Rust community's energy and persistence in creating a new zero-overhead language, but, for me at least, they're ultimately falling victim to the idea that an app must be a single giant monolithic binary.
C and Unix grew strong because of small, composable programs, where each program's memory was manageable in a way that multithreaded, let alone async/evented server-like programs, aren't (if they need dynamic memory at all). But ever since, idk, Java app servers? people have had this idea that they're better at memory management than the kernel + MMU, when that isn't clearly demonstrated at all, considering that e.g. GC overhead isn't insignificant in long-running programs.
What I'd rather like to see is bringing down process-spawning overhead (response latency), or at least a rational discussion, backed by benchmarks, of why everybody (except the unikernel folks) blindly follows the big fat app server idea when, from the outset, a process-per-request model with memory/permission/resource isolation clearly looks much saner given service-oriented workloads and the troubles of recent years (eg side-channel attacks, DoSing, leaks).
This is only tangentially related to the article, but why does any discussion of Rust and/or Go asymptotically approach "RUST VERSUS GO RUST VERSUS GO", or at least have everyone and their grandma chip in on which one they prefer (like this very post)?
I'll admit that I only have experience with the former, but they seem like totally different beasts to me, intended for very different domains / target audiences. I figured that the holy war would be between Rust and the other trendy systems languages (Nim, Zig, Crystal) or between Go and other Web-oriented / glue languages (Java, Python, Perl, et cetera).
It's an accident of history and messaging. They were both publicly announced at the same time, and both described themselves as "systems programming languages."
As it turns out, the two languages have entirely different ideas of what "systems programming" means. And Go was announced very close to its 1.0 launch while Rust was announced very, very early in its development cycle. So in reality they actually align neither in timing, nor in target domain. Still, the initial contrast seems to have stuck.
The only real overlap is that both deign to be C successors. But in such different ways that the head-to-head comparisons make little sense.
There's also the perception that they're the only recently-created languages that have gained any real traction. You don't see nearly the ink spilt on Nim or Zig as you do on Rust or Go. So there's maybe some sibling rivalry there. (In this vein, I occasionally see comparisons of Rust vs Swift.)
> And Go was announced very close to its 1.0 launch while Rust was announced very, very early in its development cycle. So in reality they actually align neither in timing, nor in target domain.
Back when Rust was announced, it was much more similar to Go than it is now, even having a similar runtime for concurrency. Go did a better job than Rust of delivering this paradigm, largely because the Go implementors weren't relying on commodity technologies like LLVM and libuv.
As time went on, Rust was given more support by Mozilla, and it evolved into something more suited for browser implementation. It had other major influences at this point:
- The constraints of using LLVM. Many projects have failed trying to make LLVM do something that it's bad at doing. At the end of the day, LLVM is good at compiling C++ and things that look like C++, so if you stick to using LLVM and want an efficient language, your language will end up looking more like C++ over time.
- The swell of momentum around C++11. All of a sudden, programming idioms from the "Modern C++" movement were becoming mainstream.
- The halo of academic research into type-safe systems languages. Very few of the ideas in Rust are new, but some type system features from academia became more palatable for ordinary programmers when combined with the C++11 style.
> Back when Rust was announced, it was much more similar to Go than it is now, even having a similar runtime for concurrency. Go did a better job than Rust of delivering this paradigm, largely because the Go implementors weren't relying on commodity technologies like LLVM and libuv.
Or rather because the community decided to move away from this paradigm and towards a much more C-style view of systems programming.
I've heard this about LLVM a lot in the past; does LLVM recognize its C++-centricity as a problem in delivering on its mission of being a multilingual target platform, or has it tacitly changed its mission to be more-or-less implicitly C++-centric? And specifically, what sway does Rust hold on the LLVM project and its goals?
> does LLVM recognize its C++-centricity as a problem in delivering on its mission of being a multilingual target platform, or has it tacitly changed its mission to be more-or-less implicitly C++-centric?
LLVM is a mostly corporate-sponsored open source project. There isn't really a global LLVM roadmap, and developers usually work on things that help their employers.
There has rarely been explicit opposition to features that help other languages, but at the same time it is very difficult to make a large nontrivial change to LLVM at this point. Someone who is not already an active LLVM developer might be overwhelmed and give up on upstreaming their changes. IMO the most successful use of LLVM for a sufficiently non-C++ language has been Azul's work on last-tier JIT compilation using LLVM:
> both described themselves as "systems programming languages."
IIRC Rob Pike later commented that Go should have been advertised more like a "network services programming language" as it is better suited for writing network services than to write low-level components (os kernels, device drivers, embedded programs).
Learning Go and Rust recently, it feels like Go is aiming to be a better Java (GC, excellent libraries, feature-rich std library, fast enough but easier on devs) and Rust is aiming to be a safer C.
Zig has a good safety story -- in fact, I think it might turn out to be better than Rust's -- but it is very different. A language that uses sound methods to guarantee certain kinds of safety (as Rust does) is not necessarily the best way, and it certainly isn't the only way, to write safe code. The handwavy way to explain that is that a language's soundness comes at a cost, and getting rid of 100% of a certain class of bugs might overall be less safe than an approach that gets rid of, say, 98% of that class and can also help reduce bugs outside that class by similarly unsound means. I.e. system safety and memory safety are very much not the same thing (although the latter is a subset of the former), and since the goal is system safety, it is unclear exactly how much it is worth paying to soundly eliminate all memory safety bugs.
I agree with the premise that there are multiple measurements of ‘safety’, and I absolutely do not believe Rust’s chosen method is superior to all others. I’ve seen you get blowback in other threads for these statements before, so I thought I’d toss in some support for the overarching ideas before the flood of responses comes flying in.
It's weird that people don't compare Go to Kotlin. It has channels and coroutines. I think Kotlin does better with contexts, and I think flows are a really cool addition. There does seem to be a lot there, but honestly, it reminds me a lot of Go in its pick-up-and-play nature.
Having used Kotlin, I’m now actively switching to Go. I find it much more enjoyable to write. But this is also highly personal opinion. I feel seriously lost in Kotlin’s documentation. Go is a joy to me.
While there's a lot of truth in this comment, at the same time, the domains are wide enough that there are a lot of similar usages, and teams that decide to use one, the other, or both.
True, but you could say the same of nearly any pair of general-purpose languages. Even Python and C overlap to some non-trivial extent. Go and Rust certainly have overlap, but I'm not sure I'd agree that they have an unusually high level of overlap.
I think the closest language to Rust in niche is, by far, C++. I'm surprised to see it compared less often to that than to Go.
I think the reason that C++ and Rust are compared less often is because the advantages and disadvantages of each are clearer. Rust has better memory safety and concurrency, is less focused on backwards compatibility and so doesn't bring a lot of cruft along, and so on. Of course, its ecosystem is far less mature and there are situations where backwards compatibility is paramount.
The advantages and disadvantages of the two may be clearer, but Rust is clearly most similar to C++ from a complexity and usage domain view. I don’t see why this is not the only discussion on ‘Rust vs’. The same arguments against C++ being compared to C are almost directly applicable to arguments comparing Rust to C, and the same goes when comparing against higher level languages. I get that Rust has its borrowing/ownership semantics and a ML inspired type system, but Rust and C++ are more readily comparable than almost any other languages in general usage.
I'd say it's in part because Go originated at Google and Rust originated at Mozilla. Two browser vendors announcing new languages at the height of the browser wars caused the languages to attract the same tribalism.
I don't think the tribalism is even really in the language communities themselves as much as it is among onlookers, casual users, or otherwise fans of either language. (Note that there is certainly plenty of enthusiasm for each language in its respective community).
Rust in particular seems to have a large following of people who aspire to use it but haven't really learned it or used it in anger (this especially used to be the case). Basically they (like me) appreciate the ideas of strong static guarantees, strong performance characteristics, and the general aesthetic of Rust code compared to other high-performance languages or other "functional languages".
But those people (and to a lesser extent, many in the Rust community proper) tend to believe that Rust is optimal for those use cases that are conventionally considered to be Go's purview, which is to say they believe Rust is more productive than Go, Python, Ruby, Java, etc for the sorts of applications that those languages are regularly used for (SaaS application backends and other network services). They argue that you just need a little more experience with Rust and you will hit some productivity nirvana; however, each year, I (and many others with experience in this domain) get another year of experience with Rust (as a hobbyist--not full-time development) and still I'm far more productive in Python and Go than I am with Rust.
And I don't think the "just need a little more experience" argument holds water. It's based on the idea that the borrow checker and memory management in general become progressively more natural. Well, I have a decade of C and C++ experience and manual memory management is very natural (almost said "super natural") to me, and the trend is asymptotic--with more experience you do indeed get better with the borrow checker and memory management, but it never quite reaches the point where it's in the same ballpark as a garbage collector (where you truly don't need to think about memory management outside of hot spots).
Of course, GC languages also don't help you deal with race conditions, but 99% of the code we write in these domains isn't subject to race conditions (and if you are writing something that is pervasively subject to race conditions, it might well be worth considering Rust).
Note that there are a handful of very smart, well-respected (and deservedly so) people who also have lots of experience with Rust and who are familiar with the domain in question who disagree with me. I think theirs is the niche opinion, but I think it's worth hearing them out and forming your own opinion.
I agree; however, a point of clarification: If you model the resource as an object then the ownership system precludes concurrent use of that resource within the process (the ownership analog for RAII). However, if you have multiple processes attempting to access the same resource, then no, of course those static guarantees don't help, and (perhaps to your implicit point) this multihost/multiprocess model is the more common case in saas applications.
Rust and Swift though do have a very close relationship IMHO. They have similar design decisions for ergonomics, and often borrow (no pun intended) from each other.
You must also take into account that people don't always pick a language by its most perfect domain fit.
For me, and probably for many others I see in the rust sphere (and go and swift and F#, where I have also worked), these langs are not picked because "I need bare-metal, baby!" but because "I need something a bit faster / easier to deploy / less resource-intensive / less crazy-intensive than obj-c/python/ruby/js/php".
I have picked all 4 (rust, f#, go and swift) because of that: all (except go) have ADTs and pattern matching (and go only because I need some utilities that are easier to cross-compile FAST), and that is all.
ALSO
because these are langs FRIENDLY to solo (me!) / small teams of developers. Even if I must pick a "system language" I will NEVER pick C++: too much complexity. That is why I picked F# over C# (F# is more economical in lines of code, more expressive and overall simpler), Swift as fast as I could (vs obj-c), and Pascal over C/C++ in the old days.
Looked at this way, they intersect much more than most imagine.
May I ask how your experience with F# was and how you used it? I'm slowly learning it, but I'm not 100% sure whether to keep going; maybe I'll switch to other similar languages with bigger communities / bigger market demand.
For example, while I do find quite interesting what Rust developers have managed to do with linear types, to the point that they have influenced language designers from D, Swift, C++, Haskell and OCaml to find ways to adopt them into their language designs, if the Go community were more welcoming of modern language design, that would be enough for my use cases.
And I am a firm believer that, even if it takes beyond my lifetime, all modern OSes will eventually be like Mesa/Cedar, Oberon, Singularity, Midori, ....
By the way, Java is not a Web / glue language, there are plenty of embedded bare metal deployments using it, e.g. PTC, Aicas, IBM, Gemalto, Kyocera, Ricoh, all have offerings in that regard.
If you want examples of systems programming in Go, have a look at Android's GPU debugger, the gVisor sandbox, F-Secure's USB key, TinyGo for microcontrollers, or Go's bootstrapped compiler.
They have very different design philosophies and were announced more or less at the same time, that's why people always compare them and feel like they have to pick a winner, despite them targeting very different niches.