Dave Herman has been a high signal-to-noise contributor to Lambda the Ultimate over the years; if you want to get an idea as to where he is coming from, his contributions there [1], his slightly active blog [2], and his now pretty much inactive joint blog [3] are good places to start.
I've heard a lot of engineers complain about engineering managers over the years. I've also heard great managers self-deprecatingly say they "don't do any work". I've seen managers feeling bad they weren't productive so they resorted to writing code to feel that they were having impact.
This post is an example of the incredible impact an engineering manager can have without writing a line of code.
> A little appreciated fact: Rust was largely built by students, and many of them interned at Mozilla.
The article doesn't mention it but this is of course a very good long term strategy. Things students learn during their formative years at university will bear fruit once they enter the work force.
How many of those students are now at or about to enter important positions in the industry?
Do you think Java could have become this big if it wasn't taught at so many universities at the entry level?
Having so many students involved in Rust was huge. Definitely the most rewarding thing about working on Rust was seeing students get involved, grow, then turn that experience into a career, while seeding the industry with Rust talent.
It is a double-edged sword. No doubt, if the students are motivated and dedicated, they can build the next metaphorical rocket ship. But they also run the risk of steering the project off course into failure.
It sounds like Dave's leadership was key to getting the team to focus and to deliver high-quality results.
The converse is also true: students are relatively less fettered by today's "best practices", less constrained by non-academic pursuits, and thus more capable of dreaming up a better paradigm for the future.
I swear I recall being a fly on the wall early in my career and just being in awe, listening to Dave Herman, Luke Wagner, and Alon Zakai discuss asm.js, which would one day lead to WebAssembly. It was very early days even for asm.js, but from how they spoke of it and its potential, you could tell it would have a huge impact on the industry.
Reading "JavaScript: the first 20 years" [1], Dave Herman's name pops up here and there as well. His contribution in that case was highlighted in the story of the (failed) ES4. I wonder how these early experiences shaped his thoughts on Rust.
Have you ever helped a firm use Rust on air-gapped or Nexus-proxied developer networks? The Nexus module appears abandoned, and the documentation on running an offline crates.io mirror is very lacking.
I haven't heard much from him lately not sure what he's up to.
I'd love to see an interview with him on his experiences with dependency management. What did he learn? What things in earlier attempts did he change in later attempts (from bundler to cargo to yarn)? What does he think he got most right? What mistakes does he think he made? What mistakes does he think other people are still making? What features of a language or platform facilitate or challenge good dependency management? etc etc etc.
In general, I think there isn't much "learning from prior art" in our field that crosses language/platform boundaries. Katz personally driving the implementation of popular dependency management solutions in three different languages/platforms was one way to actually learn from cross-language experience! In dependency management in particular, I think there are still a lot of lessons people are learning on their own, not realizing another language/platform already learned some painful lessons... or maybe differences in languages/platforms mean some things aren't transferable.
The name Katz set off bells in my head while I was reading the article, but it wasn't until I saw your comment that I realized why. Cool bit of connection-making there!
The OP mentions in passing that Brendan Eich "was solidly on team Rust" prior to leaving Mozilla, but adds no further details to that intriguing statement. Wouldn't that make Eich the Most Unrecognized Contributor? I don't think his name would be on any commit repo, after all.
I understood that point as relating more to management than technical contribution - that the support of people like Eich was imperative to keep the project funded, and as key supporters left the project became more susceptible to cancellation.
Thanks for digging this up. Honestly I was doing more for SpiderMonkey than for Rust at code level, but at exec sponsor level, it was me and only me. HTH
Having sat next to / worked with a lot of the rust core people (and Eich) I think while his support was important, that doesn't make him more important/pivotal than people like Dave Herman
It’s cool to erase me, no problem. Inconvenient facts: I hired Dave, as well as (not as hiring manager but by influence) Graydon. Also I’m cofounder of Mozilla Research and C-level advocate for grad student internships. I’m not taking more credit than them, I want less! But don’t erase sr. management or give anyone else at that level undue credit. It was a team effort, but while there is no “I” in team, there is no budget without C level sponsor.
>I don't think his name would be on any commit repo, after all.
Not only is his name on the Rust commit repo, he's also in the list of top 10 committers the article features as an early Rust development example (albeit with just 6 commits or so).
The big takeaway for me is that inside our daily corporate world, we should strive for more open discussion about the work we do - I keep trying to do more discussion-list style work with my team, even though it often is easier to jump on a video call.
It has a payback a long way down the road but my gut knows it is worth it.
Having written Racket code myself, I was surprised when I saw that Rust had hygienic macros.
I've never learned how to use macros effectively, but once I've gotten more comfortable in Rust I'd like to give it a good shot. Given Dave's background with Racket and Macros, this feels like a worthwhile endeavor.
On a side note, there is a lot of cool language-oriented programming stuff that Racket does through its macro system, and I wonder if it is at all possible to do something like that in Rust.
I was super-surprised when I used Rust macros (specifically macro_rules) at how lispy they were.
There was an aha-moment when I realised that it was token based and not AST based like some other macro systems I've worked with in some other languages.
Case in point is how varargs are expressed in macro_rules: roughly, an expression pattern repeated n times with a comma as the separator. Super nice.
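For instance, a hypothetical variadic `sum!` macro: `$( $x:expr ),*` matches zero or more comma-separated expressions, and the matching `$( ... )*` on the right-hand side repeats its body once per captured expression.

```rust
// A variadic macro in macro_rules: the pattern `$( $x:expr ),*`
// captures any number of comma-separated expressions, and the
// `$( ... )*` in the body expands once per capture.
macro_rules! sum {
    ( $( $x:expr ),* ) => {{
        let mut total = 0;
        $( total += $x; )*
        total
    }};
}

fn main() {
    assert_eq!(sum!(1, 2, 3, 4), 10);
    assert_eq!(sum!(), 0); // zero repetitions also matches
    println!("ok");
}
```

Note that the macro operates on token trees, not an AST: each `$x:expr` is a fragment of tokens that the compiler parses as an expression at expansion time.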
I love the designed-by-committee aspect of Rust. I have great respect for good committees. I have served on a few (and some bad ones, shiver). One of the things that makes a committee work is a good facilitator.
As the saying goes: A camel is a horse designed by a committee: Goes twice as far on half as much carrying twice the load....
Downvoters on HN are not required to explain why they downvote. That would just lead to massive numbers of low-quality "explanations" and tons more flamewars.
Downvoters on HN are not required to explain why they downvote
Maybe by this point this belongs in the document that talks about voting meta, given that it's one of the most persistent and common voting meta comments.
If your claim is true, why do many HN users (me included) get [dupe] in the first place? In general, isn't it HN's intention to decrease the number of duplicates? If so, then HN should provide a way to prevent this, which is not happening today — for example, via the search input.
Sorry, but I'm seeing big contradictions in the so-called rules.
Just one example:
> If a story has not had significant attention in the last year or so, a small number of reposts is ok. Otherwise we bury reposts as duplicates.
> "in the last year or so"
Not in the same month, or even just some hours later.
> "a small number of reposts is ok"
Well, does HN want to prevent duplications or not?
> Dupes are not against the rules. You're likely getting downvoted by people aware of that.
With that premise in mind, let's continue making duplicate content, right?
So don't misunderstand me: my intention here is just to put this recurrent problem in evidence in order to improve HN. I have also faced the same problem in the past, so this is nothing against the rules or anything like that; rather, it is a suggested improvement for HN.
The reason I'm interested in this is that I feel part of the community and this is a concern for me (I suppose for others too). Otherwise I could take the quick and dirty way out and just say nothing and ignore the problem.
The issue with duplicates isn't reposts of articles as such, it's not wanting significant duplicate discussions. We allow reposts as a way of mitigating the randomness of /newest. But once a story has gotten significant attention, we bury reposts as duplicates. (But after a year or so, enough time has gone by that a repost is ok again.)
In the current case, the previous submission didn't get much attention, so we didn't count this one as a dupe. In the case of your post https://news.ycombinator.com/item?id=26812145, the previous submission of the story did get a big discussion, so we marked the repost as a dupe. There's no contradiction.
Does that make sense? If not, take a look at some of the previous explanations. If you still have questions after that, let us know.
> > If a story has not had significant attention in the last year or so, a small number of reposts is ok. Otherwise we bury reposts as duplicates.
> > "in the last year or so"
> Not in the same month or even after some hours later.
A year is the expiration date on reposts of items that _have_ had significant discussion. Items which did not have significant discussion, and also did not have a large quantity of reposts already are exempt. In some cases the mods have reached out to submitters and suggested they repost items which the mods felt were interesting but did not catch on.
Unfortunately this does not clarify my concerns about duplications in this particular case. I understand about the year of expiration and that moderators can reach out to submitters, but it is not clear to me what "significant discussion" means, nor how "moderators could suggest submitters repost items which the mods felt were interesting but did not catch on" works in practice.
Essentially I still don't understand why some submissions get labeled as duplicates and others don't.
Anyway, unfortunately https://news.ycombinator.com/newsfaq.html is not clear at all to me, and I think HN needs to make it more obvious to help members understand it without doubts or contradictions, which was the case for me.
So I will contact HN soon in order to request an account deletion.
Thanks for trying to explain it to me.
PS. Sorry for the deviation of these comments from the main topic here.
But I think my concern is already expressed on previous comments.
Rust may be a little ad hoc in places (e.g. the misappropriated Haskell/ML function syntax, the enum/struct asymmetry), but overall it is a fantastic effort. It is not an easy task to combine an advanced static type system with mainstream ergonomics, but they seem to have pulled it off. The fact that it is also not owned and controlled by a single big tech entity is icing on the cake. I really hope it achieves even greater success.
Personally, I find Rust syntax to be well-designed. At least, compared to any practical programming language I know.
Quite a few times I was surprised that Rust breaks with some old patterns that have been copied over and over for the last 50 years or so. For example: "match" instead of "switch", or the same if/else regardless of whether it is a statement or a value. These are small touches, but they show attention to detail.
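A small hypothetical example of both points — `if`/`else` yielding a value directly, and `match` in place of `switch`:

```rust
// `if`/`else` is an expression in Rust: the same syntax works as a
// statement or as a value, so there is no separate ternary operator.
fn describe(n: i32) -> &'static str {
    let sign = if n < 0 { "negative" } else { "non-negative" };

    // `match` replaces `switch`, and must be exhaustive.
    match n {
        0 => "zero",
        _ => sign,
    }
}

fn main() {
    assert_eq!(describe(0), "zero");
    assert_eq!(describe(-3), "negative");
    assert_eq!(describe(7), "non-negative");
    println!("ok");
}
```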
> Quite a few times I was surprised that Rust breaks with some old patterns that have been copied over and over for the last 50 years or so. For example: "match" instead of "switch", or the same if/else regardless of whether it is a statement or a value. These are small touches, but they show attention to detail.
These are good ideas for sure, but they are bread and butter to anyone who is familiar with functional programming. The Lisp and ML families of languages have had them for many decades.
The fact that more languages _aren't_ like this is what's surprising to me, given how effective they are. I love that Rust is bringing them to the masses, but what on earth took the industry so long to accept them?
I guess the answer is that algebraic data types (inductively defined data) tend to be pitted against object-oriented programming (coinductively defined data), and object-oriented programming has dominated the industry for the past 3 decades. Some languages like Kotlin have tried to combine them, but personally I'd rather just embrace the former and relegate the latter to a seldomly-used design pattern, not a programming paradigm hardcoded into the language.
Rust syntax is weird. Weirdly good and sometimes bad.
I'm currently designing my own toy language and writing the compiler (to LLVM IR) in Rust.
Representing the AST with Rust's sum types is so simple. Visiting that AST through pattern matching is great. But the "enum" keyword still bugs me.
The way you define product types (tuples, records, empty types) and then their implementation is just awesome. But the "struct" keyword still bugs me too.
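A toy illustration of the point (a hypothetical expression AST, not the parent's actual compiler):

```rust
// A tiny expression AST as a sum type, visited by pattern matching.
enum Expr {
    Num(f64),
    Add(Box<Expr>, Box<Expr>),
    Mul(Box<Expr>, Box<Expr>),
}

fn eval(e: &Expr) -> f64 {
    // The match is checked for exhaustiveness: adding a new variant
    // makes every visitor that forgets it a compile error.
    match e {
        Expr::Num(n) => *n,
        Expr::Add(a, b) => eval(a) + eval(b),
        Expr::Mul(a, b) => eval(a) * eval(b),
    }
}

fn main() {
    // (1 + 2) * 3
    let e = Expr::Mul(
        Box::new(Expr::Add(Box::new(Expr::Num(1.0)), Box::new(Expr::Num(2.0)))),
        Box::new(Expr::Num(3.0)),
    );
    assert_eq!(eval(&e), 9.0);
    println!("ok");
}
```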
It feels "high level" with some quirks.
Then you have references, Box, Rc, Arc, Cell, lifetimes etc... It feels (rightfully) "low level".
Then you have traits, the relative difficulty (mostly for Rust newbies like me) of composing Result types with distinct error types, etc...
It feels "somewhat high level but still low level".
Sometimes you can think only about your algorithm, some other times you have to know how the compiler works. It seems logical for such a language, but still bugs me.
The one thing I hate though, is the defensive programming pattern. I just validated that JSON structure with a JSON schema, so I KNOW that the data is valid. Why do I need to `.unwrap().as_string().unwrap()` everywhere I use this immutable data?
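One common way out is to convert the dynamic value into a typed struct once, at the validation boundary, so the rest of the code never unwraps again. A std-only sketch, with a hand-rolled stand-in for a serde_json-style dynamic value and invented field names:

```rust
use std::collections::HashMap;

// Minimal stand-in for a dynamic JSON value (serde_json::Value-like).
enum Json {
    Str(String),
    Obj(HashMap<String, Json>),
}

// The typed form: built once after validation, then it's plain field
// access everywhere else -- no more Option chains.
struct Config {
    name: String,
}

fn parse_config(v: &Json) -> Option<Config> {
    // All the unwrapping happens here, exactly once.
    let obj = match v {
        Json::Obj(m) => m,
        _ => return None,
    };
    let name = match obj.get("name")? {
        Json::Str(s) => s.clone(),
        _ => return None,
    };
    Some(Config { name })
}

fn main() {
    let mut m = HashMap::new();
    m.insert("name".to_string(), Json::Str("demo".to_string()));
    let cfg = parse_config(&Json::Obj(m)).expect("schema-valid input");
    assert_eq!(cfg.name, "demo"); // plain field access from here on
    println!("ok");
}
```

With serde this is what `#[derive(Deserialize)]` automates: deserializing straight into the typed struct makes the schema check and the conversion the same step.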
[0] An enum of numbers would be an issue, for instance; though I guess you could always use a `repr(C)` enum, it might look a bit odd and naming would be difficult.
In general, JSON Schemas are (wrongly, in my view...) validation-oriented rather than type-oriented (for notions of types that would be familiar to Haskell, Rust, or Common Lisp programmers).
I think that schema in particular could be represented, though, as:
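A hypothetical sketch of such a typed representation — the concrete schema isn't shown here, so the fields are invented; with serde one would also add `#[derive(Deserialize)]` and let the parser enforce the shape:

```rust
// Hypothetical: a schema like {"id": string, "count": integer}
// rendered as a plain Rust type. With serde, deserializing into
// this struct would reject any document that doesn't fit.
struct Record {
    id: String,
    count: u64,
}

fn main() {
    let r = Record { id: "a1".to_string(), count: 3 };
    assert_eq!(r.id, "a1");
    assert_eq!(r.count, 3);
    println!("ok");
}
```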
Yep, I'm still new to serde and the Deserialize workflow :)
I come from highly dynamic languages, and even when I was doing C/C++ 10 years ago, I would do more at runtime than what could be considered "best practice".
As the input gets more dynamic, so does the type system representation. If you want to handle user-supplied JSON schemas, in my understanding of JSON Schema, you'd have to use the serde_json::Value way: https://docs.serde.rs/serde_json/#operating-on-untyped-json-...
> What about user-supplied JSON schemas? You can't add types at runtime.
Right, well, since they're validators anyway, might as well represent them as a defunctionalized validation function or something.
Agreed that this is more-or-less past the point where the type system helps model the values you're validating, though a strong type system helps a lot implementing the validators!
> It's still a string, in Rust you would need an enum and a deserializer from the string to the enum.
Yep, though if you really wanted it to be a string at runtime, you could use smart constructors to make it so. The downsides would be, unless you normalized the string (at which point, just use an enum TBH), you're doing O(n) comparison, and you're keeping memory alive, whether by owning it, leaking it, reference counting, [...].
Thankfully due to Rust's #[derive] feature, the programmer wouldn't need to write the serializer/deserializer though; crates like strum can generate it for you, such that you can simply write:
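The snippet that followed isn't preserved; with strum the gist would be something like `#[derive(strum::EnumString, strum::AsRefStr)]` on the enum. As a self-contained, std-only sketch, here is roughly the hand-written code such derives spare you (the `Color` enum is hypothetical):

```rust
use std::str::FromStr;

// Roughly what a strum-style derive generates, written by hand.
#[derive(Debug, PartialEq, Clone, Copy)]
enum Color {
    Red,
    Green,
}

// Deserialization: string -> enum.
impl FromStr for Color {
    type Err = ();
    fn from_str(s: &str) -> Result<Self, ()> {
        match s {
            "Red" => Ok(Color::Red),
            "Green" => Ok(Color::Green),
            _ => Err(()),
        }
    }
}

// Serialization: enum -> string, as a lookup into compiled-in
// string constants, so no heap allocation is needed.
impl AsRef<str> for Color {
    fn as_ref(&self) -> &str {
        match self {
            Color::Red => "Red",
            Color::Green => "Green",
        }
    }
}

fn main() {
    assert_eq!("Red".parse::<Color>(), Ok(Color::Red));
    assert_eq!(Color::Green.as_ref(), "Green");
    assert!("Blue".parse::<Color>().is_err());
    println!("ok");
}
```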
(strum also has a derive for the standard library Display trait, which provides a .to_string() method, but this has the disadvantage of heap-allocating; the AsRefStr derive (which provides .as_ref()) compiles the strings in, so no allocation is needed, and .as_ref() is a simple table lookup.)
> Yep, though if you really wanted it to be a string at runtime, you could use smart constructors to make it so. The downsides would be, unless you normalized the string (at which point, just use an enum TBH), you're doing O(n) comparison, and you're keeping memory alive, whether by owning it, leaking it, reference counting, [...].
Nit: it's more constraining but serde can deserialize to an &str, though that assumes the value has no escapes.
Unfortunately, Rust currently lacks structural records/structs and enums. I think they were removed early on in the design, so you'd have to name all the types. I hope they add them back one day.
I think that there's still time for Rust to reverse the .expect(...) naming mistake. It should be deprecated, either introducing something with an appropriate name like .unwrap_or_panic(...) or just leave it to be replaced by .unwrap_or_else(|| panic!(...)) which IMO one ought to use today instead of .expect(...).
Everything else is named so well and designed so well that this really sticks out.
That reads as "I expect this to be already validated". Is that what you intended it to mean? Similarly,
foo.expect("There was a problem and I had to crash")
That _reads_ as "I expect there to be a problem and have to crash". But it _means_ something totally different. It means:
There should not be a problem, but if there were a problem
I would crash and print out this message.
And the way to write that in readable code is
foo.unwrap_or_else(|| panic!("There was a problem and I had to crash"))
I'm sorry if I'm misunderstanding -- if I am can you explain to me how? Because I feel like my view is diametrically opposed to yours but it's really not that subjective a matter; only one of us can be right.
You just phrase things the opposite way. I don't think it's clearly better to say the expected happy path or the unexpected sad path in a crash message.
> That reads as "I expect this to be already validated". Is that what you intended it to mean?
Yes
> foo.expect("There was a problem and I had to crash")
> That _reads_ as "I expect there to be a problem and have to crash". But it _means_ something totally different. It means:
Absolutely. That's why I'd phrase it foo.expect("foo invariant upheld")
OK, that would be nice but...what you're saying just doesn't work! The user gets an error message saying
thread 'main' panicked at 'foo invariant upheld'
And everyone, except those who are inured to the illogic via experience, will read that as
thread 'main' panicked because 'foo invariant upheld'
especially new users, less experienced programmers, and non-Rust programmers seeing the panic message.
Now if the Rust language made that error message say
thread 'main' panicked at failed expectation for 'foo invariant upheld'
that might be cool. But it does not do that. It feels like you're bending over backwards trying to excuse what is clearly a wart in a beautiful language. I'm happy to continue the debate though, I am fascinated to know how I can be wrong here :)
So that's my main point, that your strategy doesn't work due to the error message. But a secondary point is that Rust documentation encourages people to use it opposite to what you suggest: I mean, just look at the first example in the Rust book. It's completely nonsensical:
let f = File::open("hello.txt").expect("Failed to open hello.txt");
I know there was some controversy about this early on and somehow it didn't get reverted; it would be more honest if the docs would at least admit that it's "a little strange".
Outside of tests and toy examples you're generally expected to only panic when the developer has failed to uphold an invariant, not when the user has. As a result there's generally nothing a non-rust-developer can do with the message besides post it on the bug tracker.
Right, so that's relevant to a minor incidental comment I made. But what do you think of the main points, which are
(a) If you write the message the way you suggest, the printed error message is misleading, and
(b) The Rust community does not encourage using .expect() in the way you suggest; in particular the canonical Rust documentation source instructs us to use it as
First let me say: I like Rust. I'm a fan. But... it did make some early decisions that are going to be hard to shake off, most notably around build times. This [1] is well worth a read.
They are awesome, but they make compile times arbitrarily bad. The D compiler is roughly as fast as the Go one; however, D has macros, and that makes it very slow to compile sometimes.
The alternative to macros is code generators. That works fine for bigger stuff like a parser generator, but not for smaller stuff like a regex.
It's better to evaluate at compile time, if possible, rather than runtime. The biggest issue with macros is code bloat, but every serious general purpose language should have them.
I think a problem is that programmers have little intuition about the evaluation time of macros.
It's possible that Zig's approach helps here -- since the metalanguage is just the language, you can take some of your intuition about performance along to compile time. And the macro language is not weirdly restricted, so you can write something you're more used to, with similar idioms.
In the limit this is clear: I'm using several Python code generators for C++ in https://www.oilshell.org, and it's easy to reason about the performance of Python. Doing the same in C++ metaprogramming would almost certainly be a lot slower. It would also give me less indication of code bloat; right now I simply count the lines of output to check whether my code generator is "reasonable".
e.g. I generate ~90K lines of code now, which is reasonable, but if I had 1M or 10M lines of code, that wouldn't be. But there's not that much of a sanity check on macros (except binary size).
Macros and compile-time function evaluation are different and fill different roles. Macros define new syntax, while compile-time function evaluation evaluates expressions written using the existing syntax. The corresponding Rust feature for the latter is not macros, but rather "const fn".
I think they both have potential performance issues in the compiler though, and both can be compared with textual code generation.
I think of one as metaprogramming with the parser and the other as metaprogramming with an interpreter. (And the C preprocessor is metaprogramming with only a lexer. Code generation is the kind of metaprogramming that every language supports :) )
Although maybe you're saying Zig doesn't have the functionality of Rust macros, and that could be true; I haven't played with it enough.
However I do think there is overlap as Zig implements printf with compile-time evaluation and Rust does it with macros:
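On the Rust side, the macro approach means the format string is parsed at compile time, so arity and formatting mistakes are compile errors rather than runtime failures:

```rust
// format!/println! are macros: the format string is parsed during
// macro expansion, so argument count and formatting traits are
// checked before the program ever runs.
fn main() {
    let s = format!("{} + {} = {}", 2, 3, 2 + 3);
    assert_eq!(s, "2 + 3 = 5");

    // format!("{} {}", 1);
    // ^ would not compile: the format string expects two arguments.
    println!("ok");
}
```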
In zig you get a compile error if you exceed 1000 backwards branches (e.g. a loop or a function call). If you want to raise the quota you bump it with e.g. `@setEvalBranchQuota(2000);`. This is how we solve the halting problem :)
Anyway if you want to know why your compile time is slow in a given zig project, you can probably get pretty far by grepping for calls to that builtin.
The question is whether you do the metaprogramming/code generation within the language or as an external tool. Code bloat is an issue in both cases.
Macros make it easy to write code which generates code which generates code which... This enables some wonderful use cases, often around generating type declarations. If done with an isomorphic language, you can also reuse the same code for compile time and runtime implementations with just thin wrappers.
External code generators however will build faster because the build system takes care of reusing the intermediate code.
Yeah, the build times. What does a normal Rust developer's development environment look like? Do you have to rebuild after each change if you want to try the change out, after you've written tests and so on? What is the REPL experience like, if there is one?
My only experience with Rust so far has been trying to learn it by writing applications in it and also using 3rd-party CLIs, but quickly losing interest because the "change <> try out change" cycle has been too slow and cumbersome, and installing/compiling dependencies takes fucking forever, even on an i9-9900K.
It caches things between builds (dependencies in particular only have to be built once), and if you use dev builds (the default) it doesn't take as long as production. For ergonomics you can also install cargo-watch (https://crates.io/crates/cargo-watch), which helps a bit.
An important thing though, if you aren't doing this already, is to not wait for a full build to know if your types check out. You can use cargo-check if you prefer (https://doc.rust-lang.org/cargo/commands/cargo-check.html), but really I recommend using an editor with immediate feedback if at all possible. rust-analyzer (an LSP) is one of the best, and should be available even if you're on Vim or something.
Using Rust without snappy editor hints is fairly miserable because of how interactive the error feedback loop tends to be. If you don't rely on a full build for errors - just for actual testing - I find the build times to be perfectly livable (at least in the smallish projects I've done).
When I use rust, I find compile times faster and more manageable than other languages due to the speed of iterative compiles. Compiling from scratch is very slow, but iterative compiles are faster than most of my golang compiles and faster than running a JS builder in most projects. To make it extra fast, I follow the instructions from the bevy game engine[1]. With that setup, the feedback loop is quick.
> iterative compiles are faster than most of my golang compiles
Maybe you're comparing apples to oranges here. I've worked professionally with both Rust and Go for years now on a variety of real world projects, and I've never seen a similarly-sized Go codebase that compiles slower than a Rust one. If you're comparing incremental Rust compilation to first-time Go compilation, maybe they could be competitive, but... Rust is incredibly slow at compilation, even incremental compilation.
Yes, using lld can speed up Rust compilation because a lot of the time is often spent in the linker stage, but... that's not enough to make it as fast as Go.
YMMV, of course, but... my anecdotal experience would consider it disingenuous to say that Rust compile times are an advantage compared to Go, and I'm skeptical that Rust compile times are even an advantage compared to the notoriously slow webpack environments.
Rust is good at many things, but compilation speed is not one of them. Not even close, sadly. "cargo check" is tolerable most of the time, but since you can't run your tests that way, that's not actually compilation.
Possibly, when working with big codebases I'm typically working on Kubernetes controllers in both Golang and Rust. That makes for extra slow golang compiles, and the rust incremental compiles are significantly quicker in comparison. Otherwise the codebases tend to be quite small, and for those golang full compiles (the only option?) and rust incremental compiles are similarly nearly instant.
It is absolutely apples to oranges, but if you just care about the everyday local workflow and ability to iterate and test, it's close enough most of the time to not be much of a problem in either case.
I'm sure this experience isn't guaranteed for all codebases, and it certainly helps that I make heavy use of crates, which would minimize the work required during incremental compilation. Though I'm not actively going out of my way to optimize for incremental compilation really, beyond the config linked above.
Kubernetes is, unfortunately, not typical Go code. It uses a Makefile, which is never encouraged in Go, and it has a bunch of custom build scripts managed by that Makefile. I was looking at the Kubernetes issue tracker for other reasons, and came across one ticket open right now that mentions "fix excessive `go list` use in build scripts causing extremely long build times", so they know their build times are awful compared to what they should be. They don't even use Go Modules in the main Kubernetes code base, relying instead on $GOPATH which has been soft deprecated for years now, and hard deprecated possibly by the end of this year.
Plus, Kubernetes is sitting at 5 million lines of Go code, by my count. Try compiling a 5 million LoC Rust code base... I won't wait around.
I would assume that a controller written in Rust bypasses all the complex legacy of the Kubernetes code base, and that's why it can compile faster. If someone made a similar project in Go[0] to write Kubernetes controllers in Go without depending on the mega-Kubernetes code base, I'm sure it would be incredibly faster at compiling than the Rust version.
Beyond that, I've heard that the Kubernetes codebase is internally just a nightmare of basically untyped `interface{}` stuff floating around everywhere, which would make the development experience subpar. I don't know how much this is exposed to custom controllers.
So, if Kubernetes is your only experience with Go... I'm sorry you've had to experience that. It's a product that people seem to agree is functional and works most of the time, but I can't remember hearing any positive experience from people working on it. From what I understand, it was originally prototyped in Java, and then hastily rewritten into Go before public release, and I'm sure that didn't help things.
About on par with C/C++ and Go. In that you don't have one and don't want for one. REPL driven development is difficult with languages like Rust, both to implement and use.
I think there are some projects floating around out there, but I personally don't see a purpose for one. It's not python or matlab.
The latest TWIR development summary mentions evcxr, a notebook-based environment for Rust that's currently being worked on, link https://blog.abor.dev/p/evcxr
Notebooks work better than raw REPLs for a language that's so heavily based on static typing, but they're idiomatically quite similar.
Clean compiles do take a moment. Comparable to heavily templated C++ in my experience.
On the other hand iterative development with rust analyzer going and all the dependencies already built is pretty painless. By the time you run your tests or program, it's likely to take only a few seconds to build.
That said, you can write Rust that takes terribly long to compile. Usually there are trades you can make to weigh compile time as more important than flexibility or static code paths.
I've written a sizeable ad server in rust with maybe 25kloc. Where I was the sole developer and sys ops person, it cost me a little upfront time but saved me many more hours in operations work.
I guess, as a "dynamic programming languages" fan in general, "only a few seconds to build" already sounds like a lot. Different tradeoffs I guess, but hard to justify when you're used to ms compile times.
> How does a normal Rust developers development environment look like?
In my experience: you write some code, and rust-analyzer (which is easily embedded in an IDE like VSCode or IntelliJ, which I think has its own "analyzer") gives you immediate feedback, so you know right away whether things compile. (There are a few edge cases the analyzer might miss, so occasionally something fails to compile when you actually run "rustc", but it's pretty rare.) You then write a little test, and Rust gives you many ways to do that: unit tests right in the same file as the code being tested, integration tests which let you use the code as if from another crate, and even doctests, which are like unit tests but embedded in the documentation of your code. Running the tests is normally a matter of pressing a button and waiting a few seconds (compilation + test runtime), unless you change dependencies between runs, as that requires downloading and compiling your code AND the dependencies, which can be very slow (dozens of seconds). That's the same problem as with a fresh build, which will almost certainly run into the minutes because of the necessary local compilation of all dependencies. But the experience is not very different from something like Java or Kotlin IMO (though definitely a slower cycle than Go, for example).
I'm surprised to hear that it was so slow on a powerful machine. In my experience throwing more compute at a codebase can make compile times very, very fast, though of course only after the first run. It parallelizes pretty damn well.
Building an ecosystem on top of an advanced, strictly typed language also has an initial hurdle that requires a lot of effort to overcome. Potential tool developers must go through the effort of becoming familiar with the language and seeing the potential for, as well as the path to, major improvements.
But once they're there, the result is extremely advanced tooling that other language ecosystems have taken years to develop. IDEs, debuggers, static analyzers, superoptimizers, fuzzers, verification tools, build systems, language interop adapters, and code generators are all examples of things that can take advantage of advanced type systems to develop strong capabilities extremely easily.
I think Rust is starting to show signs of such benefits, and I think in 10 years we'll all be looking back at it with surprise that anyone ever doubted it.
In Rust, a function definition left-hand-side looks like an annotated pattern, e.g.
foo(x : int)
Therefore, one would expect to annotate the return type as,
foo(x : int) : string
since the pattern shows foo applied to x. The Rust syntax is actually confusing for both Haskell/ML programmers (where the arrow comes from) and mainstream programmers. It's too small an issue to change now though.
Rust's support for proper "algebraic data types" is very good and gives it an advantage over languages like C++. However there are some small surprises, such as forcing all enum constructors/fields to be public (one must therefore wrap it to make an abstract data type).
Every language has its warts and these are particularly minor ones.
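A sketch of that wrapping trick (the names here are made up): since enum variants and their fields are always as visible as the enum itself, you hide the enum behind a struct with a private field to get an abstract data type.

```rust
mod shapes {
    // The variants and fields can't be made private on the enum itself...
    enum Kind {
        Circle { radius: f64 },
        Square { side: f64 },
    }

    // ...so wrap it: the field is private, callers can't match on Kind.
    pub struct Shape {
        kind: Kind,
    }

    impl Shape {
        pub fn circle(radius: f64) -> Shape {
            Shape { kind: Kind::Circle { radius } }
        }
        pub fn square(side: f64) -> Shape {
            Shape { kind: Kind::Square { side } }
        }
        pub fn area(&self) -> f64 {
            match self.kind {
                Kind::Circle { radius } => std::f64::consts::PI * radius * radius,
                Kind::Square { side } => side * side,
            }
        }
    }
}

fn main() {
    let s = shapes::Shape::square(3.0);
    assert_eq!(s.area(), 9.0);
}
```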
That would imply that "foo(x: int)" is a string rather than a function.
Haskell doesn't use that notation either; it uses -> both for the parameter list and for the return type, and omits argument names (arguably poor choices, since currying is neither an efficient CPU-native operation nor intuitive, so distinguishing between multiple arguments and returning closures is useful, and argument names are useful for documentation).
> That would imply that "foo(x: int)" is a string rather than a function
But foo(x : int) is a string! It literally reads "foo applied to x". In the function definition, it appears to be used as a left-hand-side pattern which is "matched". The definition is written as if to say, whenever the term foo(x) is encountered, use this definition here. At least, that was my expectation.
> Haskell doesn't use that notation either
OCaml does and Haskell once had a proposal to add it. Haskell type signatures are normally written separately, but it does support annotating patterns with the right extensions.
No, foo(x: int) is not a string; it's not even an expression, it's not even an AST node. It's a fragment of the larger AST node
fn foo(x: int) -> ReturnType {
body
}
The AST here splits into
Function {
name: foo
signature: (x: int) -> ReturnType
body: body
}
I.e. the arrow binary op binds more tightly than the adjacency between foo and x: int. And the type of foo is a function, not a string.
A "better" way to write this (in that it breaks down the syntax into the order it is best understood) might be
static foo: (Int -> ReturnType) = {
let x = arg0;
body
}
Or to put it another way: reading foo(x: int) as "foo applied to x" in this case is a mistake, because that's not how things bind. You should read it as "foo is a (function that takes Int to String)". It's a syntactic coincidence that foo and x are beside each other, nothing more.
Ya, I'm not really going to defend the current syntax past "function syntax is hard".
It's mixing up assigning a global variable, specifying that variable's type, and destructuring an argument list into individual arguments, all in one line. I've played at making my own language, and this is one part that I've never been satisfied with.
Personally I'd probably at least go with a `foo = <anonymous function>` syntax to split out the assigning part. But that's spending "strangeness budget" because that's not how C/Python/Java do it, and I can understand the decision to not spend that budget here...
I don't think that would be the implication, because we are in the context of a function. Otherwise, we should also not write foo(x: int) but foo: int -> ?
I wouldn't say this is a wart or confusing (to me at least). As someone who uses Haskell, C++, and Rust regularly, I just accept that each language has its own syntax. It's true that Rust borrows ideas from many languages, but I view Rust's syntax as its own thing, and the meaning of the symbols are what they are. It doesn't have to do things the C++ way or the Haskell way. It does things the Rust way, and that's not a wart.
Having used both Haskell and mainstream programming languages, I did not at all think that was confusing. The type of "fn foo(x: int) -> string" is quite obviously "fn(x: int) -> string" for people coming from languages like C. I do not see how a colon would make anything more clear. Imagine the function "fn bar(x: fn(x: int) -> string)", would that be more clear with a colon?
On the other hand the enum thing is certainly surprising.
> Imagine the function "fn bar(x: fn(x: int) -> string)", would that be more clear with a colon?
In your example, why bother naming the inner "x" variable for the function param? It cannot be used on the right-hand-side (the definition of "bar"). For that reason, the notation is not exactly "clear". In OCaml the annotation would be something like:
let bar (x : int -> string) = ...
OCaml's syntax is more consistent, I agree, but its colon operator has different precedence than in Rust, so I am not sure its rationale applies to Rust.
This is true but Python has a user community which is several orders of magnitude broader, which confuses the issue a bit. These days if I was designing a language I'd probably ask “How would I explain this to someone who learned JavaScript/Python?” since even if you have great reasons for doing things differently it's a pretty reasonable way to predict sources of confusion for newcomers.
> A little appreciated fact: Rust was largely built by students, and many of them interned at Mozilla.
So now companies make billions off of the software written in Rust; has even one of those students become a millionaire?
Companies that appropriate such projects should start paying their fair share to people who made it possible for them to make such profits.
Yes workers are exploited, but it's also harder to create value when you insist on capturing it.
This is arguably a big idea of free software: trying to jealously hoard the value of ideas that are naturally freely copied and shared is just plain inefficient.
So yes, in this case it would be nice if these interns got a big payout, but if they did, then the megacorps wouldn't bother using Rust, because their plan is already to just outbid all the non-monopolists for workers rather than actually be productive with their workforce. And C++ and whatever else are already free, so Rust has to compete with those.
The only way to make things more fair is just give up on meritocratic value capture, and just do a big tax and big UBI, so just as free software is free to use by all, some of the value created in the use of free software is also freely shared by all.
I don't know what your definition of corporate slave is, but you can certainly become wealthy off a salary.
Just limiting to software engineering (there's other lucrative fields out there) you can easily make a six-figure income remotely. This gives you the freedom to live somewhere with very low cost of living. It's not hard to build wealth this way.
You could debate that getting the necessary skills to get a job like this is harder than it should be.. that corporations themselves are broken.. whatever your worldview is that's fine but "cannot get wealthy off salary" is just plain false.
> Under the current framework, you cannot get wealthy off of salary.
You can make hundreds of thousands of dollars a year as an engineer, without even talking about stock. I'd call that wealthy. If you're an engineer of some stature, 7 figures TC yearly is not out of reach.
This is true, but the decision was made far back enough that other design decisions would have been weighed with consistency with the short behaviour in mind had that been the option chosen, and Rust would work slightly differently than how it currently works.
This is in the era of a new version having patch notes like (paraphrased) "Of the four pointer types with special syntax, we've converted two of them to regular types with names and deleted the fourth" - it predated that much concern with backwards compatibility.
A relevant period of history here is the mutpocalypse, where the question was raised “why are we calling this &mut and teaching it as being about mutability when what we actually care about (for memory safety) is uniqueness and aliasing?” Had that side prevailed, &mut would have been renamed (the main candidates presented were &uniq and &only), and all bindings would have become mutable (`let mut` would have disappeared), because that concept wouldn’t make a great deal of sense any more. These are the only technical changes that it would have entailed (the biggest change would have been in pedagogy).
In the end, social factors and inertia trumped technical precision and consistency. I am strongly inclined to think this was a mistake. Mutability is easier to explain initially, but teaches an incorrect mental model that hinders a correct understanding of Rust’s approach to memory safety, falling apart as soon as you touch things like atomics, RefCell or Mutex, which mutate self despite taking it as &self, a supposedly immutable reference.
> Mutability is easier to explain initially, but teaches an incorrect mental model that hinders a correct understanding of Rust’s approach to memory safety, falling apart as soon as you touch things like atomics, RefCell or Mutex.
Could you elaborate or link to some material / discussion regarding that? I'd be very interested in how the alternative approach you're describing would change (or make unnecessary) constructs like RefCell.
The changes I described for the mutpocalypse vision are basically all there is to it, because &mut has always been a misnomer, never about being a mutable reference as its name implies, but rather about being a unique reference: nothing else holds a reference while you have that one. The main thing the mutpocalypse sought to achieve was to adjust Rust’s syntax to match its semantics (including removing mutability tracking on bindings, because that wasn’t part of Rust’s necessary semantics, and wouldn’t make as much sense after the trivial syntax change).
The thing I’m noting about atomics, RefCell and Mutex is how they have methods that (safely) mutate themselves, despite taking &self, a supposedly immutable reference.
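A minimal illustration of that point (the helper names here are made up): both methods below take &self, the supposedly immutable reference, yet observably mutate the value behind it.

```rust
use std::cell::RefCell;
use std::sync::atomic::{AtomicU32, Ordering};

// RefCell::borrow_mut() takes &self, but hands back mutable access.
fn bump_cell(counter: &RefCell<u32>) {
    *counter.borrow_mut() += 1;
}

// Atomics mutate directly through &self: fetch_add() takes &self.
fn bump_atomic(counter: &AtomicU32) {
    counter.fetch_add(1, Ordering::SeqCst);
}

fn main() {
    let cell = RefCell::new(0);
    bump_cell(&cell);
    assert_eq!(*cell.borrow(), 1);

    let atomic = AtomicU32::new(0);
    bump_atomic(&atomic);
    assert_eq!(atomic.load(Ordering::SeqCst), 1);
}
```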
(I’ve modified the final sentence in my original comment to clarify this.)
I feel like making one more note. I said that “mutable references” was a misnomer and that it’s actually about unique references, but even that’s a bit of a misnomer, because it’s not quite about uniqueness, but uniqueness of access. You can have multiple &mut borrows to the same thing, but only one of them is accessible at any given time:
let mut x = 1;
let y = &mut x;
let z = &mut *y;
*z += 1;
*y += 1;
assert_eq!(x, 3);
z and y both point to x, but only one is accessible at any point in time. Touching y finishes the z borrow; if you swapped the increment lines, it wouldn’t compile.
My memory is fuzzy (this was quite some years back), but I have a vague feeling that this lack of precision in the use of the word “unique” was a factor in some baulking at the proposed change. (“You’re trying to fix something that we admit is strictly wrong, but you’re not even making it right!”)
The pre-NLL version would need some additional scopes. In some ways the current borrow-checker is a lot friendlier (there are also some things possible today that weren't before) but it was also a simpler time, where one could easily imagine the various lifetimes. Getting started with the language was harder, but I think internalizing the borrow-checker was easier, because the rules were simpler and you were forced to learn them for anything more complex than 'hello world'.
let mut x = 1;
{
let y = &mut x;
{
let z = &mut *y;
*z += 1;
}
*y += 1;
}
assert_eq!(x, 3);
> Getting started with the language was harder, but I think internalizing the borrow-checker was easier,
So here's a funny thing: depending on what you mean, I don't think this is actually true. Let me explain.
The shortest way of explaining lexical lifetimes vs NLL is "NLL is based on a control-flow graph, lexical lifetimes are based on lexical scope." CFGs feel more complex, and the implementation certainly is. So a lot of people position this as "NLL is harder to understand."
But that assumes programmers think in the simple way. I think one of Rust's under-appreciated contributions is that programmers intuitively understand control flow graphs better than we may think, and may not intuitively understand lexical scope. Sure, by some metric, NLL may be "more complex" but practically, people only report it being easier to learn and do what they would naturally expect.
Hey Steve :-) I've been following and using Rust since early 2013 (so starting around the same time you did, when I compare our contributions to the compiler) and back then I definitely did not find the lexical lifetimes hard to understand. I remember also noticing an increase in "dumb" lifetime-related questions after NLL landed, seemingly caused by a lack of understanding of how they work.
Perhaps it's all just confirmation bias on my end, but I think truly understanding lifetimes was easier the way I learned it back then. That said, I have never bothered to write documentation for the Rust project, whereas few Rust contributors can claim to be in the same league as you in that area. We probably have very different perspectives.
Oh totally, I know you :) It's interesting how our perceptions are different though, I think a lot of the "dumb" questions went away since NLL. I wonder if there's a way to quantify this.
So, I think this is the thing: I also think that it was easier to learn lexically, personally. I too was worried that it would make things harder. But, I just don't think that's been demonstrated to be true across most people. It is, however, only my gut-check feeling for what I've seen. I could be wrong.
(And, the borrowcheck documentation is very minimal, and didn't really change with NLL, other than needing to add some println!s to make things not compile again. So it's certainly not because I wrote some amazing docs and explained it in a good way, hehe.)
I started in mid-2013 too, and my position is similar to yours. I'd characterise it like this: lexical lifetimes are easier to grok, but NLL turns out to be more practical, doing what people actually want (where LL didn't) enough that it overcomes the greater conceptual complexity, because you have to actually think about the concepts less often.
> falling apart as soon as you touch things like atomics, RefCell or Mutex.
Does it really fall apart? None of these constructs change the fact that if a &mut T is available, you can call functions on that type that take &mut self. The only thing that breaks is the assumption that non-presence of mut implies that there is no way to call those &mut self functions. If you change it to "non-presence of mut implies that you need extra constructs like atomics, RefCell or Mutexes in order to get mutable access" it works out again.
The mental model of & being immutable and &mut being mutable, that’s what falls apart. Here are types that are taking &, and yet are mutating themselves.
This becomes a huge deal when people think that they have an immutable data structure just because they have an & reference to it. Someone will inevitably stash a mutex or similar in there because they just need to be able to mutate it this once… I mean these three times… I mean all the time. Oh, you wanted this immutable data structure so you could diff it for your VDOM or incremental calculations or whatever? Heh, guess you’ll have to find some other way or structure things so that mutations can be propagated through the object tree or something. And you’ll keep on having cache issues from time to time when people forget to jump through the right hoops. Sorry about that.
So yeah, the mental model of &/&mut references being about mutability is just completely and harmfully wrong. Rust does not have a way of guaranteeing immutability.
> The mental model of & being immutable and &mut being mutable, that’s what falls apart. Here are types that are taking &, and yet are mutating themselves.
&mut still implies mutability, and even the constructs like Mutex or RefCell still expose their mutability support via &mut references. Only atomics don't (they don't provide any &mut access). Note that you don't add const like in C/C++, you add &mut.
As for the VDOM caching thing, you have a point. But even if Rust had &uniq and &shared pointers, the challenge would be the same. Users might still use constructs like RefCell.
Mutex::lock() takes &self. RefCell::borrow_mut() takes &self. Those (and try_ variants) are the main ways people will use those types to gain access to their interior mutability.
Sure, both do have get_mut() which takes &mut self (though these came after 1.0), but that’s seldom how you’ll actually access it.
> But even if Rust had &uniq and &shared pointers, the challenge would be the same.
Remember again that the mutpocalypse wasn’t ever about changing the semantics, just about fixing incorrect labelling. I’m not saying the challenges would be different, just that &mut is false advertising, giving people the impression that & means an immutable reference, when it just doesn’t. &/&mut misleads people into thinking they can have immutable data structures, but you’d actually need an OIBIT (like Send) to achieve that.
> Mutex::lock() takes &self. RefCell::borrow_mut() takes &self. Those (and try_ variants) are the main ways people will use those types to gain access to their interior mutability.
Yeah they take &self, but that wasn't my point. My point was what they expose, which is MutexGuard in the case of Mutex, and if you want to modify the interior, you'll likely use the DerefMut impl which returns a &mut T. Same goes for RefCell::borrow_mut().
My point was that you don't mutate a &self reference directly in these instances, but turn a &self reference into a &mut self reference and then mutate. Again, atomics are an exception.
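A small sketch of that distinction: lock() takes &self, but the mutation itself happens through the &mut that the guard's DerefMut impl hands out.

```rust
use std::sync::Mutex;

fn main() {
    let m = Mutex::new(vec![1, 2]);

    // lock() takes &self...
    let mut guard = m.lock().unwrap();

    // ...but push() goes through the MutexGuard's DerefMut impl,
    // which yields a &mut Vec<i32>, so the actual mutation
    // happens through a &mut reference after all.
    guard.push(3);
    drop(guard);

    assert_eq!(*m.lock().unwrap(), vec![1, 2, 3]);
}
```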
> just that &mut is false advertising
it is not false advertising, it implies mutable access. Had Rust been mut by default and had &T been &const T instead, I would agree with the false advertisement point. But thankfully it's non-mut by default.
Anyways, even if &mut were false advertising, and it were a bump in the learning curve, note that &uniq is such a bump too, because you have no idea what it says or implies initially. &mut makes it clear that if you want to mutate, you'll likely need a &mut somewhere. But &uniq does not make this obvious, so it forces you to learn something early on in coming to grips with Rust, while &mut lets you defer that learning until later. Rust has generally been criticized for having a steep learning curve at the start. I think learning Rust is worth it in general, but this change wouldn't make that any better, only worse.
RefCell allows you to get a mutable reference via an allegedly-immutable reference. That it goes via a guard type is quite immaterial. It breaks the model.
&mut implies & is immutable.
Rust is teaching an incorrect mental model, which I believe is the wrong mental model to teach. Sure, &uniq is not devoid of problems, because most of the time &mut will suggest the right thing for beginners. But the spelling &mut is categorically wrong at the fundamental technical level. It was never actually about mutability.
I agree that thinking about it in terms of mutability is a bit misleading, but I am not sure how your idea would fix the issues with RefCell and Mutex. Additionally, I think some code is clearer with enforced single assignment in the syntax.
Nitpick: "expensive" may be the wrong word there, to me that implies some kind of runtime cost. The stated reason for the keyword is that it forces users to think a little bit more about mutability. You may want do this because safe mutability requires exclusive ownership.
But that is cross-language, which is not ideal but not an internal problem if you work in one code base. (O)Caml also has `let` and is quite a bit older than JS.
C# uses “var” to mean “infer the type”, nothing to do with mutability or not.
“Let” feels like some in-joke, coming from a math via lisp heritage of “let k be any number...” to distinguish it in English from the surrounding writing. I have to guess that in early lisp (= k 5) would be an error as k isn’t defined and (k 5) errors because k isn’t a function and (let (k 5)) comes out of need for a binding function and why not “let”.
In c-like languages “k=5” is an established binding that both programmers and compilers can deal with. What does “let k=5;” add to Rust over “k=5;” ?
From the perspective of "language designers influencing programmers", the difficulty of auto-completing to `let mut` could be seen as something positive.
I think this would have been more true 10 years ago when Haskell and Lisp were the most likely place for people to have encountered "let".
Now? It's in Javascript in its ES6 evolution and related languages, Swift, Rust, and probably other "newer" languages I'm not aware of. Scala had it ten years ago also, but is more mainstream than it was then.
When assigning to val more than once: in a statically checked language the compiler will complain, and devs default to 'val' anyway. In a dynamic language you'd quickly get a runtime error.
Using 'var' instead of 'val' where possible: the compiler/linter will tell you this.
I think their reasoning for what gets shortened is that if it's something you'll be writing a lot, it's nicer to have it short. So pub, fn, mut, etc. are shortened, but return isn't.
The impression left by this article is that Bjarne Stroustrup is making a big mistake by continuing his work on C++, Nim / Zig / D are UFOlogists, and who needs Racket / CL / Haskell anyway. Java could be found in ancient Egypt being served to mummies.
[1]: http://lambda-the-ultimate.org/user/825/track?sort=desc&orde...
[2]: http://calculist.org/
[3]: https://www.thefeedbackloop.xyz/