> Mozilla used Rust to build Stylo, the CSS engine in Firefox (replacing approximately 160,000 lines of C++ with 85,000 lines of Rust).
I would have loved to see two competing teams rewrite it, one in C++ and the other in Rust.
Saying that the rewrite is better does nothing to say WHY it's better. Refactored code is almost always radically better, even when it's in the same language, for reasons that might be insulting to explain on HN.
Point is, this is not an apples-to-apples comparison.
(I'm not arguing Rust isn't better. Nor that cutting your code base nearly in half isn't impressive. Simply that this single metric on their part seems like weak sauce.)
Mozilla already tried re-writing the CSS engine in C++ before. They failed. C++ simply wasn't usable for writing a parallel CSS engine.
"This top-down structure is ripe for parallelism; however, since styling is a complex process, it’s hard to get right. Mozilla made two previous attempts to parallelize its style system in C++, and both of them failed. But Rust’s fearless concurrency has made parallelism practical!"
If it could be written in Rust then it could absolutely be rewritten in C++ as well, especially if it was already written in C. That said, I find doing concurrent/async programming much more enjoyable and quicker in Rust. Too bad it's not as widespread as C++.
So you are telling me that a parallel CSS engine can't be usable if written in C++ but large scale particle/molecule simulators, AAA game engines and other high-fidelity simulator environments are 100% C++ codebases?
Having worked in a AAA engine, with some of the things I saw done to make the performance tradeoffs I wouldn't want that code style anywhere near a security sensitive domain like the browser.
Typically simulators don't benefit much from extremely fine-grained parallelism; while you likely can simulate each entity's behavior in parallel and then step the world, you don't have fine-grained dependencies within one step (and likely many entities' computations have predictable and balanced computational costs, further simplifying parallelism). AAA games were certainly until recently renowned for their poor usage of parallelism, though perhaps there are exceptions - but remember, large-scale data parallelism akin to simulations isn't the problem; it's problematic once there are dependency cycles and/or parallelism needs to be particularly fine-grained.
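To make that concrete, here is a minimal sketch of the "easy" kind of simulator parallelism described above, assuming the rayon crate (the Entity type is made up for illustration):

    use rayon::prelude::*;

    struct Entity { pos: f32, vel: f32 }

    // One world step: entities have no dependencies on each other within
    // a step, so the work splits cleanly across cores.
    fn step(entities: &mut [Entity], dt: f32) {
        entities.par_iter_mut().for_each(|e| {
            e.pos += e.vel * dt;
        });
    }

Styling a DOM is not like this: the work items are small, irregular, and tied together by parent/child dependencies.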
Finally, many games and simulators are written in clearly sub-"optimal" fashion. Games are about fun, not necessarily about extracting the maximum possible performance - and compared to browsers, even to minor players like Firefox, even large games have limited usage and especially limited lifetime usage, and they're likely less sensitive to security issues too. As a result, when a game's usage of parallelism has a few very unlikely race conditions, that might not be a show-stopper - whereas it could be an exploitable flaw in a browser.
All in all, it's extremely plausible that games and sims aren't quite as dramatically impacted by C++'s risks as a browser's styling engine is. It's perhaps no coincidence that Rust was pretty much designed for exactly that use case.
When the time cost of getting it right and _keeping_ it right exceeds the amount of time actually available in a day, the result is equivalent to "the language is not up to the task".
I mean, you can implement anything you want in assembly, in theory. In practice, the cost of iteration tends to be high enough that implementing things with somewhat rapidly changing requirements (which includes CSS, to be clear!) pretty much requires a language in which iteration and refactoring can be done more quickly than in assembly.
Maybe the confusion on your part is that you think "CSS" is a static set of requirements, so if you put in enough effort to implement it once you're then done? Unfortunately, that's not how it works in practice.
The fact that we disagree does not imply "confusion on my part".
> When the time cost of getting it right and _keeping_ it right exceeds the amount of time actually available in a day, the result is equivalent to "the language is not up to the task".
Again, this is referring to the particular approach that the Mozilla team took to the problem. If anything, one of the most valid criticisms of C++ is that the API is too large, and it has too many tools to work with at the expense of clarity. C++ allows you to express just about anything, and I do not think the standard of evidence has been met that the problem of writing a CSS engine could not be expressed in C++ in a way which would be both performant and maintainable.
C++ has been used for many complex, high-profile projects which are widely used. There is nothing magical about CSS which creates problems C++ is incapable of solving.
In the limit, you could implement a DSL in C++ that would allow you to implement CSS.
Or you could implement Rust in C++.
As you say, you can do whatever you want in C++. The only question is how much time and effort it would take and whether it's worth it.
I will note that the people who worked on the Firefox style system had plenty of experience working on complex, high-profile C++ projects. Every single one of them that I've talked to agreed that for the specific thing they were doing here Rust allowed much faster development of code with fewer bugs (especially as measured by how many issues the fuzzers found) than their past experience with C++ had been.
If your point is that the claim should be "parallelizing the style system in C++ was impossible within the schedule and manpower constraints Firefox was operating under" rather than the simpler "was impossible" claim, then sure, as a purely logical-statement matter. But in practice there are always schedule and manpower constraints.
Can I prove mathematically that some other team would not have been able to achieve the same results in the same amount of time with C++? No, I can't; such a proof would be quite difficult to construct. I do have the empirical observation that there used to be at least four fairly different modern non-parallel CSS implementations out there written in C++ (Gecko, WebKit, Blink, Edge), and that one of them has disappeared (Edge), one was parallelized in Rust, and the remaining two remain in C++ and non-parallel, even though people generally agree that parallelizing them would be good and so forth.
Maybe it's just that none of the organizations involved are any good at writing C++ code. Maybe it's that doing this in C++ is hard enough that it's not worth the effort. Maybe it's something else. What is your hypothesis on the matter?
> parallelizing the style system in C++ was impossible within the schedule and manpower constraints Firefox was operating under
So my main issue with this claim is that we are talking about proving the negative. Not to get too pedantic, but the reasoning seems to be:
1. Mozilla tried and failed to build a parallel CSS engine in C++
2. Mozilla succeeded in building a parallel CSS engine in Rust
3. Therefore, it is impossible to implement a parallel CSS engine in C++ (under the constraints Mozilla was under).
This is simply not a valid argument. Evidence that something did not happen is not proof that it could not happen.
There are plenty of claims I would accept:
- The Mozilla team found Rust much better than C++ for solving their problem
- Many of the problems the team was running into with C++ were completely eliminated by the constraints provided by Rust
But saying this problem is impossible to solve in C++ is an extraordinary claim which needs extraordinary evidence. C++ is an extremely flexible and powerful language, to say that no one could ever design a solution to this problem using C++ requires a very limited imagination.
> there used to be at least four fairly different modern non-parallel CSS implementations out there written in C++ (Gecko, WebKit, Blink, Edge), and that one of them has disappeared (Edge), one was parallelized in Rust, and the remaining two remain in C++ and non-parallel, even though people generally agree that parallelizing them would be good and so forth.
This is a very small sample size. If I have to give a hypothesis, I can imagine that having a very large, very mature code base like Firefox would have placed a lot of design constraints on any rewrite of the CSS engine. I can imagine that working within this framework, there may have been many problems related to concurrency and data ownership which ended up eating a lot of time and energy in C++, and the team saw a huge productivity boost categorically eliminating these problems. But I simply do not believe it's the case that it's not possible to decompose the problems a CSS engine has to solve into a set of concurrent data structures and operations, which could be expressible in either Rust or C++.
That's all fair, and your hypothesis is pretty much spot-on in terms of what happened. The huge productivity boost was basically key to the success of the project.
I'm not sure whether the argument now comes down to "it would have been possible to gain that sort of productivity boost in C++ too, with the right design" or "it would have been possible to implement a new CSS engine without this sort of productivity boost". If it's the former, then you're right that one can't prove a negative. If it's the latter, then I think that's where the "schedule and manpower constraints" come in.
Parallelism is hard in both Rust and C++. When I think about low-level parallelism, like in the case of rendering, I believe that Safe Rust will only get you so far performance-wise. To get the best speed you will still need unsafe. I'm wondering how their engine fares compared to Chrome.
It's entirely a myth that unsafe is needed to get speed boosts in Rust. Usually the standard library, as well as other crates, offers optimal performance without resorting to unsafe code.
(If we go pedantic, Vec and other primitives do rely on unsafe, but the point is as an application developer you don't have to write unsafe code yourself.)
It's not entirely a myth. There are situations where you can't get the compiler to emit optimal code using only safe Rust. You can go a very long way with only safe Rust (and I do!), but not everywhere.
There are also situations where you cannot get a C++ compiler to emit optimal code without resorting to intrinsics or inline Assembly.
99% of Rust and C++ code performs well without going that deep.
Unsafe Rust exists for a reason. There are cases where perfectly safe code cannot be expressed according to Rust's ownership rules, and an escape hatch is needed. Of course the "average" developer probably should not resort to unsafe as a rule of thumb.
Also regarding the standard library, as far as I understand it's not entirely true that it never resorts to unsafe rust. For example, I understand that the standard library makes use of specialization, which remains an unstable feature because of a soundness hole in the implementation.
> Also regarding the standard library, as far as I understand it's not entirely true that it never resorts to unsafe rust.
I think you're misunderstanding the parent comment. The stdlib constantly resorts to unsafe code. Tons of methods like `split_at_mut` and `make_ascii_uppercase` are just safe wrappers around an unsafe one-liner.
Rather, the observation is that _with the benefit of the stdlib and common crates_, most programs have no performance reason to reach for unsafe in regular code. That's true in my experience.
I agree that most developers don't need to bother with unsafe code, and will not pay a performance penalty for staying within safe rust. My only point is that it's not necessary to be so dogmatic as to say that no-one should ever write a line of unsafe rust.
For instance, if you read the Rust book, they make references to times you might want to use unsafe:
> Borrowing different parts of a slice is fundamentally okay because the two slices aren’t overlapping, but Rust isn’t smart enough to know this. When we know code is okay, but Rust doesn’t, it’s time to reach for unsafe code.
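That passage is about `split_at_mut`, mentioned upthread; a simplified version of how the stdlib implements it (essentially the Rust book's own sketch) shows the pattern of a safe API over an unsafe body:

    use std::slice;

    // Safe API, unsafe body: the borrow checker can't see that the two
    // halves don't overlap, but the assert plus the pointer arithmetic
    // guarantee it.
    fn split_at_mut(values: &mut [i32], mid: usize) -> (&mut [i32], &mut [i32]) {
        let len = values.len();
        let ptr = values.as_mut_ptr();
        assert!(mid <= len);
        unsafe {
            (
                slice::from_raw_parts_mut(ptr, mid),
                slice::from_raw_parts_mut(ptr.add(mid), len - mid),
            )
        }
    }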
In my experience most of the time I had to use unsafe was to bind an unsafe interface (think something like making raw OpenGL bindings for instance; you'll have unsafe code all over the place and you'll have to write your own safe wrappers around it). Performance-wise I very rarely find myself having to compromise, although there are times where I have to write "smarter", more complicated code to get good performance without unsafe.
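As a sketch of that wrapping pattern, here is the same idea using libc's `abs` rather than a GL function so it stays self-contained; raw OpenGL bindings would look the same, just with many more functions:

    extern "C" {
        fn abs(input: i32) -> i32;
    }

    // Every foreign call is unsafe; the wrapper confines that to one
    // place so callers stay in safe Rust.
    fn safe_abs(x: i32) -> i32 {
        unsafe { abs(x) }
    }

    fn main() {
        println!("{}", safe_abs(-3));
    }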
I just looked at the code of a pretty heavily optimized program I've been working on for a few months, I have exactly one instance of unsafe in the code:
I need the unsafe because at the moment the language is not smart enough to understand that 0xf610 is obviously non-zero and that I can build a NonZeroU16 from it without fail (at least I don't know how to express this in a const expression at the moment). This has no performance implications whatsoever.
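Presumably the code is shaped something like the following (a guess for illustration, not the commenter's actual code; at the time `Option::unwrap` couldn't be called in const context, so the provably-infallible construction needed unsafe):

    use std::num::NonZeroU16;

    // 0xf610 is obviously non-zero, but there was no way to say so in a
    // const expression without the unsafe escape hatch.
    const MAGIC: NonZeroU16 = unsafe { NonZeroU16::new_unchecked(0xf610) };

    fn main() {
        println!("{}", MAGIC);
    }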
For that last one: it's much faster in many cases, because Chrome's engine is not parallelized.
Back when I was last measuring this, stylo's single-thread performance was comparable to Chrome's or a bit faster in some cases. Parallelized performance was much better.
That said, actual hardware in the wild has a surprisingly low level of hardware parallelism in practice. https://data.firefox.com/dashboard/hardware shows that for the Firefox user base as of end of Jan 2021 54% of users had 2 cores and 34% had 4 cores.
> That said, actual hardware in the wild has a surprisingly low level of hardware parallelism in practice. https://data.firefox.com/dashboard/hardware shows that for the Firefox user base as of end of Jan 2021 54% of users had 2 cores and 34% had 4 cores.
True, but that's probably about to change soon. Look at the ARM stuff about to enter the arena.
I wouldn't bet on us using just 2-4 cores 10 years from now. And we need to plan for that.
Well, right, that's why the Firefox CSS implementation actually aims to be parallelized and has been for years. I'm just saying that the wins from that on current hardware are not as large as one would hope. And that the increase in number of cores has not been increasing nearly as fast as one would like, in devices that most people actually own.
Yeah, it's a chicken-and-egg problem. Old software doesn't use many cores, and even new software often doesn't, being single-threaded (JavaScript, Python, Ruby, etc.).
And because software doesn't really need many cores, there's no need to create CPUs with many, many cores.
I imagine the cycle's going to break at some point; after all, we can only ignore having many cores for so long. Especially now that even mobile devices routinely have at least 4 cores.
Mobile devices in the field typically have _more_ cores than desktop devices. The Firefox hardware report is strongly influenced by there being a lot more desktop than mobile Firefox users.
As you note, the modal number of cores on mobile has been 4-6 for a while now.
Fwiw, I've seen a number of cases of the other way around: the WebKit/Chrome CSS implementation had some shortcuts for performance reasons that caused them to not follow the spec, where Firefox did.
The link you posted just indicates that Firefox rendered the page faster than Chrome; did it render it _wrong_?
One other note: the linked issue here is layout, not the CSS system per se; this part is not parallelized in Gecko and is still in C++, not in Rust.
Rust is pretty damn close to the fastest C++ version without any unsafe (but using a bunch of generic libraries which may have unsafe code, to be perfectly fair).
Note that the winning C++ entry is also vastly more complex code and I wouldn't want to have to maintain that.
In my experience all of those simulators and game engines have really egregious bugs that go undetected. C++ is fine if you don't need correctness (which is actually most of the time, in practice).
> C++ simply wasn't usable for writing a parallel CSS engine.
I think it can be said that Mozilla was able to use Rust to achieve what they had failed to achieve with C++. It's not justified to claim that C++ is/was not suitable for this purpose.
C++ offers direct, low-level control over memory, and C++ can be used to implement anything which can be implemented in Rust. Maybe Rust has certain advantages which made this problem a lot easier to solve for Mozilla, and I'm certainly happy to have Rust in the language landscape, but another team with other processes may have been able to use C++ to achieve similar goals.
One thing that software developers frequently forget to do is to be humble.
It is entirely possible that the best developers on this planet using the best tools and methodologies, processes, whatever, are just not smart enough to do some things.
Considering how many things we have achieved, we should just acknowledge this human attribute: imperfection.
If our tools require us to be perfect, maybe they're just not good tools for some goals.
I'd be shocked if Mozilla didn't have some of the best C++ programmers on the planet, and if they failed a few times, that's a pretty strong argument against C++.
Who knows, maybe they'll be proven wrong and another team working purely in C++ will achieve goals such as theirs. I wouldn't bet on it (though C++ is evolving, so who knows what C++ will look like in 20-30 years; never say never).
The claim was that it couldn't be done, as in not possible. Not that it's not a lot easier because of Rust's ground-up design for concurrency, maybe even an order or two of magnitude easier. However, to claim it can't be done in C++ at all is silly. To claim it couldn't be done by the team working on it, within the given time frame and resources, is a legitimate claim however.
> C++ offers direct, low-level control over memory, and C++ can be used to implement anything which can be implemented in Rust. Maybe Rust has certain advantages which made this problem a lot easier to solve for Mozilla, and I'm certainly happy to have Rust in the language landscape, but another team with other processes may have been able to use C++ to achieve similar goals.
You could say the same thing about assembly programming or some language that uses GOTOs instead of structured control constructs ... or Brainfuck. Given infinite time and resources, you can implement anything in any language.
Claims about this or that language being unsuitable for a problem aren't really about stuff like Turing completeness, it's more about practical software engineering problems (e.g. does the language provide the right kind of support to the developers allow them to implement the required functionality in a reasonable amount of time with a reasonably small amount of bugs with the budget available).
> You could say the same thing about assembly programming or some language that uses GOTOs instead of structured control constructs ... or Brainfuck.
I think this is a reduction of the argument to the point of absurdity. C++ and Rust are certainly much more similar in terms of form and capability than C++ and brainfuck.
> Claims about this or that language being unsuitable for a problem aren't really about stuff like Turing completeness, it's more about practical software engineering problems (e.g. does the language provide the right kind of support to the developers allow them to implement the required functionality in a reasonable amount of time with a reasonably small amount of bugs with the budget available).
I agree. C++ may very well have been unsuitable for Mozilla at the time they adopted Rust. What I take issue with is the general claim that C++ is not suitable for a parallel CSS engine at all. This is a very strong claim, and I do not believe one team coming to this conclusion for them is enough evidence to conclude this in general.
> C++ and Rust are certainly much more similar in terms of form and capability than C++ and brainfuck.
From the point of view of somebody creating a highly interdependent parallel system, no, it's the other way around. On the features that matter to parallel systems, C++ is much closer to brainfuck than it is to Rust.
> I agree. C++ may very well have been unsuitable for Mozilla at the time they adopted Rust. What I take issue with is the general claim that C++ is not suitable for a parallel CSS engine at all. This is a very strong claim, and I do not believe one team coming to this conclusion for them is enough evidence to conclude this in general.
I know very little about Rust, so take this with a grain of salt. It's possible that writing the CSS engine in C++ would require too many custom classes, containers, and build tools outside of the STL compared to equivalent code in Rust. For example, fast mutexes and mutex safety are not a standard part of C++ but are vital to fast parallel C++ code, and the semantics of mutexes are very hard to get right. Static checkers for specific mutex implementations can decrease the risk of deadlocks and races, but these are additions to C++. If Rust provides primitive mutexes with static checking for correctness or other language features (maybe RefCell or similar) then it might be worth using Rust to stay within the standard language features instead of building add-ons for C++.
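For what it's worth, Rust's standard `Mutex<T>` does roughly this out of the box: the mutex owns the data it guards, so accessing the data without holding the lock is a type error rather than a latent race. A minimal sketch:

    use std::sync::Mutex;

    fn main() {
        let counter = Mutex::new(0u32);
        {
            // The only way to reach the u32 is through the lock guard.
            let mut guard = counter.lock().unwrap();
            *guard += 1;
        } // guard dropped here, lock released
        println!("{}", counter.lock().unwrap());
    }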
A steak knife makes a poor scalpel, but with enough effort you could probably perform a kidney transplant with one. It would just take 5 times as long and the patient would be at serious risk of bleeding out the entire time.
You could in theory write the CSS engine in assembly too. A theoretical team, somewhere, could probably manage it. But given better options, it is not a reasonable thing to do.
You mean Mozilla, the organization that has maintained a 6 million LOC[1] C++ codebase continuously for nearly 20 years?
If they can't pull it off with C++, I don't care who can, C++ is not the right tool for the job and you can file writing such a thing in it as an esoteric programming challenge, together with template Tetris.
- how does the performance of those fare against Stylo?
- the developer of Chrome is so insanely rich that only a tiny, tiny fraction of their budget in practice finances all of Firefox.
Summary: Mozilla manages to make an equally good or better CSS engine with a lower budget?
(I admit Google might be running the Chrome team on a shoestring, but my bet is they don't, as coming into a position where they can kill ad blocking really would be a fantastic strategic advantage for them and would be worth billions a year.)
Saying Rust allowed Mozilla to achieve a better result for less money is a much softer claim than saying C++ is an unsuitable language choice for this problem, which is what I was responding to.
The second is the claim I would dispute. The former may be true.
You are absolutely welcome to dispute it, but the fact of the matter is that an engineering team with the expertise, history and resources to write at least two iterations of a CSS processor did their homework and chose to rewrite it in Rust, realizing significant performance gains across a smaller code-base. The advantage to that team being Mozilla is that a lot of the discussions that led to these implementations are captured in meeting documents, bugzilla tickets and other places, like this blog post explaining why and how they got the gains they did: https://hacks.mozilla.org/2017/08/inside-a-super-fast-css-en...
If you want to dispute the claims, do the research, otherwise it's just your opinion against the actual implementation that is shipping to millions of devices.
No, I am saying there is nothing intrinsic to the problem of a CSS engine that makes it intractable in C++. C++ is the most used language for highly parallel, HPC applications. I don't see why there is anything magical about CSS which would prevent a C++ solution from also existing here.
I'm not arguing that Rust was not the best choice here for Mozilla. I personally would also rather work with Rust than C++ in almost all cases. But to claim that C++ is not usable to solve this problem, based on the fact that Mozilla decided to use Rust instead is not a sound argument.
Yes, but that reads like a put-down for Mozilla. That hypothetically a team of guys earning like half a million a year at a big hedge fund could be recruited to build precisely a parallel CSS engine if they worked on it for a year or two is not a good argument for using C++ for this either.
It's not a put-down to Mozilla at all. I'm sure they chose Rust for very good reasons, and they achieved a great result. I'm not even arguing that C++ should be used for this. I'm just saying it's absurd to claim that C++ could not be used for this if you really wanted to.
I agree that the claim made by the parent might have been a bit too strong and not backed by enough evidence, but saying that C++ can technically do the same thing as Rust is not really getting us anywhere. You can say the same thing for ASM.
Can, in practice, a team write and maintain a modern CSS engine in pure assembly? In theory yes, in practice I expect the overhead to be massive.
I think the key feature of Rust for such an application is "fearless concurrency". Writing concurrent code in C or C++ feels like walking a tightrope; in Rust I know that I can lean on the compiler to tell me when I'm doing something wrong. Sure, you can still have race conditions, but in my experience they're generally pretty simple to debug when they happen, because the type system and borrow checker force you to have very explicit data dependency graphs.
I avoid threads like the plague in C and C++, but in Rust I never hesitate if I feel like spawning a worker might improve performance or make the overall architecture simpler.
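A small illustration of where that confidence comes from, using scoped threads from std (crossbeam offers the same shape on older toolchains): the workers may borrow shared data only because both borrows are read-only, and making one of them mutate `data` would be a compile error rather than a race:

    use std::thread;

    fn main() {
        let data = vec![1, 2, 3, 4];
        thread::scope(|s| {
            // Both workers borrow `data` immutably; the compiler proves
            // they are joined before the borrow ends.
            s.spawn(|| println!("sum: {}", data.iter().sum::<i32>()));
            s.spawn(|| println!("max: {:?}", data.iter().max()));
        });
    }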
C offers direct, low-level control over memory, and C can be used to implement anything which can be implemented in C++. Maybe C++ has certain advantages which made this problem a lot easier to solve for Mozilla, and I'm certainly happy to have C++ in the language landscape, but another team with other processes may have been able to use C to achieve similar goals.
Assembly offers direct, low-level control over memory, and assembly can be used to implement anything which can be implemented in C++. Maybe C++ has certain advantages which made this problem a lot easier to solve for Mozilla, and I'm certainly happy to have C++ in the language landscape, but another team with other processes may have been able to use assembly to achieve similar goals.
The biggest advantage of Rust is that the linear types and the borrow checker force one to structure the application in a way that is much more manageable long-term and less error-prone, even with an occasional sprinkle of unsafe code.
Surely one can code in such a style in C++, but it is very unnatural there, with a lot of boilerplate, so one typically just would not consider it.
You mean "affine types". Linear types are required to be used while Rust allows one to manually drop a type. This allows you to get the next state and do nothing with it while a linear type would compel you to move to the next state.
> Assembly offers direct, low-level control over memory, and assembly can be used to implement anything which can be implemented in C++. Maybe C++ has certain advantages which made this problem a lot easier to solve for Mozilla, and I'm certainly happy to have C++ in the language landscape, but another team with other processes may have been able to use assembly to achieve similar goals.
Any language can be used to implement a program written in any other language, and there is a simple proof for that - every program is a finite automaton and every finite automaton can be implemented using while and switch constructs.
In this case possibly the best use of C++ was to write the LLVM compiler which could then be used to implement a DSL (Rust) that is better suited to the problem at hand.
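The while-and-switch construction is easy to picture; in Rust it's just a loop over a state enum (a toy three-state automaton for illustration):

    enum State { Start, Working, Done }

    fn main() {
        let mut state = State::Start;
        loop {
            state = match state {
                State::Start => State::Working,
                State::Working => State::Done,
                State::Done => break,
            };
        }
        println!("automaton halted");
    }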
Making two teams compete like that is a horrible idea. Both teams know that no matter how hard they work, their work may be thrown away. I've heard horror stories like this, for example in a post-mortem about the development of Sonic 3D for the Saturn back in the day. There were two teams tasked with making a sequel, and only the better one's work would be released. Management's idea was that it was supposed to be a flagship title and had to be good. Both teams worked themselves to exhaustion and almost everyone quit the company; the rest got fired due to the failure.
Even if the above doesn't happen, it is hard to say what you learn from two teams. Maybe one team is worse. Maybe one team got lucky because their serious bug was 1 in a million and never found, while the other spent weeks tracking down a bug of similar complexity that happened every time and so had to be fixed... I can come up with more.
The only fair thing is to give one team 6 months (ideally with a couple of contractors who are experts, to teach them best practices) to try the new thing. Less than 6 months and they are not far enough along the learning curve to make any useful statements. If you want a comparison of work done, you need to give half your teams (minimum 20 teams; ask a statistician for more details if you want to go lower or should go higher) that 6 months, and then after that give them a year to use the new language in real work, and compare that final year.
Totally fair @eska and @bluGill. I really wasn't thinking in terms of "This is the right way to manage a project." I was simply trying to identify some way to do a better and more accurate comparison between Rust and C++.
You're perfectly right that it would be awful to be pitted against each other in this way.
So I recant my suggestion, but maintain my critique about using lines of code in a single project as somehow validating a language choice. There are too many variables. And, more to the point, the refactored code will almost always be much better, regardless of the language selected.
Deathmatch programming doesn't sound very pleasant.
But I hope most people here don't feel a morale hit when their code gets tossed. Inherently, I understand the desire for one's creative work to live on, but that's essentially incompatible with this industry. And I also have found much joy in turning down/deleting existing services I've written. My team celebrates these events and it's always fun to give that high-maintenance service a viking funeral.
It could very much be from the second system effect though. A rewrite in C++ from a competing team would provide a control for that.
As many others have mentioned though, the real kicker is that they did attempt a rewrite twice in C++, aiming for the features of the Rust one; those didn't work out.
> Refactored code is almost always radically better
Sometimes it is, but it is hardly the rule. New issues will always be introduced. Also, some developers get into an endless cycle of unnecessary refactoring just to use the latest and greatest of some library or framework for no real technical reason.
I'm not saying refactoring is always worthwhile (it usually isn't, in my experience anyway).
But the refactor is generally done with the benefit of hindsight of all the things you would have done differently if only you'd known then what you know now.
It's like saying that the second draft or edited manuscript of a novel isn't going to be better than the initial draft. Of course it's better the second time. In theory, you fixed what was wrong with the first one and implemented new and/or improved ideas.
If your refactored code is worse than the original, I feel like you're doing something wrong.
Actually that's a really good question :) . I would be interested to see whether Rust's vaunted security advantages panned out. I know anything can have bugs, but still it would be interesting.
I started following it when he first started working on it back then, but I actually lost (some) interest once it became more about the memory safety features.
It's not that I don't think the borrow checker etc. stuff is really awesome. I just find the original concept very appealing and incremental. Essentially an OCaml for systems level programming.
A sane C++.
I feel like Rust is making its moves slowly into that territory, but that the memory safety pieces have made it more difficult, even if the long-term payoff is better.
"It's very similar to earlier versions of rust. In terms of actor local GC, language provided scheduling of actors, separated function and async invocation system, mixed structural and nominal types, and an overabundance of language supported reference qualifiers. I think they did a good job at it, certainly much better than I did, but I'd be nervous about the cognitive load of the reference qualifiers (capabilities). See this page for example, or the previous few in the tutorial. We had that sort of cognitive load on variables early on and people basically rejected the language because of it, as well as associated compositionality problems. The many years of design iteration had a lot do to with factoring that space into fewer, more essential and general qualifiers. Slow, tedious, world breaking refactoring work. I hope for pony's sake this doesn't happen to them too -- it looks really nice -- but that's my biggest worry for it."
1. Control of memory layout is a big one. OCaml has a very lisp-like object representation, except without a cons-cell special case. If you try to directly translate some OCaml to Rust (and the types are ok), you will gain from having fewer pointer dereferences and likely more cache-friendly, shallow data structures in the standard library (see the sketch at the end of this comment). There are some plans to make the compiler better for layout control (look for "unboxed types")
2. Parallelism. There is currently a Python-like global lock so that only one thread may run OCaml at a time. There is a long-running project (multicore) to fix this
3. (Opinionated) lack of control of mutability: it is easy to have everything immutable or an object with fields that are always mutable, but you can't easily have something like Rust's mut, which only sometimes allows mutation. Some changes to this may come with/after algebraic effects, which come after multicore, but these are mostly about avoiding atomic operations where possible rather than making things const.
4. The compiler just isn’t as good as a c++ compiler and it needs to be better to undo a deeper stack of abstractions. I think it’s particularly bad for not really doing monomorphisation. On the other hand, it’s fast.
5. (Opinionated) it isn’t great for writing generic code, but maybe modular implicits will fix that.
That said, it is possible to write fast systems-level programs in OCaml (you can be careful and avoid allocation in critical sections, or just accept the GC, because for most systems it doesn't matter that much; you can split things into multiple processes if necessary). You could look at Mirage for an example of something big and low-level written in OCaml.
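To make point 1 above concrete, a sketch of what layout control looks like on the Rust side; the OCaml comparison is approximate, since OCaml special-cases flat float arrays, but a `(float * float) array` really does box each element:

    // A Vec of plain structs is one contiguous allocation; there is no
    // per-element pointer to chase.
    #[derive(Clone, Copy)]
    struct Point { x: f64, y: f64 }

    fn main() {
        let pts = vec![Point { x: 0.0, y: 1.0 }; 1024]; // 16 KiB, inline
        let total: f64 = pts.iter().map(|p| p.x + p.y).sum();
        println!("{total}");
    }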
Having taken a stab at getting into that language, I can tell you the package landscape is freakish. Either you get on the Jane Street train, or you slum it with packages from 2003 that don't have support (and probably don't build anymore anyway). Good luck getting things working on Windows. Honestly even Haskell is a better choice in that language space, even though it's getting eaten by Rust (and for good reasons).
I'm sure for some uses it would be fine. But typically the concern is garbage collection. Manual memory allocation and the ability to deal directly with pointers etc. is important for driver and OS development, as well as embedded systems work.
The GIL is not problematic if one uses processes, not threads, for parallelism. And for systems-level programming it may even be the better long-term solution, as one can sandbox child processes.
If you can stomach a GC enabled by default, D is a good attempt at a sane C++. The syntax is instantly familiar to a C++ programmer, and the templates are very similar to C++'s (although saner).
I really feel like D is in a tough spot these days. In the early 2000s I remember being interested in it, but I dismissed it because of the licensing issues that existed back then.
Now it's sorted out but I think it crippled the language early on and now it's sandwiched between other "managed" languages with far better adoption and better support on one side (Java/JVM languages, C#, Go) and low level system language with better adoption and no GC on the other (C++/Rust in particular).
While I personally find D interesting and vastly better designed than some of the competition (cough go cough) it's very hard for me to imagine it getting big in the mainstream without some massive industry backing.
Rust didn't have much bigger adoption until a while ago. It had a better open source presence perhaps, but not necessarily adoption. That has obviously changed now.
I guess the question is, does it really need to be mainstream to be usable? As long as it works for the usecases you need it for, popularity isn't a necessary factor.
I have to say though that D has some issues that prevent its wider adoption, both historical and current. Some are obvious, such as the unnecessary conflict between its two standard libraries (I missed out on it, but the OOP person in me would have preferred Tango over Phobos).
C++ has massive adoption and billions of lines of code in active use. IMO Rust is already superior syntax-wise in almost every area, but C++ is going nowhere any time soon.
I'm sure some C++ codebases will outlive everybody posting in this thread.
I know Mozilla played a big part w.r.t. Rust, but it's strange to see this on the front page... we had this 2 days ago https://foundation.rust-lang.org/posts/2021-02-08-hello-worl..., and now we have a low-effort blog post welcoming the foundation on the front page.
Mozilla also played a big part in this foundation being necessary after it laid off most of its employees involved in Rust last year. The foundation is a good idea of course, but they could have handled it better, e.g. setting up the foundation first, so that the Rust developers could transition to it without the disturbance and bad PR generated by the layoffs? It's not that Mozilla is so cash-strapped that it had to get rid of them immediately (at least not as far as I know)? Or was this the kind of drastic action some parents have to take in order to get their kids to finally move out?
> setting up the foundation first, so that the Rust developers could transition to it
I don't think the Foundation ever plans to employ so many people to work on the language. Most of the people who left Mozilla now continue to work on Rust at Amazon, Microsoft and Facebook. By no means am I defending Mozilla here, just saying that the Foundation wasn't the solution to employ 10+ engineers working full time on the language.
Yeah, of course a Big Tech company would rather employ a developer and allow them to contribute to Rust (and influence its development in the direction of the company they work for) than donate money to the foundation so that it can pay developers. Silly me...
Those companies are also donating to the Foundation in cash and in kind (AWS and Azure hosting).
The developers who have newly joined these companies have years of contributions to the Rust project behind them. I'm 1000% confident that they wouldn't land anything that wouldn't be welcomed by all users. For example, one engineer at Amazon is working on making it easier for Amazon engineers to learn the language by improving the error messages. Those improved error messages benefit everyone.
This also exists in the broader context of tremendous growth in CEO pay over recent decades. There are widespread claims that executive pay is inflated. In one sentence:
> Importantly, rising CEO pay does not reflect rising value of skills, but rather CEOs’ use of their power to set their own pay.
In that context, it's absurd, to put it mildly, that she wrote this in her Jan 2020 layoffs letter:
> You may recall that we expected to be earning revenue in 2019 and 2020 from new subscription products as well as higher revenue from sources outside of search. This did not happen. Our 2019 plan underestimated how long it would take to build and ship new, revenue-generating products. Given that, and all we learned in 2019 about the pace of innovation, we decided to take a more conservative approach to projecting our revenue for 2020. We also agreed to a principle of living within our means, of not spending more than we earn for the foreseeable future.
[NOTE: per the link in the parent comment, "Mozilla's 2019 expenses came to $495.3 million, or almost $5 million more than revenue."]
> This approach is prudent certainly, but challenging practically. In our case, it required difficult decisions with painful results. Regular annual pay increases, bonuses and other costs which increase from year-to-year as well as a continuing need to maintain a separate, substantial innovation fund, meant that we had to look for considerable savings across Mozilla as part of our 2020 planning and budgeting process. This process ultimately led us to the decision to reduce our workforce.
Yeah, (re)reading that reinforces my opinion that Rust is better off being supported by a foundation - provided of course that the foundation is better managed than Mozilla...
Indeed there were also announcements from Microsoft[0], Amazon[1] and Google[2], all of which were posted on the main thread. Are we going to have a new thread for each of them?
Threads on the same subject are often deleted, conversations are moved between threads, and main thread links are changed to better ones. It's about cultivating the conversation, not democracy for democracy's sake.
Thank you for this. While the announcements themselves are welcome, a lot of detail about the actual structure of the foundation is missing or perhaps not clear.
Something feels strange about the founding members of the Rust Foundation. It looks like Rust is going to have a structure similar to the C++ committees or even the Linux Foundation (which Servo is now part of), given how big tech companies seem to have a wide presence in these technologies and foundations. That is fine for usage, but very worrying in terms of their having seats on the board, with the potential risk that they drive the technology in their own interests while we have no say about it.
I feel these announcements have glossed over this important aspect.
A significant difference between the C++ committees and the Rust Foundation is that the RF has no technical decision making capacity. The language team is the sole governing body of the Rust language, and they are not a part of the foundation.
Amazon and Microsoft are building internal teams to work on the compiler, so the budget of the foundation is not necessarily representative of the total financial investment by these companies.
I hope that Amazon and MS will not represent the majority of investment in Rust's development. The incentive structure for an independent foundation is favorable for a variety of reasons.
I guess that depends on where the engineers are, and how much they are paid - I've never known anyone (in coding) to be paid > £100k (~$140k) - but then maybe I move in the wrong circles!
So you could certainly get more than 4 engineers on board, plus some project managers etc
> I've never known anyone (in coding) to be paid > £100k (~$140k) - but then maybe I move in the wrong circles!
In London, >£100k is easy for contractors but I think you'd have to have some managerial responsibilities to get that as a perm (team lead or upwards.) But I may also be moving in the wrong circles...
> In London, >£100k is easy for contractors but I think you'd have to have some managerial responsibilities to get that as a perm (team lead or upwards.) But I may also be moving in the wrong circles...
It's possible, but unusual, for base salaries for individual contributors to be over £100k; you mostly see it at FAANG/adjacent companies and financial firms (both investment banking and hedge funds). If you include bonuses and stock compensation, those same groups can relatively easily hit £100k.
I'd personally never consider those in salary calculations - been burned too many times. But yeah, I guess if people are getting bonuses and other compensations, it could well boost them over £100k.
With 401(k) and healthcare included, I have heard that a million bucks gets you 3 engineers. Or, in the case of layoffs, you get a million dollars' worth of savings per year per 3 engineers laid off.
The UK is, from what I've seen, a really low-paying market for IT and software engineering. At the big FAANG companies, you can make more than that as a new graduate (pre-tax, cash compensation only). I don't like generalizing on the FAANGs, because they are not the majority of the market, but even outside of NYC/Silicon Valley, developers can clear $140k after a few years of experience.
Without a doubt, and it's a trade that I would make, but considering how important London is as a labor market, and how expensive it is to live there, I still think that UK business underpay their tech talent.
That's like one, _maybe_ two full-time developers in SV, at least of the sort you'd want to steward a project like Rust, plus a tiny bit left over to maintain the rest of the foundation.
A lot of it depends on the scope of the foundation, and what it's expected to do. In many projects the foundation exists mainly to own and protect IP, rather than actually fund any development. Stewardship doesn't necessarily mean taking an active role; it could just mean ensuring the right people are in the groups that make the decisions.
I would assume that that budget is mostly for central organisation expenses rather than paying for development. Things like legal fees, paying for CI infrastructure etc. The development work would continue to be done by people being paid by various companies (or in their free time).
>Just last night I was thinking about how it was possible that, given the relative trends, Mozilla’s greater legacy might turn out to be Rust, not Firefox.
Give Mozilla credit where credit is due instead of reflexively throwing trite criticisms.
Yes, they have their fair share of problems, but we owe them thanks for Rust's foundations at Mozilla and their help in bringing it to where it is today.
On its own it is a great move for Rust, more independence, but this sounds more like a new title for marketing/PR purposes than anything else... hope I'm wrong; Rust needs it.