
HN: "Mozilla has too many side projects that don't make the browser better"

Also HN: "Mozilla should spend more than a decade and tens of millions of dollars on a brand new browser engine that has no hope of replacing Gecko before it reaches 100% compatibility with a spec thousands (tens of thousands?) of lines long, not to mention the kind of "quirks" you see with websites in the wild, while they already lag behind Google with the browser engine they already have."

People like cool R&D projects, and that's understandable - I like Servo too. But being really cool doesn't compensate for the fact that it wasn't going to be production-ready any time soon, and in that light it's understandable why it was cancelled. While some parts of Servo ended up being so successful that they were merged into Firefox, a lot of what remained only in Servo (and not in Firefox) was nowhere close.

The layout component was by far the least mature part of Servo at the time (unlike Stylo and WebRender), and it was in fact going through the early stages of a brand-new rewrite when the project was cancelled, partly because the original experimental architecture ended up not being very suitable.

https://servo.org/blog/2023/04/13/layout-2013-vs-2020/




> that has no hope of replacing Gecko before it reaches 100% compatibility with a spec thousands (tens of thousands?) of lines long

When Servo was still managed by Mozilla, they were able to merge some components incrementally into Firefox. Most famously, Stylo and WebRender were first developed in Servo. They could have kept Servo for experimentation and kept merging parts incrementally.

It might also have enabled better embedding support, which is a weak point of Firefox compared to Chrome, and a long-term way to remain relevant.


I covered that. Sure, Stylo and WebRender were successful enough that they made it into Firefox, but the Layout component was very much not. Servo was in the middle of a clean-slate rewrite of the layout component because the initial architecture chosen in 2013 wasn't very good.

The CSS engine and rendering engine are a lot easier to swap out than the remaining parts.

Again, I get why people like Servo, but "in 10 years, maybe we'll be able to take on Electron" isn't that great of a value proposition for a huge R&D project by a company already struggling to remain relevant with their core projects.


> "in 10 years, maybe we'll be able to take on Electron" isn't that great of a value proposition

Perhaps not, but "in 10 years, we'll have a browser that's significantly faster and safer than the competition" is how you plan to still be relevant 10 years from now.


The browser engine is not what makes Firefox "relevant" or not. Their competitors are Apple, Google and Microsoft. The marketing budget for Chrome is larger than Mozilla's entire budget. "Google" is synonymous with the entire internet for a large fraction of the non-technical population. Every device you can buy, whether a PC, a tablet or a phone, has one of their competitors' browsers pre-installed.

Their primary leverage is unique features and functional adblockers, neither of which is impacted by the layout engine.

And again, you're taking away resources from something that is already behind right now. The canonical example of massive long-term rewrites being a bad idea for the business is literally the precursor to Firefox. Gecko can be refactored in-place, including into Rust if they decided to do so.


> Their primary leverage is unique features and functional adblockers, neither of which is impacted by the layout engine.

Yes, unique features like being written in a memory safe language and depending on memory safe implementations of image and video decode libraries are exactly what I care about in an all-knowing sandbox which touches network services and runs untrusted code on my computer.

> And again, you're taking away resources from something that is already behind right now.

Disagree. You're talking about every Mozilla project that's not Servo. Firefox/Servo development is Mozilla's core competency. One which they've abandoned.


>depending on memory safe implementations of image and video decode libraries are exactly what I care about in an all-knowing sandbox which touches network services and runs untrusted code on my computer.

What does that have to do with Servo? Firefox has already been doing those things and continues to do them [0], they don't need to do them in Servo first.

We are specifically talking about the utility of rewriting a layout engine from scratch, rather than putting more resources into evolving Gecko - including rewriting small parts of Gecko in Rust incrementally.

>Disagree. You're talking about every Mozilla project that's not Servo. Firefox/Servo development is Mozilla's core competency. One which they've abandoned.

They obviously haven't abandoned it. It's not like they cancelled Gecko development too and are rebasing on top of Blink. Again, this is all just a philosophical debate over whether rewrites or refactors are more effective when it comes to the most core component of the browser.

[0] https://github.com/mozilla/standards-positions/pull/1064


https://4e6.github.io/firefox-lang-stats/

Do you see those red and orange and green pie slices? 40% of the code. There, be memory errors. Approximately 70% of all errors in that code will be memory safety related and exploitable.

Fixing it looks like developing Servo.

Don't want to take my word for it? How about the US Department of Defense: https://media.defense.gov/2022/Nov/10/2003112742/-1/-1/0/CSI...


Mozilla continues to add new Rust to Firefox, despite discontinuing the Servo project. A big parallel rewrite is not the only possible approach to writing Rust. "Fixing it" does not have to look like Servo. In fact doing more incremental rewrites will improve the situation shown in that chart much faster than waiting 10 years for parity before doing the replacement.

I'm not responding further until you actually read and understand what I'm saying instead of flailing at a strawman.


> A big parallel rewrite is not the only possible approach to writing Rust.

A rewrite is the only way to convert the codebase to Rust or any other memory safe language. Whether that happens in parallel, piecemeal, or both at the same time comes down to how well you use your version control system and structure your code, as has already been shown by sharing Servo code with Firefox.

A full rewrite is particularly useful with Rust, as the language wants you to structure your code differently than most C/C++ is structured. Doesn't make sense not to have one going if that's the plan. If you're going to rewrite the whole thing anyway, might as well do it in an idiomatic way.


Google has demonstrated that writing new code in a memory safe language still significantly improves the safety story of a codebase, even while keeping around the old code. Full scale rewrites are not the only option.


Yes, every line of C/C++ you can replace with a memory safe language in a critical codebase like a browser improves its safety story. Which is exactly the reason replacing all of it is so attractive.

But just to offer another point, I also still run into memory leaks and other performance issues in long-lived Firefox processes which, based on my experience with Rust, would be unlikely to be a problem in a functional Servo. It'd be nice to have a browser I don't have to occasionally kill and restart just to keep YouTube working.


Are you suggesting that memory leaks don't happen in Rust? Not only has that proven not to be true, but the language for some reason seems to define memory leaks as safe behavior.

> based on my experience with Rust

This suggests that you haven't encountered reference cycles at your level of experience:

https://doc.rust-lang.org/book/ch15-06-reference-cycles.html...
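For anyone who hasn't read that chapter, the kind of cycle it describes looks roughly like this (a minimal toy sketch, not from any real codebase):

    use std::cell::RefCell;
    use std::rc::Rc;

    // A node that can point at another node, which makes a cycle possible.
    struct Node {
        next: RefCell<Option<Rc<Node>>>,
    }

    fn main() {
        let a = Rc::new(Node { next: RefCell::new(None) });
        let b = Rc::new(Node { next: RefCell::new(Some(Rc::clone(&a))) });

        // Close the loop: a -> b -> a. Both strong counts are now 2, so
        // neither ever reaches zero and the allocations are never freed.
        *a.next.borrow_mut() = Some(Rc::clone(&b));
    }

No unsafe anywhere, and the program leaks both nodes.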


> Are you suggesting that memory leaks don't happen in Rust?

Their wording was "would be unlikely", rather than "don't happen". The affine(ish) type system, along with lifetimes, makes it so most objects have a single owner, and that owner always knows it is safe to deallocate.
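A toy illustration of that single-owner point (hypothetical code, not from Firefox or Servo):

    fn process(buf: Vec<u8>) {
        println!("{} bytes", buf.len());
    } // `buf` goes out of scope here; the buffer is freed exactly once

    fn main() {
        let data = vec![0u8; 1024]; // `data` is the sole owner of the buffer
        process(data);              // ownership moves into `process`
        // `data` cannot be used here anymore; the compiler enforces it.
    }

There is never any ambiguity about who frees the buffer or when.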

> for some reason seems to define memory leaks as safe behavior

The reason is that Rust aims to prevent undefined behavior, and that is the only thing it defines as unsafe behavior.

Memory leaks cannot cause a program to, for example, start serving credit cards or personal information to an attacker. Their behavior is well defined (maybe over-complicated thanks to Linux's overcommit, but still well defined).

Rust does not protect against DoS attacks in any way. In fact it seems to enjoy DoSing itself quite a lot given how many things panic in the standard library.
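To give a couple of everyday examples of what I mean (toy code, nothing browser-specific) -- both are perfectly memory safe, and both take the program down with a panic:

    fn main() {
        let v = vec![1, 2, 3];

        // Out-of-bounds indexing: no memory corruption, just a panic.
        let _x = v[10];

        // unwrap() on an Err value: also a panic.
        let _y: i32 = "oops".parse().unwrap();
    }

The non-panicking alternatives (v.get(10), matching on the Result) exist, but the short, convenient spellings panic instead of returning an error.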


whytevuhuni has done a wonderful job of saying the things I meant, clearer than I am able to articulate them. But I just wanted to point out that this is the first sentence of the documentation you linked:

"Rust’s memory safety guarantees make it difficult, but not impossible, to accidentally create memory that is never cleaned up (known as a memory leak)."

My sentiment exactly. Rust makes it difficult to accidentally create memory leaks. If you try hard to do it, it's definitely possible. But it's tremendously more difficult to accomplish than in C/C++ where it accidentally happens all the time.


You're seeing memory fragmentation, which I would expect not to improve.


That's an interesting take about a language that puts variables on the stack by default. The less you put on the heap, the less fragmented it gets. Heap fragmentation also does not account for the ever-growing memory footprint of a running instance.


You're aware that C++ and C also stack-allocate by default? Unless you mean something different than what I understand.


C requires malloc (and the heap) for anything that lives beyond the scope of the function lifetime. C++ adds smart pointers and copy/move semantics, but default behavior is still like C, and defaults matter.


Does Rust really allow stack-allocated objects to escape the function lifetime? That seems antithetical to how a stack works.


It's the other way around; Rust is really good at tracking the lifetime of objects, and so Rust code is a lot more reckless with passing around pointers to stack-allocated objects through very long chains of functions, iterators, closures, etc., because it becomes obvious when a mistake was made (it becomes a compile error).

This makes it so that things that appear dangerous in C++ are safe in Rust, so for example instead of defensively allocating std::string to store strings (because who knows what might happen with the original string), Rust can just keep using the equivalent of std::string_view until it becomes obvious that it's no longer possible.
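A rough sketch of that pattern (hypothetical code, just to illustrate):

    // Return a borrowed slice of the caller's string instead of copying it
    // into a new allocation (the std::string-by-default habit in C++).
    fn first_word(s: &str) -> &str {
        s.split_whitespace().next().unwrap_or("")
    }

    fn main() {
        let input = String::from("hello world");
        let word = first_word(&input);        // no allocation, just a borrowed view
        println!("{}", word.to_uppercase());  // allocate only when an owned value is needed
    }

If `word` somehow outlived `input`, that would be a compile error rather than a use-after-free, which is what makes this kind of borrowing routine rather than risky.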


This (avoiding needless copies due to uncertainty of what the callee might do, e.g. caching a reference) makes sense but is not what the grandparent was suggesting.


It's exactly what I was talking about. Rust enables me to avoid using the heap when I would be forced to in C/C++. And thanks to the borrow checker ensuring safety in doing so, this extends to libraries and dependent code in ways not easily achievable in C/C++. The net effect is a profound reduction in heap usage by comparison.


> A full rewrite

Things You Should Never Do, Part I [0]

[0]: https://www.joelonsoftware.com/2000/04/06/things-you-should-...


Things Mozilla have done a few times.[0] All with valid reason. 30 years without a rewrite is pretty rare for any piece of software.

0: https://en.wikipedia.org/wiki/Netscape_Navigator


As much as I love Servo and Rust, you can't compete with Google and Chrome.

Certainly not on performance. On safety you have a chance because bugs happen.


If you think Servo can't compete with Chrome on performance, please watch this talk:

https://www.youtube.com/watch?v=BTURkjYJ_uk


I'm pretty sure that if Firefox started beating Chrome in speed benchmarks (because of a newer, more modern engine), they would be able to claw back some of their lost market share. Even normal people care about speed.


That would be hard to do with Google intentionally sandbagging things like YouTube (I'm thinking about them using a nonstandard version of web components, plus a super slow shim for Firefox, instead of using the standardized version that Chrome also supported).


True, I guess investing in the future viability of your core product doesn’t fit with how modern corporations are run.

They should just keep launching bookmarking and vpn services that might make money RIGHT NOW.


Does anybody argue that Google is negligent for not doing a complete rewrite of Blink, rather than doing the same incremental software development as everyone else? Did they suffer from their choice to use WebKit in the very beginning rather than do their own thing?


Every time Google kills a project they are bashed for it; there is (was?) even a website dedicated to projects killed by Google. And anyway, the core of Google isn't Chrome, it's Google Search.


Why would they need a rewrite? Blink is the market leader.

Meanwhile, daily driving Gecko becomes a worse experience by the hour.


Doesn't matter how much slower Gecko is when most of what Chrome is doing is loading ads.



