Servo Layout Engine: Parallelizing the Browser [video] (paulrouget.com)
206 points by paulrouget on Feb 14, 2014 | 59 comments



Mozilla is one of my favorite tech companies. Servo is a great example: Mozilla is willing to engage in fundamental CS research. Not only are they trying to put together a parallel, secure browser engine from the ground up, but they even created Rust to do so. This is truly long-term work, which seems rare in an increasingly short-term world.

And Rust isn't just another C clone with OOP or CSP bolted on: it's principled, relatively elegant and takes full advantage of the last few decades of PL research. All while being practical—it has to be, since it has evolved with Servo as a concomitant project. A non-trivial companion project like that seems great for naturally guiding a language! Not many other languages can say any of this, much less ones actually poised to replace C++ or at least do actual systems programming.
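To give a flavour of what I mean by "principled", here's a tiny made-up snippet (my own toy code, nothing to do with Servo itself) showing the ownership-and-borrowing discipline the compiler enforces, with no garbage collector involved:

    // Toy example, not Servo code: the caller keeps ownership of the data,
    // the function only borrows it, and the compiler checks this statically.
    fn total_len(words: &[String]) -> usize {
        words.iter().map(|w| w.len()).sum()
    }

    fn main() {
        let words = vec![String::from("parallel"), String::from("layout")];
        let n = total_len(&words);   // shared borrow: no copy, no GC
        println!("{} words, {} bytes total", words.len(), n);
    }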

And Mozilla is doing all this in a completely open and transparent way. I think this is incredibly important: anybody can get a glimpse into active development or even contribute. Just go to the relevant GitHub repo[1][2] and you're set. This is the way open source is supposed to work, rather than having companies develop behind closed doors and dump source code occasionally (although that's also better than nothing).

I really wish more companies would take this sort of approach with their open source or basic research work. This gives me more confidence in Servo, Rust and Mozilla as a whole, especially compared to many of Mozilla's competitors (both in the browser space and in programming languages).

[1]: https://github.com/mozilla/servo/ [2]: https://github.com/mozilla/rust


> A non-trivial companion project like that seems great for naturally guiding a language!

It's definitely an interesting aspect. For most languages, the only substantial project during their early life is their own compiler. There is an intriguing theory (posted on HN a few weeks back?) that this results in languages that are optimised for writing compilers, at the expense of writing other things.

Are there other examples of languages that had this kind of companion project in their early life? Did UNIX do this for C? To what extent did Rails do it for Ruby? Is there something inside JetBrains that does it for Kotlin, or inside JBoss that does it for Ceylon?


The closest parallel I can come up with is Erlang, which (I believe) was developed by Ericsson specifically to handle their exchange traffic.

http://en.wikipedia.org/wiki/Erlang_(programming_language)


One that comes to mind is ML, which was originally developed as a domain-specific language for writing proof tactics in the LCF theorem prover, and was only later broken out into a standalone programming language.

I can't think of one canonical companion project for them, but early Lisp, COBOL, and FORTRAN were also pretty driven by external application concerns, in AI, business logic, and numerical simulation, respectively.


The original McCarthy LISP started even more abstract than ML: it was intended as a logical thought exercise, and was only accidentally a programming language at all.

COBOL (and Ada) were standards-first languages. The industry-driven standards committees designed the languages first, and it was not until afterwards that there were working implementations, much less working programs. Industry experience guided them in later releases, but their 1.0 versions (a milestone Rust is still working towards) were even more ivory-tower than languages whose only program is their own compiler.


I'm pretty sure that Ruby was out 10 years before Rails.


Totally - the more interesting question is how much Rails has driven all the improvements (mostly to performance, but also to the standard library and packaging) since roundabout version 1.8.


I think ruby-core has had a big focus on keeping Rails needs in mind but not letting Rails drive the building of the language.


True. I hadn't thought of that, but it would be interesting to know.


Do you have any examples of this?


Some "examples" (they are really just weak speculation) for how I think Rails may have contributed to the three specific areas I mentioned:

- Packaging: This is probably where I think Rails contributed most - both rubygems.org and Bundler were created to make it easier to manage the creation and use of the large numbers of dependencies that Rails projects tend to have. Also, rvm and its descendants were created to handle Rails app deployments, and are especially useful when you're switching between lots of projects with different Ruby versions and library dependencies, which is typical in Rails client work.

- Standard library: The `Object#tap` method came from ActiveSupport. Also, ActiveSupport had an OrderedHash, and now all hashes are ordered. I think there are some other examples of that sort of thing.

- Performance: I really don't know what has driven the performance improvements that have been made, but it doesn't seem outlandish to think that backlash against the poor CPU and (especially) memory usage characteristics of Rails applications was a contributor.


I think you're taking a very Rails-centric view of Ruby.

rubygems the tool existed before Rails, as did multiple gem hosting mirrors. I believe rubygems.org sprang up because rubyforge.org was falling out of favor. And Bundler is not part of Ruby.

It's possible Wayne Seguin created rvm due to Rails, but I never got that impression from his talks (but it's been some time).

Pretty sure MenTaLguY created "tap"; Rails had "returning" (though they are similar and perhaps the usage in Rails made it popular enough to warrant inclusion in core).

http://moonbase.rydia.net/mental/blog/programming/eavesdropp...

Certainly if any library gets a lot of use then that will influence aspects of the base language, but matz and ko1 seem pretty deliberate about what gets added.


Heh, I just thought it was an interesting question to speculate about - I really have no idea to what extent Rails has driven Ruby recently.

You're probably right about `.tap`, and I can't think of any other standard library stuff, so my speculation there is seeming pretty weak.

I didn't suggest that Rails invented rubygems and gem hosts, I suggested that rubygems.org and bundler were major improvements to the ruby packaging ecosystem that were driven by Rails. I'm not sure you actually disagreed with that point...


> I suggested that rubygems.org and bundler were major improvements to the ruby packaging ecosystem that were driven by Rails. I'm not sure you actually disagreed with that point...

Rubygems.org was not driven by Rails. It doesn't offer much beyond what rubyforge was doing. Less, actually. RF was for code hosting, docs, and gem serving. Once people started moving to GitHub, RF lost favor. When GitHub decided to stop serving gems, rubygems.org came about to fill the gap. It lists and serves gems; code and docs live elsewhere.

Bundler, sure, came from the Rails community to apparently solve a problem for that community. It's not for everyone, though.

I don't use it (doesn't make life any better for me), but then I don't use Rails either.

There's lots that goes in Ruby that has nothing to do with Rails and the conflation of Rails and Ruby is a detriment to different Ruby communities out there.


I still don't think that you actually disagree that the github/rubygems.org setup is an improvement over the rubyforge setup. Assuming that, I agree that the extent to which Rails had anything to do with that is debatable (and unanswerable, in fact); there was definitely an explosion of gems hosted on github that needed a new home, and I think the popularity of Rails had something to do with that explosion, but it certainly may have happened anyway.

I think lots of people find Bundler to be an improvement over straight rubygems outside of Rails projects (even if you don't), which I think fits the criteria of "an improvement driven by Rails".

I think your last sentence is the real argument you're trying to make here, and I agree with it wholeheartedly - it's incredibly frustrating that so many people seem to think Ruby is only good or useful because of Rails. I'm not one of those people!


I believe HN was created with arc[1] to put design pressure on it.

[1] http://en.wikipedia.org/wiki/Arc_(programming_language)


Note that Rust was a personal project of Graydon's before it became a Mozilla thing - they picked it up a few years in.


The Mozilla Foundation is a nonprofit, not a company -- which is probably why it's doing all of these great things that profit-seeking companies are not.


This is an amazing and challenging project to work on. If you'd like to join us, we hang out in #servo on irc.mozilla.org or just dive right into the code[1]. You don't need previous browser hacking experience, and we're happy to mentor you through a bug.

We're opening three full-time positions on the Servo team at Mozilla Research within the next couple of days; they should be up on the careers[2] page soon.

Also, if you're a graduate student, Mozilla Research is still looking for summer interns for Servo, Daala, Shumway, and other projects. Those positions are also on the careers page.

[1]: https://github.com/mozilla/servo/ [2]: https://careers.mozilla.org/en-US/


Mozilla is a place I would love to work at, beyond any other tech company. A shame I live in Australia :(


Mozilla is _incredibly_ remote-friendly. I believe that less than 40% of our paid staff is in the traditional "US West Coast" timezone.

For example, on Servo we have about a third of the team in the SF area, but then I'm in Chicago and we have other staff in New Mexico, Toronto, and London. Our largest group of partners is in Korea, and we have several other regular contributors scattered across pretty much every timezone. The big engineering projects, such as FFOS and Gecko are even more distributed.


Living in Australia shouldn't stop you from applying. I think Mozilla has people working remotely in Australia.


True. There are a few each in Australia and New Zealand, including some very senior contributors working in core engineering roles (i.e., it's not just "marketing for SE Asia").


What's the relation between Servo and the Quark browser kernel mentioned on the Servo page?

Quark is a formally verified kernel made of a few hundred lines of (C? C++?) code. It's been verified using Coq, and hence I take it it's guaranteed against a whole class of bugs that typically lead to security exploits (buffer overrun/overflow/underrun, dangling pointers, null pointers, ...).

Is Servo using Quark? If not, is Servo formally verified using Coq?

To me, formally verified software is one of the most interesting developments we're seeing (that and deterministic builds seem to be huge steps forward towards more security), so I'd like to know more...

(gone building Servo on my Debian box)


Servo is not using Quark, and is not formally verified. I assume the link on the Servo page is along the lines of pointing out another cool project in a similar space of writing safer browsers.

Formal verification is cool, but speaking as somebody who worked on it for about 7 years, it is not really ready to be used in large software. Most people care much more about performance and features in their browser. Hopefully, by writing a browser in Rust instead of C++, the result will be much more secure without having to make many compromises.


That said, we would like to at least formally verify that the core subset of Rust's type system is sound. That has important practical implications for Servo: it means that any (potentially exploitable) memory safety problems in the safe part of Servo will be of the "straightforward compiler bug" variety and not the "oh no, now we have to redesign the type system and break everyone's code" variety.
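To make "memory safety in the safe part" concrete with a toy example (mine, not anything from Servo): in safe Rust the compiler statically rejects references that outlive the data they point to, so a use-after-free is a compile error rather than an exploitable runtime bug. A soundness proof is what would pin that guarantee down formally.

    // The commented-out variant below is rejected by the borrow checker.
    fn first_char(s: &str) -> Option<char> {
        s.chars().next()
    }

    fn main() {
        let owned = String::from("servo");
        println!("{:?}", first_char(&owned));   // borrow ends before `owned` dies

        // let dangling;
        // {
        //     let temp = String::from("gone");
        //     dangling = &temp;   // error: `temp` does not live long enough
        // }
        // println!("{}", dangling);   // would be a use-after-free in C/C++
    }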


What is the current state of formal verification in Rust (of the language, of compilers, of other programs)? I would assume that since parts of the language are still undergoing change there isn't much work towards super-rigorous proofs until that gets nailed down. Has there been any provisional work in this direction, and is there any work towards making formal methods practical for Rust?

I like the idea of formal methods, but my familiarity with the subject is entry-level at best, so I don't even know what sort of details I should be asking about here =P.


Niko Matsakis, Rust's type theory guru, has been working on a formal model of Rust's type system to prove its soundness. Not sure how much of it is published yet, but I did find this: https://github.com/nikomatsakis/rust-redex


Few languages have any formal verification work at all. There is a verified C compiler (CompCert; it comes with a guarantee that the compiled assembly is a correct compilation of the source, and it does some very basic verified optimizations) and a verified ML compiler (CakeML, see this year's POPL proceedings).

Few languages have verified anything—verification is still very challenging to do.


A proof of type safety of a language, which I think is what pcwalton is talking about, is a different beast than the proof of the correctness of a compiler, which is what you are talking about.


I actually cannot think of any languages except SML that have complete formal proofs of type safety. Now, I may simply be forgetting something obvious. But Haskell's type system keeps growing by little features here and there, so no full proof exists anywhere; OCaml doesn't have a formal semantics, so it sure as hell doesn't have a proof of type safety (in the usual "a well-typed program doesn't go wrong" sense); Java is unsound (in a particular manner of speaking, of course); C# seems too complicated to have such a proof. The Go authors don't seem the type. Rust is the language in question. Those seem like the big players.



Yes, we've been both following and collaborating with the rest of the people who do research in this space. In particular, Ras Bodik's group (first link) has been partially sponsored by Mozilla Research for several years. We've also leaned heavily on members of these groups as interns in the past, and they've written large portions of our parallelism-friendly layout code, etc.

Some additional interesting links to both publications and talks are available on our wiki below (though it's not comprehensive):

https://github.com/mozilla/servo/wiki/General-implementation...


Am I the only one that hopes this has a companion technology called "Crooooow!" ;)


This doesn't deserve the downvotes, because "Crow" is in fact the name of Servo's proof-of-concept UI layer :P Not everyone spots the MST3K reference!

https://github.com/mozilla/servo/issues/111


If you visit #servo on irc.mozilla.org, there is a "crowbot" that will dig up the github link if you reference a pr/issue number and will correct you if you link to an outdated web platform spec :-)


For those having trouble seeing the slides in the video, you can find them here: http://www.joshmatthews.net/fosdemservo/


Since Rust was born out of a vision to build the next-gen browser engine, does anyone know why Rust does not have bindings to GTK? The only two projects (on GitHub) are 1-2 years old.

I was hoping Rust could overtake Vala as the go-to language for desktop software in the GTK world.


In my experience, Rust is currently in a phase where the correct answer to "I'd love to use rust for xyz, why doesn't it have support for that?" is typically "that's a good idea, you should add that!". That's either exciting or frustrating depending on your point of view. Unfortunately, the next logical question of "ok, how do I make and distribute a library?" currently doesn't have a satisfying answer, as they've just scrapped the package manager. It sounds like they want to hire somebody to work full-time on a replacement, so hopefully that is a temporary problem.
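For what it's worth, you can still build and link libraries with bare rustc while the package-manager situation shakes out. Here's a rough sketch (the file and crate names are made up, and the exact flags may shift as the tooling evolves):

    // geometry.rs -- hypothetical library crate
    // build with:  rustc --crate-type=lib geometry.rs
    // which drops a libgeometry-<hash>.rlib in the current directory
    pub fn area(width: f64, height: f64) -> f64 {
        width * height
    }

    // main.rs -- hypothetical program that links against it
    // build with:  rustc -L . main.rs   (so rustc can find the .rlib)
    extern crate geometry;

    fn main() {
        println!("area = {}", geometry::area(3.0, 4.0));
    }

Distribution is then just a matter of shipping the source (or the .rlib plus metadata), which is exactly the part a proper package manager should eventually handle for you.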


Servo is focusing on the layout engine. Think of it as an alternative to Gecko or WebKit, not Firefox or Chrome.


It's highly unlikely that will happen anytime soon.

Jürg Billeter created Vala for the sole purpose of building GTK / GNOME applications and the two projects are tightly coupled.

It's not to say that Rust won't have GTK bindings in the future, but it's not a high priority for either Mozilla or GNOME devs and unlikely to displace Vala.


Is that so?

I would have thought that considering they are building a browser that was originally written in C-GTK, the first order of business would be to build GTK bindings.

I would say Vala and Rust do have fairly similar goals in general.

P.S. I know right now Servo != Firefox, but I suppose it will get there eventually.


Servo isn't even a project to build a browser. It's just a project to build the rendering engine. I'd guess that none of the interesting work they're focusing on overlaps with any of the user-interface chrome where GTK would become relevant.


>the two projects are tightly coupled

Vala is only coupled with GLib for the object system and even that can be replaced by using another profile: https://wiki.gnome.org/Projects/Vala/Tutorial#Profiles


If this is ever implemented into Firefox properly, will we see a difference in speed, or is this mainly focused on security?


So, talking about Servo being "implemented into Firefox" might set the wrong expectations. At the moment it is very unclear if or how it will be turned into a consumer-facing product.

That said, the goal of Servo is to improve both speed and safety.

In terms of speed, part of the project is research into parallel algorithms for various parts of the web stack. For example Servo today has parallel implementations of various parts of CSS. The goal here is to make the sequential performance on par with the best implementations today and then get a further speedup by using multiple cores efficiently.
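As a toy illustration of why that parallelism is available (a deliberately simplified sketch of my own, not our actual code): the style for one node doesn't depend on its siblings, so independent chunks of nodes can be computed on separate threads without any locking.

    use std::thread;

    // Simplified stand-ins for real style structures.
    #[derive(Debug)]
    struct Node { id: usize }
    #[derive(Debug)]
    struct ComputedStyle { node_id: usize, color: u32 }

    // Pretend this is the expensive per-node work (selector matching, cascade, ...).
    fn compute_style(node: &Node) -> ComputedStyle {
        ComputedStyle { node_id: node.id, color: (node.id as u32) * 7 }
    }

    fn main() {
        let nodes: Vec<Node> = (0..8usize).map(|id| Node { id }).collect();

        // Each chunk is styled on its own thread; one node's work never
        // touches another node's data, so no locks are needed.
        let handles: Vec<_> = nodes
            .chunks(2)
            .map(|chunk| {
                let owned: Vec<Node> = chunk.iter().map(|n| Node { id: n.id }).collect();
                thread::spawn(move || owned.iter().map(compute_style).collect::<Vec<_>>())
            })
            .collect();

        let styles: Vec<ComputedStyle> =
            handles.into_iter().flat_map(|h| h.join().unwrap()).collect();
        println!("{:?}", styles);
    }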

In terms of safety, the choice of Rust over C++ provides substantially stronger compiler-enforced guarantees of memory safety that should help eliminate a large class of bugs that have caused numerous security issues in current browsers.

If this sounds interesting there are plenty of ways to get involved; come chat on #servo on Mozilla IRC.


It's a rewrite of the engine from scratch, focused on parallel computing. So yes, it's going to be a massive improvement of speed, security and overall revamping.


Responsiveness (commonly defined as the time from the start of an operation to its completion) might improve from parallelization. As for "speed" (commonly defined as operations/sec on given hardware), it will likely be the same or a little lower (parallelism creates overhead).

But it's responsiveness that most people measure, and they measure it on machines with CPU utilization in the single digits - in which case, it is likely to improve considerably.
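A toy way to see the distinction (made-up busy-work, not a browser benchmark): split a fixed amount of work across threads and compare the wall-clock time with the total work the machine does.

    use std::thread;
    use std::time::Instant;

    // Busy-work stand-in for one slice of a layout pass.
    fn crunch(iters: u64) -> u64 {
        (0..iters).fold(0u64, |acc, x| acc.wrapping_add(x * x))
    }

    fn main() {
        let total: u64 = 40_000_000;
        let threads: u64 = 4;

        // Sequential: wall-clock time equals total CPU time.
        let t0 = Instant::now();
        let a = crunch(total);
        println!("sequential: {:?} (checksum {})", t0.elapsed(), a);

        // Parallel: wall-clock latency (responsiveness) drops, but the machine
        // does at least as much total work plus spawn/join overhead, so raw
        // operations-per-second per core do not improve.
        let t1 = Instant::now();
        let handles: Vec<_> = (0..threads)
            .map(|_| thread::spawn(move || crunch(total / threads)))
            .collect();
        let b: u64 = handles.into_iter().map(|h| h.join().unwrap()).sum();
        println!("parallel:   {:?} (checksum {})", t1.elapsed(), b);
    }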


Will Servo take advantage of GPU compute, too?


We currently have GPU compositing and rendering[1]. Other browsers will get this eventually and some have it already.

We plan to explore this space quite a bit.

The other thing we're starting to think about is using SIMD ops for layout, which isn't a GPU but falls into a similar "taking advantage of modern hardware" bucket.

[1] Note that 2D rendering on the GPU is not currently a clear win. In theory you save a texture upload and get massively parallel drawing operations, but in practice there's a lot of overhead to deal with.
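To sketch what "SIMD ops for layout" could mean (toy code, not anything we've actually written): keep per-child measurements contiguous so the same arithmetic applies to a whole lane of children at once, which is exactly the shape a vector unit wants.

    const LANE: usize = 4;

    // Toy example: add horizontal margins to every child width. The inner
    // per-lane loop is the part a SIMD instruction would do in a single op;
    // a real version would use vector types or intrinsics instead.
    fn add_horizontal_margins(widths: &mut [f32], margin: f32) {
        for lane in widths.chunks_mut(LANE) {
            for w in lane.iter_mut() {
                *w += 2.0 * margin;
            }
        }
    }

    fn main() {
        let mut child_widths = vec![120.0_f32, 80.0, 200.0, 64.0, 96.0, 48.0];
        add_horizontal_margins(&mut child_widths, 8.0);
        println!("{:?}", child_widths);
    }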


We're certainly planning to investigate it, particularly with the advent of vector units that reduce the latency of data transfers, such as AMD's new Opterons and Intel's Knights Landing. The challenge here is that while several of the stages (e.g. CSS selector matching) can be trivially sped up on a GPU, the CPU/GPU latency cost is going to be close to the original CPU evaluation time.

There are some very sequential and unfortunately common "corner cases" in layout (e.g., uncleared floats) that have led us, for now, to prefer the higher clock-speed CPU for parallelizing these phases. Even if we find a great way to work around floats, it's likely there will still be a lot of CPU/GPU chatter, which makes it difficult to use today's GPU cards when you're trying to keep total page load well under 200ms and incremental reflows under 15ms.


> it's likely there will still be a lot of CPU/GPU chatter

At least on today's consumer cards (admittedly I haven't tried anything really high-end), one issue I've had with this is that it gets even worse (by far) when more than one program is trying to use the GPU. If you're editing photos in Lightroom while alt-tabbing to a tutorial in the browser, and everything is trying to GPU-accelerate its operations, contention goes way up and things start blocking on GPU contexts and data transfers.


"the CPU/GPU latency cost is going to be close to the original CPU evaluation time."

  Have you looked into HSA architecture that helps to remove this latency?  I think this is the direction Intel will move to in a few years.


> Have you looked into HSA architecture that helps to remove this latency? I think this is the direction Intel will move to in a few years.

We are actively looking into this.


I believe that Samsung added some preliminary SIMD support to Rust in order to explore this possibility.


Why doesn't that video expand to full screen? So annoying. I can't see what's on the slides.


Works with Firefox. Not sure why Chrome doesn't resize the video.


It's because it sets max-width: 600px on the <video>.

Blink currently doesn't force fullscreen elements to width/height 100%/100%.


Works for me, both via right-click menu and expand icon in bottom right corner. (Firefox on Ubuntu).



