Servo passes Acid2 (twitter.com/pcwalton)
264 points by bpierre on March 27, 2014 | 101 comments



Servo [1] is an experimental web engine developed by Mozilla, written in the Rust language [2], which is also developed by Mozilla.

[1] https://github.com/mozilla/servo

[2] https://en.wikipedia.org/wiki/Rust_%28programming_language%2...


The Acid2 pass itself may be uninteresting, but Servo also does layout in parallel (right now, not in the future). No other engine does.

http://pcwalton.github.io/blog/2014/02/25/revamped-parallel-...
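
To make the idea concrete, here's a minimal sketch of one parallel layout pass over a toy flow tree. The names (`Flow`, `bubble_widths`) echo the blog post's terminology, but this is illustrative code, not Servo's: in a bottom-up intrinsic-width pass, each subtree's result depends only on its own descendants, so sibling subtrees can run on separate threads.

    use std::thread;

    struct Flow {
        children: Vec<Flow>,
        intrinsic_width: f32,
    }

    // Bottom-up pass: a node's intrinsic width depends only on its
    // descendants, so sibling subtrees are laid out concurrently.
    fn bubble_widths(flow: &mut Flow) {
        thread::scope(|s| {
            // Scoped threads let each thread take a disjoint &mut
            // borrow of one child.
            for child in &mut flow.children {
                s.spawn(move || bubble_widths(child));
            }
        });
        // Combining the children's results is cheap and sequential.
        flow.intrinsic_width = flow
            .children
            .iter()
            .map(|c| c.intrinsic_width)
            .fold(flow.intrinsic_width, f32::max);
    }

(Servo's real traversal uses a work-stealing scheduler rather than a thread per child, but the dependency structure it exploits is the same.)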


Wow, Servo is way further along than I thought!

From that post:

> No doubt about it, CSS 2.1 is tricky—floats perhaps more than anything else.

I really hope that Servo helps to identify parts of the HTML/CSS specs (if any) that unnecessarily prevent parallelism. By that I mean features (like "float" perhaps) that make it harder to parallelize, and where an alternative design could support the same use cases in a better way.

Those are the gems that teach us deep lessons about the problem space, and how future similar technologies ought to be designed.


> I really hope that Servo helps to identify parts of the HTML/CSS specs (if any) that unnecessarily prevent parallelism.

This is beginning to happen; for example Servo work on running <iframe sandboxed> documents in parallel [1] led to the discovery that the HTML spec allowed different-origin iframes to mutate into same-origin, through the "document.domain" setter [2], which made process isolation infeasible (since it would require some way to merge separate processes with separate heaps back into a single process). That discussion led to a change in the spec [3].

[1] https://groups.google.com/forum/#!msg/mozilla.dev.servo/LQ46...

[2] http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2013-Aug...

[3] https://www.w3.org/Bugs/Public/show_bug.cgi?id=23040


Agreed 100%. We want to be part of the conversation as to how to make future CSS parallelizable and amenable to modern hardware, both to educate Web authors as to how to make fast content and to shape the work done in the relevant committees.


I imagine that is quite a feat! Here's a question for the HN gurus though: how much of a browser's code deals with errors, compatibility, and legacy? I imagine A LOT.


Fortunately, even the backward-compatibility quirks modes have a spec now: http://quirks.spec.whatwg.org/

Just having more and more of this stuff documented (instead of having to reverse-engineer it from existing browsers) makes things a lot easier than they once were.


Arguably a large part of the interest in Servo is whether the specs are good enough that developing a new browser from scratch is now feasible — certainly, yes, there is a lot of institutional knowledge around Mozilla, but a from-scratch implementation isn't going to exactly match Gecko everywhere.


CSS without float... would not work very well! Unless we all go back to table-based design (or ahead to flexbox, I guess; I don't know enough, practically speaking, about that standard yet to reason properly about it!)


If you clear your floats, you'll be fine in terms of giving the engine room to parallelize, according to pcwalton's blog post.


Does that 'allow' the classic use case of floats - potentially, what should be the only use of floats - wrapping text around an image? I feel like the answer would be "no", yet there's no good alternative for achieving this common requirement, which would cause a snag.


If you know you need to use floats to flow text around elements, feel free to use them! They're in CSS for a reason :) But if there are alternatives (such as absolute positioning) that work equally well in your particular design, then choosing an alternative may help you gain better performance.


I believe you're right. But I don't think this is a huge problem: you simply don't get the benefit of laying out that specific text in parallel; it'll render sequentially, as browsers tend to do today anyway. You can still get parallelization in other flows outside the container of the floating elements.


I believe it does, yes. The point is that when you clear them later, the browser has a static guarantee that that float will not matter anymore. So the complexity due to the float is only relevant until the clear, and it can parallelize before and after the clear.
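
A hypothetical sketch of that "clear as a barrier" idea (these names are mine, not Servo's): each clear ends a run of children inside which floats can interact. Runs stay sequential internally, but separate runs are independent and can be laid out in parallel.

    use std::thread;

    struct Block {
        clears_floats: bool,
        // ... remaining layout state elided
    }

    // Sequential layout for a run in which floats may still be live.
    fn layout_run(run: &mut [Block]) {
        for _block in run {
            // ... place the block, tracking float positions
        }
    }

    fn layout_children(children: &mut [Block]) {
        thread::scope(|s| {
            // A clearing block closes its run; floats from before it
            // are statically guaranteed not to affect later runs.
            for run in children.split_inclusive_mut(|b| b.clears_floats) {
                s.spawn(move || layout_run(run));
            }
        });
    }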


Wired.com made (web) history by being one of the first major sites to deploy a pure-CSS design, which was based on floats because that's all that worked back in the days of Netscape 4 and IE 4. The web design community now primarily uses floats for multi-column design because of their historical browser support. However, if you don't need to support IE7 and lower, floats are one tool among many for multi-column layout, and not necessarily the best. Flexbox will only add another tool to that toolbox.


Flexboxes don't really help. The reason that floats exist is so that elements can flow around them.


That's what they were designed for, but they are frequently used to do things that would better be done with flexboxes (if the support was there).


Does the lack of a multi-threaded layout engine really matter? Don't pages load fast enough already? And if layout performance really is becoming a problem, particularly in cases of dynamic DOM manipulation etc, isn't that largely the fault of the spec? Even shiny HTML5 is hardly designed for fiendishly complex UI layouts... it's an abomination for programmers and designers alike, and always has been. To this day it still requires herculean effort to achieve things that are trivial with real UI toolkits or publishing tools.

I guess now that we're in this hole, we'd best keep digging.


I think that it's clear that any technology that helps the browser better compete with native applications in terms of performance is worthwhile.

Regarding new specs, we're of course working on that too, with specifications like flexbox and grid layout. Part of Servo's goal is to steer the conversation toward what can be done to make future CSS specs parallelizable on CPUs and GPUs. But of course we want to be fast on existing Web content as well.


Native applications do layout too, and UI toolkits such as Qt are single-threaded.


Why is that a reason to remain single-threaded? Especially when most new hardware is multi-core.


Consider the parallel prefix-sum task: a naive parallel version does N*log(N) additions where the single-threaded version does N. Which is more energy efficient?

If it takes 10x the power to do layout in 20ms vs. 30ms, is it still a good deal?


10x is a silly exaggeration. As for your general point, you'd be surprised: drawing more power is often much better for energy efficiency than drawing less, if it lets you complete the task faster. That is because the win from getting back to the CPU's low-power idle state sooner dominates everything else. It is almost always better to use the power available to you.
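
To put illustrative, made-up numbers on that: compare the energy used over a fixed 30 ms window, assuming 0.1 W idle draw, a sequential engine that works the whole window at 1 W, and a parallel engine that finishes in 12 ms at 2 W:

    E_{seq} = 1\,\mathrm{W} \times 30\,\mathrm{ms} = 30\,\mathrm{mJ}

    E_{par} = 2\,\mathrm{W} \times 12\,\mathrm{ms} + 0.1\,\mathrm{W} \times 18\,\mathrm{ms} = 25.8\,\mathrm{mJ}

Despite double the peak power, the parallel run uses less total energy, because it spends most of the window idling. That's the race-to-idle effect.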


HTML layout is not super-fast on low-end phones, and due to marketing stupidity those phones have 4-8 slow cores instead of 1-2 fast cores. I imagine Servo could do well there.

Drastically changing the HTML standard would take even longer than developing Servo.


> due to marketing stupidity those phones have 4-8 slow cores instead of 1-2 fast cores.

Beyond marketing, wouldn't more slow cores consume less battery power than fewer fast cores?


Yes, this is also part of Servo's strategy apparently: spread work across multiple cores, but each core does less and can be more efficient in terms of power draw. Faster + less battery use in one fell swoop. Though it's not really clear to me why this actually results in an efficiency win over maxing out a single core.


The way I've heard it, as a very imprecise rule of thumb, increasing clock speed scales power consumption quadratically, while adding CPUs increases power consumption linearly. So, very roughly speaking, one 1.6 GHz CPU uses twice as much power as two 800 MHz CPUs.


The equation for CMOS switching power (as opposed to leakage power) is

capacitance * voltage^2 * frequency

but the voltage limits the clock frequency. The exact scaling of maximum clock frequency with voltage depends on the circuit and the process, but from taking a glance at the voltage tables it looks like the main cores in current Snapdragons are generally running in the 0.8-1.2v range across their entire frequency range.
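
For what it's worth, the usual back-of-the-envelope derivation behind the grandparent's "square" rule of thumb assumes voltage has to scale roughly linearly with frequency:

    P = C V^2 f, \quad V \propto f \;\Rightarrow\; P \propto f^3, \quad E_{op} = P/f \propto f^2

So energy per operation grows as the square of clock speed while voltage scaling is in play, and stays roughly flat once it isn't (which matches the nearly constant voltages in those Snapdragon tables).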


Most of the power difference between cores these days isn't in clock speeds but in the extra structures that allow more instructions to be executed each cycle. The rule of thumb involving voltage/speed scaling is that your power use is indeed more or less the square of your performance when you're in a reasonable region. However, a relatively simple in-order core like an ARM A7 might take only 1/4 the energy to execute a given instruction compared to a complex out-of-order core like an ARM A15, even when both are clocked at 1 GHz on the same process.


> Though it's not really clear to me why this actually results in an efficiency win over maxing out a single core.

I heard the whole multi-core thing is spurred in part by manufacturers trying to increase fab yield: if one of the cores on a multi-core chip is bad, the chip can still be sold, albeit at a lower price, rather than thrown out.


If apps were parallel, yes. But most of them aren't.


If Servo works, most* of the Web will be parallel.

* This is probably too optimistic.


I'd like to see a memory usage comparison with Servo, since mobile devices have far less memory than desktops.

On the other hand, I believe that HTML layout can be done, using only a single thread, much faster and with far less memory than current mainstream browsers manage, by greatly simplifying the code (i.e. removing excessive abstraction, using different data structures, etc.). It's not exactly the same, but along the same lines as this related item that appeared here a few days ago: https://news.ycombinator.com/item?id=7457674


You'd be surprised: it's not that easy to beat existing browser engines. The IE trick you mention is pretty trivial. That said, it is possible to win, and one of the great benefits of Servo is that its data structures are simpler than other browser engines' in many ways. But it's not as easy as you think.


> Don't pages load fast enough already?

Ho ho ho... good one.


Try drawing larger pages. That is the bottleneck in one of my applications.


Why all the effort to build the flow tree? Can't any block be rendered in parallel to a buffer and then copied into the correct location once all the widths and heights are known? In the examples given, it seems like all blocks, including the green one, could be rendered in parallel to separate buffers.


Layout is precisely about learning all the widths and heights. No rendering whatsoever happens in layout. And as you pointed out, rendering can be parallelized rather easily.
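
A toy sketch of that split, with the same caveat that this is illustrative rather than Servo's code: once layout has produced final rectangles, each display item can be painted into a private buffer in parallel, with compositing done afterward.

    use std::thread;

    struct Rect { x: u32, y: u32, w: u32, h: u32 }

    struct DisplayItem {
        bounds: Rect, // final position/size, already computed by layout
        color: u32,
    }

    // Paint every item into its own buffer concurrently; nothing is
    // shared, so no locks are needed. Compositing (copying each buffer
    // to bounds.x/bounds.y) happens afterward.
    fn paint(items: &[DisplayItem]) -> Vec<Vec<u32>> {
        thread::scope(|s| {
            let handles: Vec<_> = items
                .iter()
                .map(|item| s.spawn(move || {
                    vec![item.color; (item.bounds.w * item.bounds.h) as usize]
                }))
                .collect();
            handles.into_iter().map(|h| h.join().unwrap()).collect()
        })
    }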


Oddly enough, I just checked and Safari is not passing Acid2 right now. Happens on my iPhone too.

http://www.webstandards.org/files/acid2/test.html#top


Acid2 was designed before high-density screens were common; it has known bugs when one CSS px is not exactly one device pixel:

https://bugzilla.mozilla.org/show_bug.cgi?id=580920


Actually, on my Mavericks box, Safari 7.0.2 seems to pass Acid2 just fine; the nose even changes color on hover. However, Chrome 35.0.1912.2 pretty obviously fails, with the top of the head messed up and a red line over the forehead. The nose changes color, so that's good. :)


To quote: "Servo passes Acid2 in my branch". It is probably not in default branch yet.


To quote: "Safari is not passing".


Are there any sites with runnable Servo binaries? Or is it seriously just the engine with absolutely zero browser chrome?


We have no browser chrome, for two reasons:

(1) Browser UX research isn't part of the research agenda for Servo at this time. (That isn't to say we aren't interested in browser UX research at Mozilla, just that that isn't being done under the Servo project.)

(2) It helps to ensure that we are embeddable. We want Servo to be an embeddable Web rendering engine for people to use in their own projects.


Embeddable. Wow. This is what Gecko hoped to achieve and failed to do. Let's hope the second time around everything will be much, much better.


There's also a chance that they'll adopt the WebKit embedding API, which would make Servo a drop-in replacement for any software that currently uses WebKit. Though it hasn't been decided for certain whether to pursue this route.


Whilst it may not be as easy as you'd like, Firefox definitely is embeddable. My company uses it in our desktop photo application.


Servo's progress is impressive. How far is it from being ready to integrate into a real browser? Is this likely in the foreseeable future?


Mozilla keeps saying it is experimental and not intended to replace Firefox, but it sure seems like they're putting a lot of effort into it and Rust...

That said, I don't think it could be ready for years, if ever.


> That smiley face looks like the face of someone that just passed an acid test. -- @alekslitynski

I remember the mess I saw on my PSP when I tried it out back when Acid2 was important.


For the confused: @alekslitynski is making a reference to https://en.wikipedia.org/wiki/Acid_Tests.


Can't wait until I can get a browser that uses this, especially on Android.


Posts like this always make me want to play with a layout engine trying to implement various features. Is there a toy layout engine made specifically for (self-) education? I couldn't find one. If there isn't, what is otherwise the easiest one to get started with?


Or contribute to Servo! Despite the huge milestone represented by Acid2, it's still pretty early days and there is plenty of work to go around. The code's on GitHub [1], Josh Matthews gave a nice talk at FOSDEM that should give you a start on the architecture and how to contribute (video [2], slides [3]), and there is an IRC channel (#servo on irc.mozilla.org) where you can ask questions.

[1] https://github.com/mozilla/servo/ [2] http://ftp.osuosl.org/pub/fosdem//2014/UD2218A/Saturday/Serv... [3] http://www.joshmatthews.net/fosdemservo/


Servo is also pretty easy to navigate, IMO. I'm biased, of course :) But we do care a lot about code cleanliness.


WeasyPrint is a layout engine written in Python. It is not written for education, nor is it a toy, but I found it easier to understand than other engines.

One of the Servo authors worked on WeasyPrint.

WeasyPrint passes Acid2.

http://weasyprint.org/


How about writing your own?

If you want an existing one, in a simpler browser, try Dillo or NetSurf.


The HTML5 spec can be implemented from scratch, but my guess is it takes a dozen times more people than implementing a C++ compiler. My basis for this comment is the now-four-years-dormant project I worked on to implement just parsing HTML5 into a DOM tree, which wasn't even inspired by wanting to write my own browser. I was motivated more by an interest in hacking on text layout code, and the need for some sort of syntax for my test cases. Since the project had no commercial goal, it went down a tangent.

I'm impressed that a non-profit has funded two of these projects.


But is it any faster at doing so than any of the competition?

Specifically, are there any real benefits to the "parallel" aspect of this?


I haven't measured Acid2 layout performance, because it's not particularly interesting; Acid2's CSS is nowhere near the CSS that someone would actually write in the real world. On real pages, from our small amount of testing, we've seen promising results.


I'll have to give it another try. Last build I ran couldn't load that much.

Of course, Firefox is already bloody fast on any site I can think of. Not sure what sorts of improvements I should be looking for.


Oh, Servo certainly isn't a production-ready browser engine yet, if that's what you're looking for; the incomplete DOM code and network code prevent many sites from working.


It's more than just being production-ready. I'm wondering what lessons have been learned that will actually help current browsers. It seems that Servo is in a massive game of catch-up, with no guarantees that things will be faster/better/whatever.


If you're looking for guarantees, you may not understand how research works :)

When it comes to areas in which Servo is ahead of current browsers, I can name many: off-main-thread layout, parallel layout, off-main-thread iframes (in threads rather than separate processes, avoiding process-scaling issues), a fully garbage-collected DOM without cycle collection/reference counting or stop-all-threads GC, and, most of all, being written in a memory-safe language. These are all areas in which other browser engines would need to catch up to Servo--though it's unclear how to do that without a complete rewrite, especially for that last one.


I'm thinking there will at least be lessons learned. That lesson may be that going parallel offers no benefit. Which sucks, in that we are hoping otherwise, but it is a possibility. Right?

That is, I am more asking as to what lessons have been learned. Not demanding that we know what progress was made. Since, as you point out, we may not have made any.

Which is to say, I should throw up a huge "I'm not trying to dissuade any of this effort." If I have been too negative in my comments here, I humbly apologize!


Sure, no problem. Here is one:

http://pcwalton.github.io/blog/2014/02/25/revamped-parallel-...

I'm hoping to make more blog posts.


Awesome, thanks! This is pretty much exactly in line with what I was looking for. Looking forward to reading more of these.


I remember the same things being said about Mozilla/Gecko back in '99. That seemed to turn out pretty well for everyone.


Said by plenty of folks - for example, Joel Spolsky: http://www.joelonsoftware.com/articles/fog0000000069.html


"I remember the same things being said about Mozilla/Gecko back in '99."

Except nobody ever said that back in '99.


Interesting. How does one go about proving a negative?

To support my side, do a Google search for "Joel Spolsky Mozilla". Hope that helps. Granted, it was written four months into 2000, but it was reflecting murmurings one would read on Slashdot months earlier (back when /. was the HN of its time).


I'm not sure what this proves, though. It didn't exactly work out well for Netscape.

To be fair, the lesson learned there was to not bet your company on a rewrite. Which they are not doing.


I'd say that the first thing it proves is that you don't remember the tech scene in '98. All the comments you brought up in your GP post were echoed loudly by most pundits at the time. Cringely, Spolsky, and many others said AOL was on a fool's errand in allowing the then-unproven open-source bazaar model to rewrite the browser. And that they'd be playing catch-up. And that IE would win because nobody would care about the rewrite. Fast forward to 2014, and Blink/Firefox fight for first place, while IE continues to lose market share. Mozilla not only caught up, but is now setting the pace.

The second thing it shows is that history repeats itself. The pundits, much as you will prove to be, were wrong. Thanks to the brilliance of jwz, the Mozilla project and the Gecko rewrite have outlived both Netscape and AOL. Firefox is a flagship example of how open development can create a superior product that outlasts the companies that make it.

The next thing it proves is that open development continues to show that rewriting an engine doesn't require "betting the company" anymore. Mozilla and Samsung are both heavily invested in Servo, and expect this rewrite to be at the core of your future operating system. But if Servo doesn't pan out, Mozilla won't be filing for Chapter 11 protection.


I'm not sure what I'm reading here. The first thing to remember is that Netscape 4 was pretty rushed and terrible. To the point that I remember thinking how awesome IE was at a few points, and I already had a large distrust of MS.

That is, MS used some underhanded tactics to gain market share. They also took advantage of (and probably forced) a major misstep by a competitor.

That Phoenix/Firefox was able to rise from the ashes is a fortunate occurrence, but at no time did it come off as planned. I pretty much consider it the "Classic Coke" of the browser wars. (Remember, Phoenix was originally Mozilla's suite, stripped down to just the browser.)

So, yes, it has gone rather well. However, I'm not sure the codebase can afford to survive the death of its stewardship again. The statistics show a clear dominance of "not Mozilla" historically.

To the point that I'm not at all sure what you're basing your claim of Mozilla "now setting the pace" on. Don't get me wrong, I'm glad it is doing well. It is my browser of choice. However, I realize I am in the minority, both among my friends/family and in the statistics.

Which is funny. In my family, the browser choice is either Safari or IE, depending on OS of choice. Among my friends, it is Chrome. To the point that I'm not even clear what lessons are to be learned from these choices, honestly.

So, back to the point. How were the pundits wrong? Did AOL/Netscape somehow come off well from the rewrite? Was it a sound investment? If anything, I would think the continued active development of the non-Servo codebase shows that it is sound advice not to bet the company on a rewrite, and that they learned it. Are you really claiming otherwise?


> Of course, firefox is already bloody fast on any site I can think of.

That's because all the sites we can think of are designed to be fast in Firefox (and other current browsers). Once super-fast browsers become standard, we can start designing sites that would be unusable on today's Firefox :)


I can only hope I am not imagining the sarcasm that goes with that idea.


Semi-relatedly, because Rust is a memory-safe language, there are whole classes of bugs (buffer overflow, use-after-free, etc.) that are impossible, which is great for Servo's security.
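
A toy illustration (nothing to do with Servo's actual code) of what "impossible" means here: the commented-out line below is a use-after-free in miniature, and uncommenting it turns the program into a compile error rather than an exploitable bug.

    fn main() {
        let node = String::from("<div>payload</div>");
        let alias = &node; // borrow the node
        // drop(node);     // uncommenting this yields error[E0505]:
        //                 // cannot move out of `node` because it is
        //                 // borrowed -- rustc rejects the use-after-free
        println!("{}", alias);
    }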


That actually makes a ton of sense. I think this should be trumpeted more than the parallel abilities.


Looks better than current Firefox on my system. http://i.imgur.com/6CN4Heh.png


Ouch! Acid2 is broken on my Firefox too (but in a slightly different way). :(


No problems on FF 31. http://i.imgur.com/5l2042M.png


Looks exactly the same on Firefox for me.


Am I the only one who thought Acid2 was something to do with a new database data integrity test?


me too


Good, now maybe you could fix that broken IndexedDB spec and come up with something more useful...

and implement basic stuff like summary/details or, you know, some HTML5 form controls...

And some file system API. It seems to me Mozilla doesn't care about offline webapps, since the WebSQL debacle...


> seems to me Mozilla dont care about offline webapps

Did you get that impression from the fact that they're building a whole OS based on offline webapps?


When did that OS support WebSQL? Tell me.

Oh wait, it does not, because of this crappy IndexedDB Moz pushed.


Not supporting your pet feature != not supporting offline webapps. WebSQL is not even a standard.

Besides, if you want to use SQLite in the browser, you don't need WebSQL: https://github.com/kripken/sql.js


> Not supporting your pet feature != not supporting offline webapps

See, your arrogance will be your doom. I'm sure devs who take HTML seriously will appreciate your contempt and disdain.

> WebSQL is not even a standard.

Because Moz folks did not support it. And you want me to use a (synchronous) JavaScript port when browsers already ship with SQLite?

Did you even test sql.js? No, it's not even production-ready.


As far as I know, WebSQL is deprecated; no work has been done on the standard since 2010. Firefox, Chrome and IE support IndexedDB. There exists a shim that uses WebSQL or IndexedDB, depending on what's available.


Did you even try that shim? It's broken, and I'm not interested in using a slow NoSQL DB where I'd have to do manual joins or write JavaScript queries to persist my complex data in my offline apps.


Which one? There are several. Then don't target Firefox OS; it's not like Android or iOS is going to drop support for WebSQL anytime soon. Me personally, I've written an ORM around localStorage; works everywhere :)


> offline webapps

Paging Dr. Oxymoron



I'm going to have to apologise and eat humble pie because, unbeknownst to me (and found through your link), the HTML standard actually has a section called "Offline Web applications" [1].

[1] http://www.whatwg.org/specs/web-apps/current-work/multipage/...


I think he means Chrome OS or Firefox OS type apps, where it's just locally hosted HTML+JS in some cases.


Offline and Web have always been very funny to me. Why should it be the Web's responsibility to make offline better? It is the Web, after all -- the epitome of online connectivity. If anything, it should be the mobile companies' responsibility to have better offline support. Maybe they could start by not choosing to arbitrarily wipe Web browser data as the first step of reclaiming storage for the operating system (I'm looking at you, Apple)?


"Why should it be the Web's responsibility to make offline better"

Because you care about your users being able to use the site?

Because modern users aren't tethered to the wall any more?

"Maybe they could start by not choosing to arbitrarily wipe Web browser data as the first step of reclaiming memory for the operating system"

Okay, now I'm curious.

Assume the system is totally out of storage. Something has to go. Should it be:

1) Apple's answer (i.e., deleting cached web pages and data)

2) Your answer (which would be?)


I care about webapps being able to survive temporary offline situations, and about local caching. It is one of my bigger headaches.


Offline webapps are an easy way to distribute apps. No installation steps, easy updates; probably even easier than an app store.

Also, very easy deployment and hosting of your app.



