> No doubt about it, CSS 2.1 is tricky—floats perhaps more than anything else.
I really hope that Servo helps to identify parts of the HTML/CSS specs (if any) that unnecessarily prevent parallelism. By that I mean features (like "float" perhaps) that make it harder to parallelize, and where an alternative design could support the same use cases in a better way.
Those are the gems that teach us deep lessons about the problem space, and how future similar technologies ought to be designed.
> I really hope that Servo helps to identify parts of the HTML/CSS specs (if any) that unnecessarily prevent parallelism.
This is beginning to happen; for example Servo work on running <iframe sandboxed> documents in parallel [1] led to the discovery that the HTML spec allowed different-origin iframes to mutate into same-origin, through the "document.domain" setter [2], which made process isolation infeasible (since it would require some way to merge separate processes with separate heaps back into a single process). That discussion led to a change in the spec [3].
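The setter in question is the one below; before the spec change, two cooperating documents on different subdomains could use it to become same-origin after the fact (the hostnames here are just illustrative):

```javascript
// Run in both a page on a.example.com and an iframe on b.example.com:
document.domain = "example.com";
// Both documents now compare as same-origin, so each can reach into the
// other's DOM — which is why per-origin process isolation broke: two
// already-separate processes would have to merge their heaps.
```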
Agreed 100%. We want to be part of the conversation as to how to make future CSS parallelizable and amenable to modern hardware, both to educate Web authors as to how to make fast content and to shape the work done in the relevant committees.
I imagine that is quite a huge feat! Here's a question for the HN gurus though: how much of a browser's code deals with errors, compatibility, and legacy? I imagine a lot.
Just having more and more of this stuff documented (instead of having to reverse-engineer it from existing browsers) makes things a lot easier than they once were.
Arguably a large part of the interest in Servo is whether the specs are good enough that developing a new browser from scratch is now feasible — certainly, yes, there is a lot of institutional knowledge around Mozilla, but a from-scratch implementation isn't going to exactly match Gecko everywhere.
CSS without float would not work very well! Unless we all go back to table-based design (or ahead to flexbox, I guess; I don't know enough about that standard yet, practically speaking, to reason properly about it!)
Does that 'allow' the classic use case of floats - potentially, what should be the only use of floats - wrapping text around an image? I feel like the answer would be "no", yet there's no good alternative for achieving this common requirement, which would cause a snag.
If you know you need to use floats to flow text around elements, feel free to use them! They're in CSS for a reason :) But if there are alternatives (such as absolute positioning) that work equally well in your particular design, then choosing an alternative may help you gain better performance.
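The classic use case, plus the clear that lets the engine stop worrying about the float, looks something like this (class names are made up for illustration):

```css
/* Hypothetical class names, just to sketch the classic use case. */
.figure {
  float: left;          /* the following text wraps around the image */
  margin: 0 1em 1em 0;
}
.after-figure {
  clear: left;          /* nothing from here down is affected by the float */
}
```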
I believe you're right. But I don't think this is a huge problem: you simply don't get the benefit of laying out that specific text in parallel; it'll render sequentially, as browsers tend to do today anyway. You can still get parallelization in the other flows outside the container of the floating elements.
I believe it does, yes. The point is that when you clear them later, the browser has a static guarantee that that float will not matter anymore. So the complexity due to the float is only relevant until the clear, and it can parallelize before and after the clear.
Wired.com made (web) history by being one of the first major sites to deploy a pure-CSS design, which was based on floats because that's all that worked back in the days of Netscape 4 and IE 4. The web design community now primarily uses floats for multi-column design because of its historical browser support. However if you don't need to support IE7 and lower, floats are one tool among many for multi-col layout, and not necessarily the best. Flexbox will only add another tool to that toolbox.
Does the lack of a multi-threaded layout engine really matter? Don't pages load fast enough already? And if layout performance really is becoming a problem, particularly in cases of dynamic DOM manipulation etc, isn't that largely the fault of the spec? Even shiny HTML5 is hardly designed for fiendishly complex UI layouts... it's an abomination for programmers and designers alike, and always has been. To this day it still requires herculean effort to achieve things that are trivial with real UI toolkits or publishing tools.
I guess now we're in this hole we best keep digging
I think that it's clear that any technology that helps the browser better compete with native applications in terms of performance is worthwhile.
Regarding new specs, we're of course working on that too, with specifications like flex box and grid layout. Part of Servo's goal is to steer the conversation toward what can be done to make future CSS specs parallelizable on CPUs and GPUs. But of course we want to be fast on existing Web content as well.
10x is a silly exaggeration. As for your general point, you'd be surprised: drawing more power is often much better for overall energy efficiency than drawing less, if the extra power lets you complete the task faster. That is because the win from getting back to the CPU's low-power idle state sooner dominates everything else. It is almost always better to use the power available to you.
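A back-of-the-envelope illustration of that "race to idle" effect, with made-up wattages: finishing in 1 second at higher power and then idling can use less total energy than running for 2 seconds at lower power.

```javascript
// Hypothetical power figures, in watts, purely to illustrate race-to-idle.
const activeFast = 4.0; // CPU flat out
const activeSlow = 2.5; // CPU throttled down
const idle = 0.1;       // low-power idle state

// Energy over a 2-second window (joules = watts * seconds):
const energyFast = activeFast * 1 + idle * 1; // finish in 1 s, idle for 1 s
const energySlow = activeSlow * 2;            // busy for the whole 2 s

console.log(energyFast, energySlow); // racing to idle uses less total energy
```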
HTML layout is not super-fast on low-end phones, and due to marketing stupidity those phones have 4-8 slow cores instead of 1-2 fast cores. I imagine Servo could do well there.
Drastically changing the HTML standard would take even longer than developing Servo.
Yes, this is also part of Servo's strategy apparently: spread work across multiple cores, but each core does less and can be more efficient in terms of power draw. Faster + less battery use in one fell swoop. Though it's not really clear to me why this actually results in an efficiency win over maxing out a single core.
The way I've heard it, as a very imprecise rule of thumb, increasing clock speed scales power consumption quadratically, while adding CPUs increases power consumption linearly. So, very roughly speaking, one 1.6 GHz CPU uses twice as much power as two 800 MHz CPUs.
The equation for CMOS switching power (as opposed to leakage power) is
capacitance * voltage^2 * frequency
but the voltage limits the clock frequency. The exact scaling of maximum clock frequency with voltage depends on the circuit and the process, but from taking a glance at the voltage tables it looks like the main cores in current Snapdragons are generally running in the 0.8-1.2v range across their entire frequency range.
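Plugging hypothetical numbers into that equation shows why the voltage term matters so much (the capacitance and voltage values below are invented for the sketch, not measurements of any real chip):

```javascript
// Dynamic (switching) power: P = C * V^2 * f, in arbitrary units.
function dynamicPower(capacitance, voltage, frequency) {
  return capacitance * voltage ** 2 * frequency;
}

// Assume the single fast core needs a higher voltage to hit its clock.
const oneFastCore = dynamicPower(1.0, 1.2, 1.6e9);      // 1.6 GHz at 1.2 V
const twoSlowCores = 2 * dynamicPower(1.0, 0.9, 0.8e9); // 2x 800 MHz at 0.9 V

console.log((oneFastCore / twoSlowCores).toFixed(2)); // "1.78" — close to the 2x rule of thumb
```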
Most of the power difference between cores these days isn't in clock speeds but in the extra structures that let more instructions be executed each cycle. The rule of thumb for voltage/frequency scaling is that power use is indeed more or less the square of performance, when you're in a reasonable operating region. However, a relatively simple in-order core like an ARM A7 might take only 1/4 the energy to execute a given instruction compared to a complex out-of-order core like an ARM A15, even when both are clocked at 1 GHz on the same process.
> Though it's not really clear to me why this actually results in an efficiency win over maxing out a single core.
I heard the whole multi-core thing was spurred in part by manufacturers trying to increase fab yields: if one of the cores on a multi-core chip is bad, the chip can still be sold, albeit at a lower price, rather than thrown out.
I'd like to see a memory usage comparison with Servo, since mobile devices have far less memory than desktops.
On the other hand, I believe that HTML layout can be done, using only a single thread, much faster and with far less memory than current mainstream browsers manage, by greatly simplifying the code (i.e. removing excessive use of abstraction, using different data structures, etc.). Not exactly the same, but along the same lines as this related item that appeared here a few days ago: https://news.ycombinator.com/item?id=7457674
You'd be surprised: it's not that easy to beat existing browser engines. The IE trick you mention is pretty trivial. That said, it is possible to win, and one of the great benefits of Servo is that its data structures are simpler than other browser engines' in many ways. But it's not as easy as you think.
Why all the effort to build the flow tree? Can't any block be rendered in parallel to a buffer and then copied into the correct location once all the widths and heights are known? In the examples given, it seems like all blocks, including the green one, could be rendered in parallel to separate buffers.
Layout is precisely about learning all the widths and heights. No rendering whatsoever happens in layout. And as you pointed out, rendering can be parallelized rather easily.
Actually, on my Mavericks box, Safari 7.0.2 seems to pass Acid2 just fine; the nose even changes color on hover. However, Chrome 35.0.1912.2 pretty obviously fails, with the top of the head being messed up and a red line over the forehead. The nose changes color, so that's good. :)
(1) Browser UX research isn't part of the research agenda for Servo at this time. (That isn't to say we aren't interested in browser UX research at Mozilla, just that that isn't being done under the Servo project.)
(2) It helps to ensure that we are embeddable. We want Servo to be an embeddable Web rendering engine for people to use in their own projects.
There's also a chance that they'll adopt the WebKit embedding API, which would make Servo a drop-in replacement for any software that currently uses WebKit. Though it hasn't been decided for certain whether to pursue this route.
Mozilla keeps saying it is experimental and not intended to replace Firefox, but it sure seems like they're putting a lot of effort into it and Rust...
That said, I don't think it could be ready for years if ever.
Posts like this always make me want to play with a layout engine trying to implement various features. Is there a toy layout engine made specifically for (self-) education? I couldn't find one. If there isn't, what is otherwise the easiest one to get started with?
Or contribute to Servo! Despite the huge milestone represented by Acid2, it's still pretty early days and there is plenty of work to go around. The code's on GitHub [1], Josh Matthews gave a nice talk at FOSDEM that should give you a start on the architecture and how to contribute (video [2], slides [3]), and there is an IRC channel (#servo on irc.mozilla.org) where you can ask questions.
WeasyPrint is a layout engine written in Python. It is not written for education, nor is it a toy, but I found it easier to understand than other engines.
The HTML 5 spec can be implemented from scratch, but my guess is it takes a dozen times more people than implementing a C++ compiler. My basis for this comment is the now-four-years-dormant project I worked on to implement just parsing HTML5 into a DOM tree, which wasn't even inspired by wanting to write my own browser. I was motivated more by having an interest in hacking text layout code, and the need for some sort of syntax for my test cases. Since the project had no commercial goal, it went down a tangent.
I'm impressed that a non-profit has funded two of these projects.
I haven't measured Acid2 layout performance, because it's not particularly interesting; Acid2's CSS is nowhere near the CSS that someone would actually write in the real world. On real pages, from our small amount of testing, we've seen promising results.
Oh, Servo certainly isn't a production-ready browser engine yet, if that's what you're looking for; the incomplete DOM code and network code prevent many sites from working.
It's about more than just being production-ready. I'm wondering what lessons have been learned that will actually help current browsers. It seems that Servo is in a massive game of catch-up, with no guarantees that things will be faster/better/whatever.
If you're looking for guarantees, you may not understand how research works :)
When it comes to areas in which Servo is ahead of current browsers, I can name many: off-main-thread layout, parallel layout, off-main-thread iframes (not out of process to avoid scaling issues), a fully garbage-collected DOM without cycle collection/reference counting or stop-all-threads GC, and, most of all, being written in a memory-safe language. These are all areas in which other browser engines would need to catch up to Servo--though it's unclear how to do that without a complete rewrite, especially for that last one.
I'm thinking there will at least be lessons learned. That lesson may be that going parallel offers no benefit. Which sucks, in that we are hoping otherwise, but it is a possibility. Right?
That is, I am more asking as to what lessons have been learned. Not demanding that we know what progress was made. Since, as you point out, we may not have made any.
Which is to say, I should throw up a huge "I'm not trying to dissuade any of this effort." If I have been too negative in my comments here, I humbly apologize!
Interesting. How does one go about proving a negative?
To support my side, do a Google search for "Joel Spolsky Mozilla". Hope that helps. Granted, it was written four months into 2000, but it was reflecting murmurings one would read on Slashdot months earlier (back when /. was the HN of its time).
I'd say that the first thing it proves is you don't remember the tech scene in '98. All the comments you brought up in your GP post were echoed loudly by most pundits at the time. Cringely, Spolsky, and many others said AOL was on a fool's errand by allowing the then-unproven open-source bazaar model to rewrite the browser. And that they'd be playing catch-up. And that IE would win because nobody would care about the rewrite. Fast-forward to 2014, and Blink/Firefox fight for first place, while IE continues to lose market share. Mozilla not only caught up, but is now setting the pace.
The second thing it shows is that history repeats itself. The pundits, much like you will prove to be, were wrong. Thanks to the brilliance of jwz, the Mozilla project and the Gecko rewrite have outlived both Netscape and AOL. Firefox is a flagship example of how open development can create a superior product that outlasts the companies that made it.
The next thing this proves is that open development continues to show that rewriting an engine doesn't require "betting the company" anymore. Mozilla and Samsung are both heavily invested in Servo, and expect this rewrite to be at the core of your future operating system. But if Servo doesn't pan out, Mozilla won't be filing for Chapter 11 protection.
I'm not sure what I'm reading here. The first thing to remember is that Netscape 4 was pretty rushed and terrible, to the point that I remember thinking at a few points how awesome IE was. And I already had a large distrust of MS.
That is, MS used some underhanded tactics to gain market share. They also took advantage of (and probably forced) a major misstep by a competitor.
That Phoenix/Firefox was able to be resurrected from the ashes is a fortunate occurrence, but at no time did that come off as planned. I pretty much consider it the "Classic Coke" of the browser wars. (Remember, Phoenix was originally Mozilla's suite, stripped down to just the browser.)
So, yes, it has gone rather well. However, I'm not sure the codebase can afford to survive the death of its stewardship again. The statistics show a clear dominance of "not Mozilla" historically.
To the point that I'm not at all sure what you're basing your claim of Mozilla "now setting the pace" on. Don't get me wrong, I'm glad it is doing well. It is my browser of choice. However, I realize I am in the minority, both among my friends/family and in the statistics.
Which is funny. In my family, the browser choice is either Safari or IE, depending on the OS of choice. Among my friends, it is Chrome. To the point that I'm not even clear what lessons are to be learned from these choices, honestly.
So, back to the point. How were the pundits wrong? Did AOL/Netscape somehow come off well from the rewrite? Was it a sound investment? If anything, I would think the continued active development of the non-Servo codebase shows that it is sound advice not to bet the company on a rewrite, and that they learned it. Are you really claiming otherwise?
> Of course, firefox is already bloody fast on any site I can think of.
That's because all the sites we can think of are designed to be fast in Firefox (and other current browsers). Once super-fast browsers become standard, we can start designing sites that would be unusable on today's Firefox :)
Semi-relatedly, because Rust is a memory-safe language, there are whole classes of bugs (buffer overflow, use-after-free, etc.) that are impossible, which is great for Servo's security.
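As a toy illustration of the kind of bug Rust's compiler rejects (this snippet deliberately does not compile; it's a sketch, not Servo code):

```rust
fn main() {
    let s = String::from("servo");
    let r = &s;        // borrow s
    drop(s);           // error[E0505]: cannot move out of `s` because it is borrowed
    println!("{}", r); // the would-be use-after-free never gets a chance to run
}
```

In C++ the equivalent program would compile and read freed memory; in Rust it is a compile-time error.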
As far as I know, WebSQL is deprecated; no work has been done on the standard since 2010. Firefox, Chrome, and IE support IndexedDB. There exists a shim that uses WebSQL or IndexedDB, depending on what's available.
Did you even try that shim? It's broken, and I'm not interested in using a slow NoSQL DB where I'd have to do manual joins or write JavaScript queries to persist my complex data in my offline apps.
Which one? There are several. Then don't target Firefox OS. It's not like Android or iOS is going to drop support for WebSQL anytime soon. Me personally, I've written an ORM around localStorage; works everywhere :)
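The core of such a wrapper boils down to something like this (function and key names are invented for the sketch; localStorage only stores strings, hence the JSON round-trip):

```javascript
// Persist structured data as JSON strings in localStorage.
function save(key, value) {
  localStorage.setItem(key, JSON.stringify(value));
}

function load(key) {
  const raw = localStorage.getItem(key);
  return raw === null ? null : JSON.parse(raw);
}
```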
I'm going to have to apologise and eat humble pie because, unbeknownst to me (and found through your link), the HTML standard actually has a section called "Offline Web applications" [1].
Offline and Web have always been very funny to me. Why should it be the Web's responsibility to make offline better? It is the Web, after all — the epitome of online connectivity. If anything, it should be the mobile companies' responsibility to have better offline support. Maybe they could start by not choosing to arbitrarily wipe Web browser data as the first step of reclaiming memory for the operating system (I'm looking at you, Apple)?
[1] https://github.com/mozilla/servo
[2] https://en.wikipedia.org/wiki/Rust_%28programming_language%2...