1. Google builds a new process architecture into Chrome as a product differentiator. (It was a major part of Chrome's initial marketing)
2. WebKit 2 is built (mostly by Apple?) to bake the same type of architecture straight into the core framework -- anyone using WebKit can use it and get the same security/stability benefits.[1]
3. Google says that the pain of maintaining their separate, non-standard process architecture is too much of a burden to keep contributing to WebKit proper, so they must fork.
Why can't Chrome implement WebKit 2? Are there major advantages to Chrome's process model that are not present in WebKit 2? Is there a reason why WebKit 2 cannot be patched to provide those advantages?
Maybe because anyone who has used Safari knows that WebKit2's multiprocess architecture is worse in practice. As a full-time Safari user I can tell you that things have gotten markedly worse since it went multiprocess. Pages go white momentarily (not a crash) all the time, and on top of that they also crash all the time. Additionally, I believe that unlike Chrome, Safari has ONE separate render process shared by all the tabs (whereas Chrome aims for one process per tab -- way better in my opinion).
Just out of curiosity, why do you think WebKit2 is the "standard"? Just because they named it "WebKit2"? Had Google named their multi-process version "WebKitB", would it be equally standard? Chrome's architecture certainly came FIRST (I think it took years before WebKit2 had an answer).
I think this is a success of open source. Was the creation of WebKit a failure of Mozilla open source? Of course not. Sometimes you need to just actually test two ideas instead of discussing them.
EDIT: Upon further inspection, I think current Safari isn't even using WebKit2. The wiki still says it should be considered a "technology demo", and that it exists largely in parallel to WebKit. So whatever multi-process thing it's doing now I guess is separate? It's not clear to me.
EDIT 2: I guess it is using WebKit2, so my criticisms stand.
Wikipedia says WebKit2 has been part of Safari since Safari 5.1, which I believe was released with Lion, or so. So it absolutely is in production. The major difference here is that WebKit2 is part of the WebKit project, so as the rendering engine (which sits atop both WebKit1 and WebKit2) is improved by all the various parties involved in WebKit's development, it gets better for everyone. Google's fork now means that changes they make to the rendering engine in Blink will no longer have any effect on the WebKit project, which does seem to be a major difference.
I don't know all the history here, but my understanding is that Chrome's engine was never offered in that same sort of way. It was just part of the Chromium project, apparently a part that increased the difficulty of integrating with WebKit, not an API intended to be taken by others and integrated into their browsers.
> The major difference here is that WebKit2 is part of the WebKit project, so as the rendering engine (which sits atop both WebKit1 and WebKit2) is improved by all the various parties involved in WebKit's development, it gets better for everyone. Google's fork now means that changes they make to the rendering engine in Blink will no longer have any effect on the WebKit project, which does seem to be a major difference.
It also means that the work they do on the rendering engine for Blink won't be constrained by the obligations imposed by supporting the various WebKit implementations.
Well that's ultimately kind of sad, right? It's certainly not the argument that was made in this blog post, but it certainly is a believable one. WebKit is a giant open source project that is relied upon by many many different companies and Google contributed a lot to that. Now they are gone and all those companies are left continuing to build WebKit with a goal of interoperability between the myriad different clients while Google takes their ball and goes home and only works on their own platforms.
Why is it sad for Chrome development not to be held back by WebKit when each project prefers a different approach?
Personally, I think it's win-win: Chrome and WebKit, which have conflicting approaches to a variety of issues, are free to take their own approaches and prove them out. This is a good thing for progress.
> It's certainly not the argument that was made in this blog post, but it certainly is a believable one.
It's actually exactly what the post says when it says "However, Chromium uses a different multi-process architecture than other WebKit-based browsers, and supporting multiple architectures over the years has led to increasing complexity for both the WebKit and Chromium projects. This has slowed down the collective pace of innovation - so today, we are introducing Blink, a new open source rendering engine based on WebKit."
Forking happens all the time and is natural: good since you can go your own way, bad since you no longer contribute to the mainline. It's like speciation in that way, where some group of animals splits off and stops mating with (and hence evolving with) another group.
>WebKit is a giant open source project that is relied upon by many many different companies...Google takes their ball and goes home and only works on their own platforms.
Blink is open source, it's simply a fork of WebKit. That's why Opera is intending to use it instead of WebKit2. It's like Google wanted to take their ball to another court and still let everyone play.
>The major difference here is that WebKit2 is part of the WebKit project, so as the rendering engine (which sits atop both WebKit1 and WebKit2) is improved by all the various parties involved in WebKit's development, it gets better for everyone.
That's in theory. In practice nobody bothered besides Apple.
You're right that Chrome's multiprocess architecture is more mature than the WebKit2 design. I wish we hadn't ended up in a position where we felt we had to make our own. But stay tuned - we have some great stuff coming up.
Like most things, habit probably. Also, I used to work a lot on Safari/WebKit, so I guess sentimental value?
That being said, I do like the way Safari "feels" a lot more than Chrome. I think Chrome is actually quite ugly, and bad from a UI perspective. For example, Safari's overflow tab menu is a much nicer solution than Chrome's insistence on shrinking tabs ever smaller until you can't tell them apart at all.
Additionally, Chrome is missing a killer feature I use all the time: zoom, and more specifically, double-tap to zoom. I find it hard to read a lot of text on websites, and a double-tap, centered zoom is amazing. It's too bad everything else feels like it's gotten way worse in the last 5 or so years.
I also use Safari pretty much exclusively because I like the UI over Chrome, but the one feature I love about Chrome is its tab closing/resizing. When a tab is closed, the remaining tabs resize and shift appropriately so that your mouse is over the next tab's close button. (See http://www.theinvisibl.com/2009/12/08/chrometabs for a nice explanation.)
The tab behavior in Chrome is what does it for me too. In fact, you can have so many tabs that you can no longer open the last ones (the tabs extend past the little full-screen or switch-user icon).
I also think Safari has a much better solution for handling bookmarking when you have multiple tabs. Solve these two issues and I'd switch to Chrome in a second.
I actually prefer Firefox's approach to multiple tabs: the tab bar scrolls (you can use the scroll wheel). That's about the only thing I like in Firefox anymore, though. I've been using Chrome because of its multi-user support (tied in with Google accounts), but Safari is soooo much better than Chrome in terms of UI and UX. The latest Chrome betas push this further, with the full integration of the address bar as a full-blown "Google box" (e.g., you start typing and it loads Google Instant Search instead of the page's classic search box).
I went and looked at the "Stacked Tabs" flag and it appears to be Windows only.
From chrome://flags:
Stacked Tabs Windows
Tabs never shrink, instead they stack on top of each other when there is not enough space.
Sorry, this experiment is not available on your platform.
I meant "smooth zoom" I guess? I can pinch in from anywhere on the page and center in on any particular item at any particular zoom level. Unless I've missed something, Chrome seems to have "integral" stepwise zooming, which also always zooms from the center of the page (vs where my mouse cursor is). It ends up feeling like Safari's old zoom which would just make the individual items bigger (as opposed to actually scaling the page), but I'm pretty sure Chrome is indeed scaling the page, just doing it in a way that I find frustrating and less useful.
Another example, I'm watching an embedded YouTube video which thus annoyingly doesn't allow fullscreen, so I double tap it, and it fills the browser window, and it seems to be smart enough to render at correct resolution at that scale (since it still allows you to choose a higher resolution even though it won't full screen).
To do similar feats with Chrome I have to manually jigger things around.
Safari is the only browser that has decent scrolling performance on a Retina MacBook. All the other browsers stutter really badly as soon as the page has any complexity with fixed elements (Facebook is a big one).
Yeah, that's certainly true. Pretty frustrating to have spent $2,500 or whatever it was on my retina MBP and I can't smoothly scroll websites with more than like 500 DOM nodes in Chrome.
That's a case where Apple has to optimize the heck out of Safari until it is workable on that particular new machine, otherwise it would get terrible reviews and hurt the overall brand.
Chrome doesn't have the same incentives to get that particular machine working well, especially since the next revision of the Retina Macbook probably won't even need special handling since it will surely have a more powerful GPU. They can just wait it out, similar to what happened with the iPad 3 vs iPad 4, where people had to optimize for the iPad 3 and then those optimizations were unneeded on the iPad 4.
> That's a case where Apple has to optimize the heck out of Safari until it is workable
Actually, they "merely" leverage their platform by using CoreAnimation, making it GPU accelerated (and enabling pinch-to-zoom), while Chrome scrolling hits the CPU really hard.
I can confirm what tolmasky said about WebKit2, yet I'm also a Safari user. The interface just feels better for me, and Chrome has actually been getting worse in this respect. When I stopped using it, a few versions ago, it had started feeling more and more like a Chrome OS VM than like a browser on my Mac. I doubt they've changed course since then.
It's a complex question. To be stunningly reductive: the architectures are simply quite different. We hook into the network stack in different places, we have different sandbox models and constraints (Win XP for instance), etc.
Also note that the timing is fairly important: Chromium was quite far along with our implementation when WebKit2 was announced, and rather than iterating on the solution we'd proposed and run with, Apple created its own framework. That had advantages and disadvantages.
More generally, I'd point to the Content layer as a better integration point: Opera, for instance, is building on top of our multi-process architecture successfully. Chromium Embedded Framework (https://code.google.com/p/chromiumembedded/) is another example of how other projects can leverage the work we've done.
As long as we are recapitulating history - the main reason we built a new multiprocess architecture is that Chromium's multiprocess support was never contributed to the WebKit project. It has always lived in the separate Chromium tree, making it pretty hard to use for non-Chrome purposes.
Before we wrote a single line of what would become WebKit2 we directly asked Google folks if they would be willing to contribute their multiprocess support back to WebKit, so that we could build on it. They said no.
At that point, our choices were to do a hostile fork of Chromium into the WebKit tree, write our own process model, or live with being single-process forever. (At the time, there wasn't really an API-stable layer of the Chromium stack that packaged the process support.)
Writing our own seemed like the least bad approach.
If Google had upstreamed their multiprocess support, we almost surely would have built on it. And history might have turned out differently.
I'd also add that I disagree with Mike about the architectures being really different. In fact, they are quite similar in broad strokes, but with many differences in details (and with the significant difference that the Chromium model isn't in WebKit per se).
I don't understand this claim. WebKit2 was landed with effectively no notice and no attempts at collaboration. I saw repeated attempts to work on a shared architecture in WebKit2, but none were reciprocated. http://goo.gl/KH1Sr Eventually all non-Apple contributors were cut off entirely from WebKit2 as a matter of policy. http://goo.gl/iTDAR
We talked privately with particular Chrome folks before we started (as described upthread), in the middle, and shortly before landing to mention that we were landing soon.
I don't know if the contents of these conversations were ever shared with the whole Chrome team, as some Chrome people seemed super surprised at our announcement.
It is true that when we announced our effort, it came with a rough working prototype and not just an empty directory. Basically because we did not know if we could do it until we tried.
BTW I am not trying to pick a fight here. I think mikewest's comment gave the impression that Apple built a multiprocess architecture out of cussedness or NIH. But that's not how it was.
Google had the right to make their choices and we had the right to make ours.
>We talked privately with particular Chrome folks before we started (as described upthread), in the middle, and shortly before landing to mention that we were landing soon.
Yes, I'm aware of that, but the work had been underway for a long time and was about to be dropped by the time there was a real heads up. So the core of the architecture was already being frozen from a larger perspective.
>BTW I am not trying to pick a fight here. I think mikewest's comment gave the impression that Apple built a multiprocess architecture out of cussedness or NIH. But that's not how it was.
I don't interpret Mike's comments that way. Chromium's architecture was public and available, but we assumed it wasn't used because it didn't fit the needs of WebKit2. There's no malice in that. We designed Chromium from the beginning for SFI (as Adam tried to convey), and that incurs quite a bit of complexity. I'm comfortable that the divergence was simply a result of different needs. I just don't see how it could be presented as something malicious or anti-collaborative.
>> We talked privately with particular Chrome folks before we started (as described upthread), in the middle, and shortly before landing to mention that we were landing soon.
> Yes, I'm aware of that, but the work had been underway for a long time and was about to be dropped by the time there was a real heads up. So the core of the architecture was already being frozen from a larger perspective.
Are you aware of the earlier conversation that occurred before we wrote any lines of code or even had a name? Where we talked about the possibility of just using Chromium's model if Google was willing to contribute it back? I have mentioned it twice - maybe you overlooked those parts of my remarks.
> Chromium's architecture was public and available, but we assumed it wasn't used because it didn't fit the needs of WebKit2. There's no malice in that. We designed Chromium from the beginning for SFI (as Adam tried to convey), and that incurs quite a bit of complexity.
It had nothing to do with SFI (which wasn't brought up at the time) or complexity. It was for the reasons I stated upthread.
>Are you aware of the earlier conversation that occurred before we wrote any lines of code or even had a name? Where we talked about the possibility of just using Chromium's model if Google was willing to contribute it back? I have mentioned it twice - maybe you overlooked those parts of my remarks.
The Chromium code is all in a public repository and was already integrated into WebKit via Chrome's platform layer. Members of the Chrome team were also interested in helping better incorporate Chrome's model into WebKit. So, I must be misunderstanding you, because it seems like you're suggesting that you expected Chrome engineers to simply do all the work.
>It had nothing to do with SFI (which wasn't brought up at the time) or complexity. It was for the reasons I stated upthread.
I still don't get what that reason is supposed to be. Regardless, the resulting WebKit2 design was clearly incompatible with the existing Chrome architecture. And the fact that they were continuing to diverge and place a burden on both projects was a clear problem. This was raised repeatedly, but never seemed to receive any serious consideration.
My interest in this thread was only to report on some history that I knew about personally, to correct what I thought was an incomplete version of events. I think a bunch of people found that information useful and interesting.
I regret that this thread has turned into such a back-and-forth. It's not my goal to detract from the Blink announcement. I feel like it would be rude to leave you hanging on mid-thread. However, I feel like:
(a) You are trying to argue with my version of specific events where I was present in person and you (as far as I recall) were not.
(b) You are trying to argue with my stated motivations for decisions that I was part of and you were not.
(c) You seem to want to assign blame.
Maybe my impressions are wrong. But given this, I find it hard to reply in a way that would be constructive and would not further escalate. I hope you will forgive me for not debating about it further.
Yeah, I didn't intend this to turn into a back and forth. I think we both had similar intentions, but from different perspectives and with different first-hand information that may not be well communicated. I agree that it's best to leave it be from here. I think both of us and our respective projects bear no ill will and desire the best for the Web as a whole, despite differences in how exactly to get there.
A request for clarification on why they refused was posed to a Chrome engineer in today's Blink Q&A (http://www.youtube.com/watch?v=TlJob8K_OwE#t=13m34s), and according to him the request for integration came shortly after Chrome was released, and the reason for their refusal was the sheer scale/complexity of the task.
In light of this, your initial "if Google had only upstreamed their multiprocess support...we almost surely" and your reiterations of this point within the thread do seem a bit like PR sleight of hand, since out of context it implies willingness to do so was the only issue on their part.
1) The answer wasn't "we'd like to do this but we're super busy right now, how about later" or "that's super complicated, will you guys put in a lot of the effort". It was a pretty direct no. We would have been willing to do much of the work.
2) My recollection is that we talked about it around a year after Chrome was released.
Chrome Beta release date: September 2, 2008
Date of WebKit2 announcement: Thu Apr 8, 2010 (after <1 year of development)
I don't have records of the meetings where we talked, though.
3) Does the reason for saying no affect whether our choice to make our own thing was reasonable?
With regard to 3): yes, it does affect it, since a "flexible/malleable no" is an altogether different constraint from a "solid no", so the solution would be measured against a different yardstick in the former case. Pressing on with your own thing in the interest of time to market (and thereby further cementing that the more mature implementation would not be integrated) does strike me as less than ideal/short-sighted in that scenario.
This is somewhat moot with 1) and 2) being the case (or at the very least strongly perceived to be the case on your side). At any rate neither side being able to settle on a single version of events signals a communication problem, which makes the whole value of this hypothetical joint undertaking fuzzy anyway.
The way I read it, they didn't want to integrate until they got an official 'yes' (quote: 'our choices were to do a hostile fork of Chromium into the WebKit tree').
Do you actually know what you pretend to know, or are you holding a grudge against Apple for an old transgression with KHTML that they've since done a great deal of work to put right?
Hi Maciej. Sorry if my comments read as though I was implying that you were wrong or bullheaded to choose WebKit2. That wasn't my intention; there are of course good technical arguments for choosing either architecture, and I'll choose my words more carefully next time the question comes up.
Thanks, Mike. And sorry also if my reply was too lengthy or pedantic or otherwise out of place. I feel bad for getting into a back-and-forth about this.
I'm no stranger to open source, but can you please explain what a "hostile fork" is? Especially in this context, it just seems like diction for the sole purpose of making Google look like they were 'in the wrong' in that situation.
If we took Chromium's multiprocess code and put it in the WebKit tree after the Chrome folks specifically said they did not want to do that, that would have been super rude. Don't you think? That's why I say "hostile fork". I am judging our own path not chosen, and do not mean to cast aspersions on Google's actions.
To be clear, I do not consider Blink to be a hostile fork. I wish the Blink developers good luck & godspeed.
A hostile fork is one done unilaterally, generally without consultation or the blessing of the main project. It generally causes acrimony and community fragmentation, and usually no code changes are shared between the forks after the split.
Compare that to forking to solve a very specific or specialized problem that doesn't make sense to merge upstream, like a set of changes that only applies to a very narrow audience or esoteric use case. In such a case, it's common that changes that do affect the main project are still merged upstream, and special care is taken to make sure the forks don't diverge too much.
You could look at the history between xMule and aMule for a hostile fork. Or the history between ffmpeg and libav, where some contributors were denied access to the repo.
I always believed (hoped) you guys would never be corrupted and do the right thing for the world and not be swayed by organizational affinities / commercial gains.
You have a greater responsibility than to your company or country to keep the web open.
Think on that before you have a further public 'kiddy debate' on she said / he said and throwing the rattle out of the pram.
A stunningly reductive question deserves to be responded to in kind :)
I don't know much of this history, but was integrating Chrome's process model into WebKit feasible at the time that WebKit2 was built? What were the reasons for not doing so?
Generally, I think WebKit2 and Chromium simply disagree about where to hook into the platform, and what the responsibilities of the embedder should be. The description at http://trac.webkit.org/wiki/WebKit2 is written from Apple's point of view, but I think it's broadly fair.
The position we're taking is that the Content layer (in Chromium) is the right place to hook into the system: http://www.chromium.org/developers/content-module That's where we've drawn the boundary between the embeddable bits and the Chrome and browser-specific bits.
Regarding the history, I'd suggest adding some questions to the Moderator for tomorrow's video Q/A: http://google.com/moderator/#15/e=20ac1d&t=20ac1d.40&... The folks answering questions there were around right at the beginning... I only hopped on board ~2.5 years ago. :)
> Google says that the pain in maintaining their separate, non standard, process architecture is too much of a burden to continue to contribute into WebKit proper, so they must fork.
Actually, they didn't say that the pain of maintaining their own work was too much of a burden; they said the pain of maintaining their model within the constraints and obligations WebKit had was too much of a burden (and imposed too much of a burden on WebKit with regard to what WebKit wanted to do). So they decided to fork, so that WebKit can do what it wants to do without worrying about Chrome, while Blink does what is needed for Chrome without worrying about WebKit.
> Why can't Chrome implement WebKit 2?
Implementing WebKit 2 wouldn't solve the problem going forward. Changes to WebKit to serve Chrome's needs would still have to meet all of the commitments WebKit has, and Chrome would still be placing demands on WebKit.
> This seems like a failure of open source.
Forking isn't failure. The purpose of open source isn't to create a monoculture, it is to enable groups to share effort when they have common goals, and to allow groups with divergent goals (including where those goals diverged after a period of shared work) to continue to benefit from the shared work without having to start back at square one. This is a success of open source.
Just look up the history of browsers and you'll see that forking is rampant. Forking isn't failure. It's evolution. Once a project reaches a certain size, there exist enough developers in the community to be able to split into two or more self-sustaining communities that can take the project in two or more directions that are valid and serve different needs. It's called specialization.
Apple's current commit approval policies appear to be fairly hostile to non-Apple developers and users of WebKit, and especially to non-Apple users of WebKit2. They now reserve the right to break builds on non-Apple platforms randomly and delay patches to fix them, amongst other things.
"Non Apple Mac ports, if broken by core functionality changes to WebKit2, are now responsible for fixing themselves."
For a cross-platform project, this is a stunning policy change. I don't know what was the chicken and what was the egg, but it looks like this split was inevitable.
Reading the utterly passive tone of the OP, with WebKit somehow "emerging" out of KHTML all on its own, nary a mention of Apple, makes it pretty obvious this move is mostly about politics.
Why is this a failure of open-source? Isn't it exactly what open-source is supposed to do, fork when a project doesn't meet your needs?
I've always found it curious that open-source advocates rail on people that fork or split from projects instead of maintaining code that they believe isn't worth maintaining.
Also: these sorts of deep technical questions would be great for tomorrow's hangout: engineering leads Darin Fisher and Eric Seidel, product manager Alex Komoroske, and developer advocate Paul Irish will be more than happy to answer whatever you can throw at them. Add questions to the Moderator at: http://google.com/moderator/#15/e=20ac1d&t=20ac1d.40&....
1. Google builds new process architecture into Chromium.
1a. Apple asks for it to be contributed back to WebKit.
1b. Google either says "no" or doesn't care but expects Apple to do all the work (note that if Google doesn't do it, this would mean the same functionality being implemented at two different points in the code tree -- WebKit and Chromium).
2. Apple decides to build its own multiprocess support into WebKit2.
Seems to me that the forking is probably overall a Good Thing, no-one is angry at anybody else, it's just a very important codebase which now has way too many stakeholders.
The name doesn't make it clear, but I believe the answer is "no". My understanding is that WebKit2 is actually just a part of the whole WebKit platform. It's the process model, but all the other parts of WebKit (the rendering and JavaScript, for instance) are the same as they were before WebKit2 was introduced.
The good news is no -blink prefixes! Blink, like Mozilla, will avoid shipping vendor-prefixed features:
> Historically, browsers have relied on vendor prefixes (e.g., -webkit-feature) to ship experimental features to web developers. This approach can be harmful to compatibility because web content comes to rely upon these vendor-prefixed names. Going forward ... we will instead keep the (unprefixed) feature behind the "enable experimental web platform features" flag in about:flags until the feature is ready to be enabled by default.
Before anyone starts moaning about incompatibility, let's face it: developers start to rely on new features the moment they're added. Differentiating between implementations is pointless, since CSS's design means you can specify a property multiple times and your browser will only use the ones it understands. The current system also excluded other rendering engines that web developers didn't consider.
So what happens when someone introduces a new whiz bang css feature that Chrome handles badly? Something you might, as a developer, want to disable in Chrome, but leave in for everything else? Or any other browser since they're all prone to introducing flakey implementations of CSS sometimes.
All this means is we'll have to go back to the old ways of sniffing out browsers, and I fail to see how that's better.
Nor do I look forward to a deluge of websites prompting me to fiddle with a config to "get the full experience".
Most web developers have very little sway when marketing or clients demand certain things, and this is likely to be something they demand.
>So what happens when someone introduces a new whiz bang css feature that Chrome handles badly? Something you might, as a developer, want to disable in Chrome, but leave in for everything else? Or any other browser since they're all prone to introducing flakey implementations of CSS sometimes.
>All this means is we'll have to go back to the old ways of sniffing out browsers, and I fail to see how that's better.
The situation is no different just now, because -webkit- applies to Safari, Opera and many mobile browsers, not just Chrome. :/
>Nor do I look forward to a deluge of websites prompting me to fiddle with a config to "get the full experience".
Most web developers have very little sway when marketing or clients demand certain things, and this is likely to be something they demand.
"Use a modern browser [actually, Chrome] to get the full experience" is not an uncommon sight these days.
>I was being polite. I'm talking about IE specifically.
Well, this won't help you here anyway. By the sound of things, features won't be enabled by default until they're ready. Things currently aren't unprefixed until they're ready. The only way to avoid IE once a feature is unprefixed is UA sniffing.
No change.
>Yup, and it's sucky. It's no different to "this site is optimised for Internet Explorer".
It's quite different, actually. Chrome is just quick at implementing web standards, they aren't dictating things and people aren't relying on proprietary APIs.
Hardly. Having a site that has buggy coding which has been tweaked to look good in IE's buggy rendering is a far cry from "our site uses cutting edge web-standard features that your browser does not support, please use a more up to date browser for a better experience". Night and day different.
I do, quite well. Proprietary features are miles away from cutting edge web-standards. Being locked into only one browser that works correctly is very, very much different from being able to chose from among many modern browsers (in the typical case).
More so, as I said that particular problem was not nearly as bad as people targeting the rendering of their site to the particular quirks of one particular browser. That was horrible, but it's not even remotely the same problem we face today.
Ugh, I remember that time, having exactly those arguments. IE having all sorts of non-standard features, so other browsers should probably copy its quirks/bugs also.
To answer your sort-of question, that time was overshadowed when it became obvious that IE6 (and later IE7) was the absolute worst choice. And that's a reputation Microsoft is still trying to shake, years later.
> So what happens when someone introduces a new whiz bang css feature that Chrome handles badly?
You file a bug report on the Chrome (and/or Blink) issue tracker, and it gets fixed.
> Something you might, as a developer, want to disable in Chrome, but leave in for everything else?
Then you use user-agent sniffing to disable it, if you must. The same as you'd do for any flaky implementation of a generally-used feature in a browser. That's not really the notional purpose of vendor-prefixing, anyway (which is about not using the unprefixed name space for things which might later end up with a different standard semantics, not about making it easier for developers to avoid buggy implementation of cross-browser common features).
> All this means is we'll have to go back to the old ways of sniffing out browsers
Or only use features that are well supported across common browsers if you want to avoid browser sniffing.
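If you do fall back to sniffing, it can at least be narrow. A minimal sketch (the regex is an assumption about the UA format; detecting the buggy behavior itself is preferable whenever the bug is observable from script):

```javascript
// Last-resort engine sniffing. The regex is an assumption about the UA
// format; prefer detecting the buggy behavior itself when possible.
function isChrome(ua) {
  return /\bChrome\/\d+/.test(ua);
}

// Gate a CSS/JS workaround on the engine:
var ua = 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.33 ' +
         '(KHTML, like Gecko) Chrome/27.0.1438.7 Safari/537.33';
var useWorkaround = isChrome(ua); // true for this UA
```

The point is to keep the check in one place, so it can be deleted the moment the bug is fixed.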
You had me until that. I've had legit, simple, reproducible bugs sit in browsers for years when they're on "new" features that aren't standardized, yet somehow, every other browser that implements the same feature doesn't have the bug.
Browser vendors use the "not-standardized yet" claim to avoid fixing bugs, while still shipping those features, in my experience.
Another issue with Chrome is that percentage widths are rounded or truncated to the nearest pixel, so creating three 33.3% divs won't fill 100% of the space. It makes a fluid grid difficult to create. I reported this one with the built-in bug reporter, so I don't have an issue link.
> Here's one example that's been open since Chrome 3
I'm not sure this is the best example of a problem with the "post on the Chrome/Blink tracker and get the issue fixed" approach. The linked issue is a "known WebKit bug" affecting Chrome, for which the Chromium team has a patch that apparently hasn't landed upstream, and that approach was offered as a way to deal with issues arising after Chrome splits from WebKit specifically to stop being constrained by WebKit from making changes.
"Let's face it: developers start to rely on new features the moment they're added."
"So this is a huge win for us, the developers."
There's a contradiction here, and I'm not being cute. Your argument is not the only argument to be made, and personally, as a developer, I preferred the old way.
Yep, there are negative externalities, and that always has to be weighed, but I think it was a reasonable approach to allow developers to weigh the use of cutting edge features against their own communities, needs, and goals. That calculation would change depending on how many browsers were offering a proposed recommendation and the browser proportions of your viewers. Lots of interesting stuff was posted around the web, including here, taking advantage of these things. On the other hand, there's virtually no community for whom it would be reasonable to ask to make config changes to view a site, so you simply can't use cutting edge features. Experimental site designs will be harder to show off, and we'll have less public consideration of the implications of new recommendations, since there's much less joy in putting together projects that few people will see.
> On the other hand, there's virtually no community for whom it would be reasonable to ask to make config changes to view a site, so you simply can't use cutting edge features. Experimental site designs will be harder to show off and we'll have less public consideration of the implications of new recommendations since there's much less joy in putting together projects that few people will see.
Showing off demos leveraging experimental browser features that don't happen to be CSS features has often been done with requests to make config changes, because the "put it behind a config flag until it is ready" approach is pretty standard for everything other than CSS (and, actually, browser vendors have also done it for plenty of experimental implementations of CSS, whether or not the features themselves are standard or vendor-prefixed experimental extensions).
>On the other hand, there's virtually no community for whom it would be reasonable to ask to make config changes to view a site
Actually, HN is exactly the sort of place where people might be willing to do so. Things are posted here which require specific browsers or about:flags changes in Chrome; what would change?
Hmm, isn't it true though, for example, that without that we'd have had to wait a long time to get simple things like border-radius that were supported in all the browsers but hidden behind vendor extensions?
Having experimental features available to developers creates pressure for all browsers to move forward. Without that, we won't know what is important to developers, because they'll be stuck using what the browser makers deign to make "official".
The 'yay, no vendor prefixes!' crew seem to have very short memories of what it was like before vendor prefixes, i.e. waiting a LONG time (years) for a CSS feature to be 'recommended' by the W3C.
Vendor Prefixes were the best of a bad situation, I think they will be missed quite quickly.
I'm all up for this movement, but the W3C will need to move a lot quicker if this is to happen. Otherwise innovation will just grind to a halt again like it did before.
Yes! Prefixes are damaging. We're hiding things behind flags instead, which gives savvy developers the chance to experiment without the risk that sites will begin to depend on those experiments.
The parent was likely referring to them being in the stylesheet at all, rather than the way they are managed. Any stylesheet compiler is still going to have the vendor prefixes in the compiled stylesheet.
Since every developer will want it to work in Chrome, this effectively kills vendor prefixes once and for all. It doesn't make sense for Microsoft or WebKit to continue to include them.
So the user would have to enable experimental flags (which will happen approximately never) to see advanced features? And that's supposed to be good for developers? So things like -webkit-box-reflect will no longer be supported in Chrome?
That sounds far worse to me, so I can't even use the new features as they arrive.
I couldn't care less if they add -blink, it takes minutes to push that through a modern CSS codebase. At least I can use the features where they exist.
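For illustration, that rewrite really is mechanical. A sketch assuming a hypothetical -blink- prefix mirroring -webkit- (no such prefix actually exists; Blink hides experimental features behind flags instead):

```javascript
// Hypothetical sketch: duplicate every -webkit- declaration with a
// -blink- twin. The -blink- prefix is invented for illustration only.
function addBlinkPrefix(css) {
  return css.replace(
    /-webkit-([a-z-]+)(\s*:\s*[^;]+;)/g,
    function (match, prop, rest) {
      return match + ' -blink-' + prop + rest;
    }
  );
}

var out = addBlinkPrefix('a { -webkit-transform: rotate(5deg); }');
// out === 'a { -webkit-transform: rotate(5deg); -blink-transform: rotate(5deg); }'
```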
Standing ovation. This is the most welcome news since Opera's move to WebKit for keeping up the current pace of browser innovation.
Coupled with Mozilla's announcement of its partnership with Samsung to move Servo forward this is great news for the future of the web. Hopefully multi-process/multi-threaded rendering engines will address some of our current performance gripes with the DOM and open the gate for even more complex UIs and interactions.
…And Opera is following Chrome to Blink, as new-Opera is built on the Chromium Content API (mentioned below, but seems significant enough to bear repeating).
Parallelism is listed as something they are considering, whereas with Servo it's one of the biggest (if not the main) reasons for it existing. From this announcement I can't quite tell what it is with this project that they hope to achieve but I'm sure that will become more clear over time.
I know this sounds bitter, but I read this comment and my interpretation (of the ideology, not how you said it) is:
"Our shitty slow and poorly architected mess of document and script languages is too slow for good application performance, everyone, start floundering around looking for some technology to squeeze another inch of performance out of everything so we can maybe hopefully make our awful mess work finally"
It is probably my bitterness towards XML, but I feel like the whole "use-a-document-markup-language-as-an-application-builder" is making people do crazy things, and not in a good way - it is happening because html is entrenched, and it is the only truly device-agnostic framework right now. So everyone tries to make it work for everything.
The argument is essentially: "It took up lots of engineering time and effort to maintain compatibility with other platforms, so this allows us not to worry about other platforms and only focus on our own and that will allow us to move faster."
Hopefully this is easier for everyone. If the Chrome multi-process architecture really was the pain point, then does that mean WebKit will also be deleting thousands of files and become easier to build, just as the Chrome team is removing compatibility for other WebKit targets? Or will WebKit remain the mess it is now because it strives to work on all the other platforms?
What if Apple decided to do the same thing? WebKit is this wonderful open source success story in part because it runs in so many places on so many things, but isn't a large part of that only enabled because the main contributors took the time to make sure that their work continued to run on the various different architectures? Will Google turn away patches that enable Blink to be compatible with platforms not their own?
I think Apple has already made a similar decision. A while ago, they decided that they'd allow commits which broke the build on non-Apple platforms and that it would be up to platform maintainers to try and keep up.
> From a short-term perspective, monocultures seem good for developer productivity. From the long term perspective, however, monocultures inevitably lead to stagnation. It is our firm belief that more options in rendering engines will lead to more innovation and a healthier web ecosystem.
I find this a double-edged sword. Monocultures are said to foster a culture of laziness and apathy. This hasn't been the case with WebKit.
Don't get me wrong - competition is healthy. I'm simply thinking of how Jovinder experiences the web.
If you think WebKit hasn't fostered a culture of laziness and apathy among mobile web developers, try browsing the web using Firefox for Android or IE (on Windows Phone) sometime.
I've lost track of the number of times I've had to pretend to be something different (desktop Firefox, default Android, Mobile Safari ...) to make a website usable. Every time I have a fresh Firefox install on Android, the first extension I install is Phony.
A little out of left field here, but if anyone is interested in working on the other multi-process browser (for OS X at least) I've just released Stainless as open source. Stainless was a hack that actually became quite popular while we Mac users waited for Google to release Chrome for our platform. http://stainlessapp.com
Super excited about this! There was a long discussion on the webkit mailing list after google tried to add support for multiple language VMs in webkit. The goal was to have a native Dart VM.
"Whatever it wants" within reason. We're actually quite concerned about how new features are added to the web platform, and recognize the need to be careful about what we commit to support forever. See http://www.chromium.org/blink#new-features for some detail about the process we're planning on using going forward.
Additionally: we have had experimental Dart+Chromium builds for a long time, they use a different approach (V8 bindings layer) that doesn't require WebKit changes. However we only use these builds for fast development edit+refresh, for deploying Dart code you should use dart2js (it's just like deploying CoffeeScript, C code via emscripten, etc).
Yeah, tone can be hard to guess from text. Figured the link would be helpful either way :)
disclaimer: I'm on the Dart team (libraries, not core language/VM/dart2js). As exciting as it would be to have Dart VM in Chrome, personally I hope the order is more like:
* dart2js and VM work the same way (basically true already, modulo a few quirks unlikely to affect program behavior. It's not any worse than your typical web standard polyfill, probably a lot better.)
* The language spec is standardized.
* People like Dart, and it becomes really popular for building web apps.
* We have great Dart<->JS interop and it's possible to make the two native VMs work nicely together in the same browser.
* The toolchain makes it practically impossible for a web developer to publish an app that only has .dart files, without the .js version that works on all browsers.
* At that point, it might make sense to add the Dart VM to a browser purely as a performance optimization.
Of course a lot could change between now and then. For example, if JS engines keep getting faster and introduce enough fast stuff (like typed arrays, asm.js, etc), maybe we can achieve the speed we need with dart2js.
Fortunately, there are plenty of folks that work on Blink/Chromium that share both the enthusiasm and skepticism that the web community has about Dart. As someone that works on it, I deeply want our team to succeed, but I would like to see it happen in the right way--open web and open source friendly.
yeah, if you're running directly on a VM for the language (e.g. JS on a JS engine, Dart on a Dart engine) you wouldn't need source maps for debugging, unless you are using some other tool that is doing source->source transforms for you (e.g. https://github.com/dart-lang/web-ui currently does some dart->dart source transforms). You'd still have source maps for the dart2js output.
I've only read the first dozen or so entries of that thread but it's so depressing…
We've been waiting for Apple to include support for the W3C Navigation Timing spec for a long time, so their pissing match over multi-VM support in WebKit, on the grounds that it doesn't conform to standards, rings hollow.
A long time? The working draft is dated January this year.
The multi-VM discussion was in 2011. If you think supporting Dart natively in a browser is a good thing, you're either a Google employee or have your head too far up Google's ass to see that they are the new Microsoft, and that this is/was an attempt no different from VBScript in IE.
If you look at the commits, there's a fair argument that Google "controls" WebKit. You could almost say Apple got KHTML'ed ;)
I believe this is an honest move. This is what happens with software. Goals change, old code and design no longer makes sense, you refactor or rewrite. The architecture of WebKit was created to address goals that are a decade old now. The multi-process nature of Chrome alone, an amazing achievement and really quite elegant if you've looked at the way they bolted it on, was bolted on all the same.
V8 without a doubt re-invigorated JavaScript. When V8 was announced there was a lot of "do we really need another JS engine" arguments. You could argue that other engines were getting fast as well, but v8 got people really thinking about JavaScript outside the browser. I am excited to see what new insights this new rendering engine brings -- and what unexpected positive consequences it generates.
If you want to really put a tinfoil cap on, you could say that by contributing to WebKit, Google was giving Apple a lot of free code, allowing them to devote fewer resources to their browser. Once Blink diverges farther from WebKit, it won't be practical for WebKit to merge in changes from there.
One has to wonder if Google will be recruiting other WebKit contributors (RIM, Intel, Nokia, etc) to move over to Blink. This would put Apple in a tough spot.
Blink remains very much open source: the repository should be visible in a minute or three.
We're going to be even more transparent than we currently are, actually, about how things get added to the platform http://www.chromium.org/blink#new-features. I'm pretty excited about how that's going to play out with regard to sharing ideas and implementations.
Of course, I wasn't claiming otherwise, but over time porting code to WebKit from Blink will become more trouble than it's worth. They can still learn from it, but you can hardly deny that this move is going to cause WebKit some pain in filling in the tremendous amount of work you guys were doing (the part about this being the reason for the move was, of course, completely a joke).
The two engines will diverge, yes. I think it'll be better for both in the long run, as we simply have fundamentally different architectural approaches to some pretty core problems the engines are meant to solve. There will be short term adjustments on both sides as we get used to the new options that are now available.
I'm honestly quite hopeful, both about Blink, and about WebKit.
The engineering argument is that the differences between Chromium's multi-process model and WebKit2 are big enough that, in order for both projects to move forward, Google needs to fork WebKit. I'm not competent to judge whether this is actually true.
http://trac.webkit.org/wiki/WebKit2 outlines the technical differences between the architectures pretty well. Suffice it to say, the model runs deep, and has real impact on the way things like WebCore are put together.
>Longer term, we can expect to see Blink evolve in a different direction from WebKit. Upson and Komoroske told us that there were all manner of ideas that may or may not pan out that Google would like to try. The company says that forking WebKit will give it the flexibility to do so.
If there is an engineering argument, I'm guessing it's to do with multi-threaded DOM+JS, given the mention of multi-process architecture.
Also, how much does Apple really control WebKit? At a glance, it looks to me like FOSS. (Apple might be the maintainer, but it seems trivial to fork it in a different direction.) Perhaps this is a more nebulous "thought leadership" kind of thing?
b) Although it is a little hand wavey, they do make an engineering argument: "However, Chromium uses a different multi-process architecture than other WebKit-based browsers, and supporting multiple architectures over the years has led to increasing complexity for both the WebKit and Chromium projects."
> Although it is a little hand wavey, they do make an engineering argument:
Immediately being able to eliminate and not worry about maintaining 7,000 files comprising 4.5 million LOC seems to be a pretty concrete benefit, rather than a hand-wavey one.
> Chromium uses a different multi-process architecture ... and supporting multiple architectures over the years has led to increasing complexity ... we anticipate that we’ll be able to remove 7 build systems and delete more than 7,000 files—comprising more than 4.5 million lines
Last time I measured (late 2012) the entire mozilla-central repository was 4.488 million lines of code. So I don't believe that by simply streamlining things they'll be able to remove anything like 4.5 million lines. Perhaps an extra zero got inserted somewhere.
> Last time I measured (late 2012) the entire mozilla-central repository was 4.488 million lines of code.
Mozilla isn't WebKit.
> I don't believe that by simply streamlining things they'll be able to remove anything like 4.5 million lines.
I suppose you could compare WebKit repos against Blink repos once the latter is live to see exactly what is cut, but I'm going to say the people working on the code, who are the source of the count, know how many LOC are involved, and that their direct count is more reliable than a third-party estimate based on a different browser's repo.
I think the implication is that most of the code they are removing is not actually code, but boilerplate/machine-generated build scripts/etc. Still, the number of LOC involved has little bearing on whether or not splitting from WebKit is a good idea.
4.5 million lines spread over 7,000 files is only ~642 lines per file, so I doubt there is an extra zero. Perhaps they included whitespace and comments in their count.
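For what it's worth, the per-file figure above checks out; a quick sanity check:

```javascript
// Sanity-checking the lines-per-file figure from the announcement.
var lines = 4500000;
var files = 7000;
var perFile = Math.floor(lines / files); // 642
```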
Of course the blog post doesn't go into technical detail. But if you explore just a single link deeper, you'll see a pretty good explanation of the changes they're making. It's pretty obvious why they're doing it: it greatly eases the effort required to port/utilize Chromium/Blink, not to mention the other benefits from the architecture that they indicate will enable better multithreading.
The Chromium team will be running a Hangout tomorrow to answer any questions that pop up. Hit this Moderator page to ask whatever's on your mind: engineering leads Darin Fisher and Eric Seidel, product manager Alex Komoroske, and developer advocate Paul Irish will be more than happy to answer: http://google.com/moderator/#15/e=20ac1d&t=20ac1d.40&...
WebKit and Chromium are different repositories, and point to each other via a "DEPS" (dependencies) file. We roll new revisions of WebKit into Chromium regularly, and call the process of diagnosing and fixing problems with the rolls "Gardening": http://www.chromium.org/developers/how-tos/webkit-gardening is a good reference.
You mean having full-time engineers handling merge conflicts / upstream breakage / regressions / running test suites? In anything sufficiently large, that becomes a major issue inevitably.
Almost any project worth its salt has someone doing testing. But it is really time consuming.
Multiple efforts can increase competition or spread resources thin.
We already have 3 major engines. Do we need a fourth? The Linux Kernel combines all efforts in a single thriving project, while we have multiple desktop distributions struggling to get a single digit percentage of the market.
One is closed source, so you can't even consider it for adoption. Google is now proclaiming its dissatisfaction with webkit, and Gecko is a 20 year old codebase now, and Google likes control.
Though considering how they fund Mozilla in the first place, if they wanted Gecko, they would just buy Mozilla. They keep the lights on anyway.
That's an interesting way of looking at it. Apple no longer gets a free ride. On the other hand, now that Firefox is using WebKit, perhaps Mozilla just replaces Google there.
> On the other hand, now that Firefox is using WebKit
Firefox is still using Gecko, and is working with Samsung on a long term effort to develop a new engine (Servo) in Rust.
You may be thinking of the recent news that Opera is using WebKit, but even when that was first reported it was identified that they were really basing on Chromium rather than WebKit proper, and they've announced (in this thread, even) that with Chromium moving to Blink, Opera is following.
Sorry, I should've been clearer. I am aware of that, but since anyone can use the code, the most important thing for those using it is that what they have tied their product to is actively maintained. If they don't do the majority of the work to maintain it, then I think of them as getting a free ride from whoever is actively maintaining it. I don't mean that as a negative, either; every company gets a free ride in some way or another. Web companies off those that pioneered the web, for example.
KDE originally built KHTML. WebKit, however, is mostly a product of Apple: they made it into the product it is today, along with help from Google/RIM/etc., but it was mostly Apple, since they needed it to work in Safari and iOS.
So what will the new User Agent string be? "Mozilla/5.0 (X11; Linux x86_64) Blink/537.33 (KHTML; like WebKit; like Safari; like Gecko) Chrome/27.0.1438.7"? Hopefully people will finally start using feature detection rather than user agent detection...
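Feature detection along those lines can be tiny. A sketch, where `style` stands in for an element's .style object (in a browser you would pass document.createElement('div').style, which only exposes properties the engine actually implements):

```javascript
// Feature detection: probe for the capability instead of parsing the
// UA string. `style` is a stand-in for an element's .style object.
function supportsProperty(style, prop) {
  return prop in style;
}

// A style object from an engine implementing transform but not flexWrap:
var style = { transform: '' };
supportsProperty(style, 'transform'); // true
supportsProperty(style, 'flexWrap');  // false
```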
The user agent string is, for the moment, remaining exactly the same format. For better or worse, all those crufty bits are currently necessary for compatibility with sites doing a poor job of sniffing out functionality.
Can anyone from the chromium team answer few questions?
1. How does this affect the build system?
2. Will Blink always remain a fork of Webcore, or do you plan on replacing all the bits and pieces from Webcore with your own code?
3. Are we still stuck with the LGPL license?
4. Does this change anything on the spectacularly lacking source documentation / porting guidelines front?
5. You mentioned stripping out a lot. Will this have a significant impact on the size of the codebase?
6. Will the rendering architecture be changed completely? Or, is the render layer hierarchy still intact in blink?
4. WebKit and Chromium have historically had differing opinions regarding what makes a "good" comment. I think you can expect Blink's code to tend more and more towards Chromium-style as time goes on, but it's not going to happen overnight.
5. ~5 million lines of code that we don't currently compile or run in Chromium. That's a bit, but it won't have much effect on the binary size.
6. Short term, not much will change. Longer term, a few things will probably happen: for instance, the widget tree will likely be removed, and we'll likely be able to step back and reevaluate some changes in light of that.
I mentioned above that the last time I measured it (late 2012), the entire mozilla-central repository was 4.488 million lines of code. How many lines is Chromium? How on earth can you be removing 5 million LOC?
About enabling experimental features via flags - I hope there will be an option buried in there somewhere for curious people to "go nuts" and enable a large slew of functionality in one step, appropriately warned. I can see that being a pain, but much easier than having to go one by one on obscure features with a non-technical audience.
I love seeing what creative devs are doing out on the fringes, and having to dig around in flags every time something new gets added could potentially get pretty annoying. The benefit of vendor prefixes was this - if you were on a latest version, not just dev/canary channel, there was a lot which was turned on by default, even if theoretically it wasn't stable. That was actually quite a good driver of fresh technique and innovation, seeing this straight away, despite the major hassle of bloated CSS.
It's inspirational seeing people who maybe aren't totally technical being able to get their hands on very fresh stuff without having to completely hand hold them on every step required to get it going.
Really, a lot to be said on this topic, but just wanted to mention this as didn't see it discussed yet.
> "For example, we anticipate that we’ll be able to remove 7 build systems and delete more than 7,000 files—comprising more than 4.5 million lines"
On my 2GB netbook, chrome has gone from my preferred browser to unusable due to the high memory footprint of recent builds. I wonder if this cleanup will help get the memory down to something reasonable like where it was up until Chrome 10 or so.
Recently it's been crashing for me left and right. This seems to happen with Chrome over the years, an ebb and flow between functional and non-functional. I just switch back to Firefox during those times.
These bits from the docs are really interesting. Can anyone here explain them in more detail? (I've also posted them as questions in moderator)
"we’d like to explore even larger ideas like moving the entire Document Object Model (DOM) into JavaScript. This has the potential to make JavaScript DOM access dramatically faster"
"Removing obscure parts of the DOM and make backwards incompatible changes to obscure parts of the DOM that benefit performance or remove complexity."
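On the first quote: today every DOM call crosses a JS-to-C++ binding boundary, while a DOM implemented in JavaScript would stay inside the VM and be visible to the JIT. A toy sketch of the idea (not real engine code; the names are invented):

```javascript
// Toy sketch: a DOM node implemented as a plain JS object. Calls like
// appendChild stay entirely inside the JS VM, with no JS<->C++ binding
// boundary to cross on each access.
function JsNode(tagName) {
  this.tagName = tagName;
  this.childNodes = [];
}
JsNode.prototype.appendChild = function (child) {
  this.childNodes.push(child);
  return child;
};

var parent = new JsNode('div');
parent.appendChild(new JsNode('span'));
parent.childNodes.length; // 1
```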
My expectation is that we'll be posting more design docs to chromium.org more frequently. The best way to stay on top of that will be to join the blink-dev group (https://groups.google.com/a/chromium.org/forum/?fromgroups#!...) which will be ramping up shortly.
WebKit2 is a similar, but different, multi-process model. Chromium never compiled it in, but the integration imposed various constraints both on Chromium and on WebCore. That's certainly one of the reasons we've introduced Blink.
If you're going to badmouth this decision, then please at least read the articles and address the reasons they mentioned. It's pretty stupid and pointless to make irrelevant generalist statements.
Wow, that is the most ridiculous thing I've ever read. Pretty sure it's a troll post trying to make frontpage with sensationalist bullshit.
For example, he claims it's a political move, yet the official FAQ lists many practical reasons for it. Also, he says it will fragment the web; half the comments in this very thread explain why it won't. Then he says it's not open source because it's hard to understand how an HTML parser works? wtf?
>For example, he claims it's a political move, yet the official FAQ lists many practical reasons for the move.
So what? They couldn't make up excuses?
>Also, he says it will fragment the web: half the comments in this very thread explains why it won't.
And others argue why it will.
>Then he says it's not open source because it's hard to understand how an HTML parser works? wtf?
A complex multi-million-line project representing thousands of man-years of work essentially needs dedicated, full-time, highly skilled engineers to be forked. It might be technically "open source", but it's not bazaar-style open source the way a simpler program or web framework is.
Even a highly skilled C++ programmer has to spend months to understand the WebKit codebase, much less make any pervasive changes or take over the code. This kind of devotion cannot be sustained by unpaid volunteers. That makes it essentially un-forkable unless some other company can devote resources to it.
Highly complex codebases seldom progress much as community projects after the original company has withdrawn the paid contributors (see the lackluster GNOME development of the last 10 years, after all the late-'90s/early-2000s backers backed down, or OpenOffice/LibreOffice, which haven't progressed much from Sun's 2000-era offering). And those are the good cases; others die completely or languish (e.g. Eazel's Nautilus, or Evolution).
> A complex multi-million line project representing 1000s of manyears of work, essentially needs dedicated full-time highly skilled engineers to be forked.
WebKit, from which Blink was forked, includes dedicated, full-time, highly skilled engineers from Apple and elsewhere who could incorporate material from Blink, as could all the other browser projects which might want to use material from WebKit (pre- or post-fork) or Blink. So the situation with regard to "open source"-ness is the same as it was before the fork, even granting this "it needs full-time highly skilled engineers" proviso.
> Highly complex codebases seldom progress much as community projects after the original company has abandoned the paid contributors
Which is one of the reasons this kind of fork is good: when both sides of the fork have paying contributors, the fork lets each side streamline and return value to its sponsors more efficiently. That makes continued sponsorship more likely, both from the prime sponsor of the original project and from the sponsor that used to pay people to work on it but is now sponsoring its own fork, and it puts each post-fork project on a more secure footing than the pre-fork project was.
Why do you say paid GNOME contributors have been abandoned? Where do you get this idea, and which company are you referring to? Also, what do you mean by "GNOME development is lacklustre"?
?? If there are practical reasons, then you need to explain why those reasons are not strong enough to warrant this. Just look at what was accomplished by this: they were able to delete over 8 million lines of code that was boilerplate.
> And others argue why it will.
Then, it's still not a strong argument against this.
> [snip]
What exactly are you advocating? Look at all the chromium design docs they wrote, and all the code reviews are in the open. If anything, the WebKit reviews are much harder to look through. The rest of your rant is irrelevant. Do you want them to spoon-feed you everything Chromium/Blink engineers do?
You can argue all day if you want, but what exactly is the alternative you're proposing? How would not forking help with any of those?
No, it just means that instead of Chrome for Android and Chrome for iOS having different versions of WebKit, Chrome for Android will use Blink and Chrome for iOS will use the system WebKit, as it does now.
So Google will be using 2 increasingly different rendering engines for Chrome in the future? Well that sucks. Apple really needs to allow other rendering engines on iOS, or at least be forced into it. People complain about the "webkit mono-culture", but a huge platform like iOS actually mandating you use webkit, and their single version of webkit, is a lot worse.
They already are different: Chrome on iOS uses whatever version of WebKit UIWebView uses on the platform (which, until relatively recently, was an old fork of WebKit), whereas Chrome on Android nowadays uses a version almost as up to date as Chrome on desktop.
We're a bit worried about compatibility, certainly. One important thing to note is that WebKit has _tens of thousands_ of layout tests; all of those tests exist right now in the Blink repository, and we're certainly going to continue working with browser vendors in general and the W3C to ensure that we can agree on exactly what standards mean, and how they should be rendered.
We'll be coming at the same problems from different angles, find and fix different bugs, and have the opportunity to peek at how other browsers have done things. I'm quite hopeful that will mean that we'll all end up with better implementations.
I agree, that would suck. Especially if, for some reason, we start seeing differences in rendering speed, JS engine, or other tech between the Android version and the iOS version.
There already are speed differences in iOS alone. The native Safari has a faster JavaScript engine than the sandboxed Safari instances that iOS apps use.
Or something like that. Something about security concerns.
It's the JIT for JavaScript that's disabled. I've never seen an official statement on it, but I've read that the security model in iOS does not allow compiling code and then executing it, and that Mobile Safari gets a unique bypass for this restriction. Other apps which merely embed a UIWebView are stuck with interpreted JavaScript.
Opera on iOS is actually Opera Mini, which doesn't run a rendering engine on the device at all.
And no, Apple does not really allow other rendering engines. Specifically, they do not allow anything that can execute code that is not distributed with the app. So, for example, no JS engine that's allowed to run scripts that don't ship with the app, period (even without a JIT).
Nope, no changes for Chrome for iOS: the current version already uses UIWebView and will continue to do so. The rest of the infrastructure (networking, etc.) is Chrome code.
Chrome for iOS uses a UIWebView for rendering (effectively Safari), but uses the Chromium networking layer, UI (omnibox), etc. Arguably, rendering/JS is a big part of what makes the browser the browser, but it isn't everything.
Yes. Announcing this especially today seems like a "fuck you" at Samsung.
Downvotes? Does anybody think that two companies that are in some heat over Android/Tizen et al. just coincided in releasing information about new rendering engines on the same day?
As if rendering engines are announced every other day, right?
This is a conspiracy theory. PR moves take time, so both the Servo and Blink announcements probably baked for many weeks. The best explanation is that both PR teams thought today was good timing, probably because it's right after the Easter holidays.
Yes. And sometimes conspiracies happen. Conspiracies don't always involve aliens, illuminati or such. Sometimes they are as simple as "let's secretly aid the Contras in Nicaragua". Or "let's do our PR move at the same time to piss them off".
>PR moves take time, so both Servo and Blink announcement probably baked for many weeks.
Which makes it even more technically possible for someone to have learned of the other's date in advance, during all those weeks, and decided to announce theirs on the same date.
I mean, use the monopoly to take the tech, clone, tweak, and then not contribute back. Thankfully Google will keep it open source, since they're making money from it, if quite indirectly. But still... a "fork you" to Apple that would make the web suffer. Douchey.
Isn't "take the tech, clone, tweak, and then don't contribute back" exactly the model Apple used to create WebKit from KHTML? If memory serves, the KHTML team was incredibly unhappy with Apple trying to submit giant, monolithic, undocumented commits.
What Google's doing now is far kinder than what Apple did then. As noted above, Apple "contributed back" big monolithic patches. By contrast, Google is providing access to a version-controlled repository. The latter is far easier to work with, of course.
I know it might be an unpopular comment, but I really don't like this. I had hoped every browser would eventually use the webkit rendering engine. I have a hard time feeling sorry for those engineers that have to maintain compatibility, when I think of the many frontend engineers that now have to test a different rendering engine :(
Rendering engine monoculture isn't a good thing though. If a single rendering engine dominates, then there is less reason to write standards-compliant code - after all, everyone uses WebKit, right?
As much as I was sad to see it go, I sympathize with their problem. If you just have a single C/C++ implementation with no spec, it's a lot harder to know, as a user/web developer, what the correct behavior is and what you can rely on.
It's a shame WebSQL died, though, since I would have loved to have had SQLite in the browser. Maybe the solution would have been to specify a similar, new, SQL database for the web? I would have liked that more than IndexedDB.
> Maybe the solution would have been to specify a similar, new, SQL database for the web?
That would certainly have been a solution, but I think one of the reasons WebSQL went with "use SQLite" as a shortcut was that it was a lot easier. If you specify a reasonably-implementable subset of SQL, you are stuck with that, and you can't piggyback on SQLite maintenance.
I think the decision to abandon WebSQL for IndexedDB rather than expand the spec into something where independent implementations were feasible was probably based on the cost/benefit perceived with having to maintain a separate, browser-specific RDBMS implementation.
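For context, the appeal of WebSQL was that it exposed SQLite's SQL dialect more or less directly to pages via openDatabase/executeSql. A rough sketch of the kind of storage model that offered, using Python's sqlite3 module (the same engine WebSQL wrapped); the schema and queries here are hypothetical illustrations, not part of the spec:

```python
import sqlite3

# An in-memory database standing in for a page's per-origin WebSQL store.
db = sqlite3.connect(":memory:")

# WebSQL pages issued plain SQL statements much like these.
db.execute("CREATE TABLE notes (id INTEGER PRIMARY KEY, body TEXT)")
db.execute("INSERT INTO notes (body) VALUES (?)", ("hello",))
db.commit()

rows = db.execute("SELECT body FROM notes").fetchall()
print(rows)  # prints [('hello',)]
```

The spec problem discussed above follows directly: "whatever SQLite accepts" is not an interoperable definition, and pinning down an implementable subset of SQL would have forfeited exactly this piggybacking on SQLite.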
Yeah, it was wonderful! The alternate browsers just accepted the standards dictated by the dominant browser, and that didn't cause any problems at all!
I'll say from personal experience on two recent projects: if you're only testing in one WebKit browser and not hitting the others, there's a good chance you're missing a couple of bugs. There are a ton of different ways to build WebKit; Chrome and Safari were never identical enough to ignore. Fortunately, after developing in Chrome it took about as much effort to support Firefox as it did to support Safari -- which is to say, about half an hour.
Well, I just killall Chrome's processes anyway to keep my desktop usable. These dozens of Chrome processes are an absolute nuisance; they crash my machine and my desktop. Chrome isn't even reliable, because it crashes on me every time. If you want to get pissed big time, use Chrome.
Google seems to be on a "replace open with less open" streak, with Google Reader and CalDAV shuttering, and now this. The CalDAV situation especially I cannot conceive of as anything but a business decision, given that they're keeping it around for those people they like.
With regard to open, Chromium is a nicely open source project, and we're really quite committed to transparency in the project. See http://www.chromium.org/blink#new-features for some of our policies in this direction.
Godwin's Rule Corollary violation:
o Mentioning Google Reader in a discussion unrelated to RSS or Blogging causes automatic loss of debate/argument/credibility.
1. Google builds a new process architecture into Chrome as a product differentiator. (It was a major part of Chrome's initial marketing)
2. WebKit 2 is built (mostly by Apple?) to bake the same type of architecture straight into the core framework -- anyone using WebKit can use it and get the same security/stability benefits.[1]
3. Google says that the pain of maintaining their separate, non-standard process architecture is too much of a burden to continue contributing to WebKit proper, so they must fork.
Why can't Chrome implement WebKit 2? Are there major advantages to Chrome's process model that are not present in WebKit 2? Is there a reason why WebKit 2 cannot be patched to provide those advantages?
This seems like a failure of open source.
[1]: see the first paragraph on http://trac.webkit.org/wiki/WebKit2