Why Mozilla Matters (brendaneich.com)
307 points by gkanai on Feb 15, 2013 | 156 comments



I have a hard time believing Brendan's pledge to diversity given Mozilla's non-support for Gecko embedding.

There are a number of Gecko-based browsers, e.g. Camino, and Mozilla has left their users in the lurch by ending support for Gecko embedding: http://caminobrowser.org/blog/2011/#mozembedding

Mike Pinkerton has for years been voicing rightful criticism that Mozilla is focussing on Firefox and nothing else, see e.g. this interview of 2006: http://mozillamemory.org/detailview.php?id=7277

If Mozilla really wants diversity, why haven't they made an effort to make Gecko attractive for embedding, thus providing an alternative to WebKit? Instead they've done the exact opposite.


> If Mozilla really wants diversity, why haven't they made an effort to make Gecko attractive for embedding, thus providing an alternative to WebKit?

I think Mozilla's announcement makes their reasoning fairly clear:

https://groups.google.com/forum/#!topic/mozilla.dev.embeddin...

My summary is:

- The old embedding infrastructure was difficult to use and expensive to maintain

- It was also a technical dead end

- Better embedding infrastructure depended on complex architectural changes to Firefox

- Firefox is Mozilla's bread-and-butter, in terms of visibility, influence and revenue

- Mozilla has ambitious goals and does not have infinite resources

Forget for a moment that we're talking about Mozilla, Firefox and Gecko embedding. In a different context, what would you do?


The ambitious goals shouldn't be making a new OS then.


Given the spread of the device APIs that Mozilla was forced to develop for Firefox OS, I think it is clear that Firefox OS has already done more for Mozilla's mission than maintaining the previous embeddings ever could.


"Our mission is to promote openness, innovation & opportunity on the Web."

It's not clear to me that Firefox OS has done anything for Mozilla's mission. Indeed, the Firefox OS home page seems to indicate Firefox's goal is to provide carriers and OEMs with an alternative to Android, and to help them maintain customer relationships. WTF? Open source vendor lock-in?


Yeah, I think it'll be worth it once we have the Rust-based one.

I mean, come on, Firefox isn't even distributed in 64-bit as an official option. It's really on its last legs, waiting for the next gust to carry it.


Chrome is 32-bit on Windows, by the way. Suppose it's on its last legs too, huh!

Oh, and there's this: http://ftp.mozilla.org/pub/mozilla.org/firefox/nightly/lates...


64 bit Chrome on Windows is a possibility, Justin Schuh is working on it https://plus.google.com/u/0/116560594978217291380/posts/d93X...


There are 64 bit versions of Firefox available for Mac and Linux.


Mozilla appears to have thrown in the towel with regard to an embeddable Gecko. Fortunately, one of Servo's core goals is to be easily embeddable[1], and unlike Gecko it can be engineered from the ground up to make that happen.

[1] http://www.reddit.com/r/rust/comments/171ooy/servo_embedding...


Even when Mozilla supported Gecko-embedding, major devs rejected it time and again over the years, always in favor of WebKit or Presto. Even the Gnome folks had switched away well before embedding was deprecated.

Frankly I think the Mozilla folks learned the wrong lesson from this, but they did learn something.


I think the code base is too hard to embed, so stopping the support kinda made sense, since they want to stay relevant with finite resources.


what lesson do you believe they learnt, and why do you think it was the wrong one? they clearly noted that (a) the gecko component was hard and fiddly to embed and (b) fixing it would draw away scarce resources from more critical areas, so they decided to not waste both their and their users' time, and remove embedding support. i think it was the right way to go; as kibwen noted, servo can be designed around embedding support right from the beginning.


Gecko embedding is being worked on as a community project:

https://wiki.mozilla.org/Embedding/IPCLiteAPI

I agree that Mozilla should start supporting it officially. Especially after they decided to kill mobile XUL Fennec off: https://bugzilla.mozilla.org/show_bug.cgi?id=831236

With official Mozilla being too engrossed in Android, this remains the only alternative for normal mobile Linux to use Gecko.


Well, that seemed like a very well-reasoned and well-written article. I found myself nodding along with a lot of it.

The part about there not really being "One WebKit" is the bit that stuck out for me, since it seems (warning: generalization ahead) most outsiders (even web developers, based on articles and comments I have observed on HN) believe that WebKit is mostly one engine worked on by one basically happy family of corporate sponsors.


I might be behind the times, but the thing that was by far the most interesting to me was Servo [0]. He's absolutely right - all of the current rendering/JavaScript engines are going to need significant work to take advantage of massively-parallel hardware in the next few years. (I say this after seeing the work required to port ordinary CPU code to CUDA. OpenCL should be similar.) The idea of Mozilla not just switching to WebKit but actually trying to leapfrog it to a new, even better rendering engine is awesome.

[0] https://github.com/mozilla/servo


And of course, with servo being written in Rust, that makes it doubly interesting.

"To make an apple pie from scratch, you must first create the universe." - Carl Sagan


Mozilla had their shot at being the leading edge browser, and, heck, maybe they even were for a while. But at some point in the history of Mozilla / Seamonkey / Firefox they started to care way too much about "market share" and started treating IE as though it was the target. Discussions about implementing new standards or new features ended with "but IE doesn't do that", etc.

And then Mozilla became very inimical towards the idea of implementing forward-looking things and giving ideas a chance to win or lose in the "marketplace" (not a perfect metaphor, but whatever). Remember the debacle over MNG support? The argument basically reduced to "nobody's using it" at a time when no browsers supported it. And nobody was going to use it until some browser(s) did support it. But IE didn't, and IE was the target. Oh yeah, there was some argument about the size of the code, etc., yada yada, but even when the developers reduced the size of the MNG code dramatically, it was still rejected.

Mozilla leadership never seemed willing to try and lead and expect IE to begin to see Firefox as the target. XForms? Mozilla absolutely should have implemented XForms properly. The one place they did actually get "ahead of the curve" a bit was SVG, and - while still hardly dominant - SVG has finally made it into IE.

Honestly, while Firefox is a great product in many ways (I'm using it to compose this post) I think it has fallen short of what it could be, and one reason for that - IMO - is an overly dogmatic, top-heavy leadership model and lack of willingness to incorporate feedback from the larger OSS community.


You're trying to make a technical argument and referencing MNG and XForms. I'm really not sure where you're going with that.


The point is that it doesn't f%!@#ng matter... Mozilla, at one time, tried to foster this idea that they were the "browser for developers" and had this notion of implementing new technologies and letting them fight it out in the marketplace, yada yada. But they didn't actually do that. They made heavy-handed decisions about what technologies would or wouldn't "win", mostly based in trying to replicate IE. And now they're still talking about "innovating faster" blah, blah, and I'm still not buying it.

Mozilla has shown little interest in leading in terms of browser innovation, from what I've seen. Of course, the argument now has probably changed from "IE doesn't do that" to "Chrome doesn't do that", but whatever...

Edit: Also, just to be clear... I'm not saying Mozilla never did anything innovative. They were, for example, one of the first browsers, if not the first, to support MathML. They were also early to the SVG party. I just think they fell short of what they could have been if they'd been more aggressive about incorporating new things. shrug


If you don't like monocultures, you shouldn't have hated plug-ins. And Firefox OS should allow users to install browsers other than Firefox.

I'm very sad that not only Flash 11 but also Silverlight and Unity Web Player are all doomed by the enthusiasm for "open" standards. Therefore, I feel that the current Mozilla is full of hypocrisy.

And Mozilla, you must implement the Web Audio API ASAP. Without this, all interactive "HTML5" demos with audio will be developed only for WebKit browsers. "We innovate early, often"? I don't think so.


> If you don't like monocultures, you shouldn't have hated plug-ins.

Plugins are each a monoculture. Flash, Silverlight, etc. - these are not multiple implementations of the same standard (like WebKit and Gecko are). Plugins are each a single implementation of a non-standard technology.

> And Firefox OS should allow users to install browsers other than Firefox.

Firefox OS is really just the Firefox browser plus the minimal stuff around it to make it run. I'm not sure I see the point of allowing installation of another browser on it (replacing Firefox there means replacing basically everything).

> I'm very sad that not only Flash 11 but also Silverlight and Unity Web Player are all doomed by the enthusiasm of the "open" standards.

None of those are doomed, except for Silverlight which Microsoft decided to discontinue. And the thing that is dooming Flash is not open standards, but iOS which did not allow it to run, which eventually made Flash irrelevant on mobile.

> And Mozilla, you must implement the Web Audio API ASAP.

Of course; work on this is well underway. You can follow along here:

https://bugzilla.mozilla.org/show_bug.cgi?id=779297


You've given some specific plugins which are monocultures but plugins in general democratize the web by allowing things like Unity3D to flourish. Without plugins, web developers are limited to a blessed set of capabilities offered by the browsers, with no ability to extend the set of capabilities. Let's consider video codecs: without plugins or some way to run near-native code, innovation on the codec front must happen in the browser itself.

So I think there's truth to NinjaWarrior's argument that plugins protect from monoculture.


That's a different form of diversity, but yes, plugins do give more options. And they are helped by running in browsers.

But they are security risks and cause lots of problems for browsers, as well as the monoculture issue (I can't run Flash on linux anymore because they decided to deprecate the flash linux NPAPI plugin).

So I don't think browsers should promote them. But native apps are still fine for them - Unity is flourishing especially on mobile, far more than on desktop browsers; Unity ships native apps on mobile.


Arguably, if asm.js (and perhaps WebCL) takes off, you'll be able to implement custom codecs at near-native speed without any need for a download or the risk of plugins...


The Web Audio spec is more a user manual than a spec, to be fair. We are implementing the API at the moment, but the spec is so unclear that it is impossible to implement it properly. We have a couple things done and working, though.

And about "We innovate early, often", keep in mind that we had a counter-proposal to the Web Audio API, that we implemented. The W3C has chosen Google's instead of ours, and I believe that the fact we are late to implement this particular spec is quite normal.


I happen to be working on my own implementation of Web Audio in Haxe (for easy portability to Flash or outside the browser). I'm still at an early stage and admittedly am heavily guided by the WebKit implementation - while working on parameter changes I had to stop and take notes on their code to see exactly what was going on. What parts of the spec do you find most difficult?


So why did the W3C choose Google's audio proposal? Was there some kind of political wrangling going on there or were there purely technical reasons? I've wondered about that for a while.


The Google proposal was more full featured than the Mozilla Audio Data API. It provided more functionality baked in vs having to implement it in JS. This was considered important for mobile devices.

Mozilla countered with another proposal, the Stream Processing API (https://dvcs.w3.org/hg/audio/raw-file/tip/streams/StreamProc...) but this was a bit too late.

Another advantage of the Google proposal was they had a spec and implementation from the start. The Mozilla proposal had a wiki page write up and an implementation and I don't think there was as much effort put into promoting it as Google did into theirs.


Because it was the better API? Game devs all over the net have praised the Web Audio API.

The Mozilla proposal required audio processing in JavaScript, which in some ideal world is a great idea, but in the real world, where you want audio running on Intel Atom and mobile processors and still have time to run other code, it is not such a good idea.


If Flash, Silverlight, and Unity Web Player were based on open standards and they each had as many competing implementations as there are web browsers, then they might be a real way forward. They're each useful in their own way, but their application is limited because they're each locked to a proprietary implementation.

Granted, Silverlight was "open" in the sense that it had defined specs, but the specs were clearly written by a single company, and competing implementations would be at a significant disadvantage.


Adobe/Macromedia tried to take a step toward standardisation for Flash (ES4). It failed because some were pushing crappy alternatives (Microsoft and Silverlight) and Mozilla got cold feet (Tamarin...). Funny how some now try to shove TypeScript at you when we could have had a better language back then. It doesn't mean that we will still be writing JavaScript in 10 years; hopefully we won't.


Monoculture means here that everyone is running the same implementation. Flash 11, Silverlight and Unity Web Player are all examples of that.

Allowing opaque plugins encourages weakly specified technologies with just one defacto implementation and leads to monocultures.


The Web Audio API is Chrome-only (or I guess it shipped on iOS 6 now? I still can't use it in Safari on my desktop PC) because Google wanted it that way. The API is incredibly poorly specified and poorly designed.

HTML5 games have been able to use <audio> to play sounds since before the Web Audio API existed, but Google never bothered to actually make it work right in Chrome. They were too busy coming up with their own thing.


Game devs all over the net have praised the Web Audio API. Abusing the audio tag was never the correct solution for games. The audio tag was designed for streaming long files, never for sound effects. Just because people were able to abuse it into game use doesn't suggest that was a good solution.


There's nothing fundamentally bad about 'new Audio("effect.wav")' and it performs stunningly well in both Firefox and IE and has for a long time. The one real 'downside' - prebuffering - is absolutely trivial to compensate for and the code to do it in JS is not any larger than the necessary scaffolding to make basic use of Web Audio.
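For anyone curious, the prebuffering workaround mentioned above can be sketched as a tiny round-robin pool of Audio instances. This is an illustrative sketch, not the commenter's actual code: the pool idea and the injectable `createAudio` factory (which defaults to the browser's standard HTMLAudioElement constructor) are assumptions added so the snippet is self-contained and can run outside a browser.

```javascript
// Minimal sketch of prebuffered sound effects via Audio instances.
// createAudio is injectable so the pool can be exercised without a browser;
// in a real page you would omit it and let `new Audio(src)` be used.
function makeSoundPool(src, size, createAudio) {
  createAudio = createAudio || function (s) { return new Audio(s); };
  var pool = [];
  for (var i = 0; i < size; i++) {
    var a = createAudio(src);
    a.preload = "auto"; // hint the browser to buffer the file ahead of time
    pool.push(a);
  }
  var next = 0;
  return {
    play: function () {
      var a = pool[next];
      next = (next + 1) % size; // round-robin so overlapping effects work
      a.currentTime = 0;        // rewind the reused instance
      a.play();
      return a;
    }
  };
}

// Browser usage (illustrative file name):
// var laser = makeSoundPool("effect.wav", 4);
// laser.play();
```

The point is just that the scaffolding is small: a handful of lines, no engine-specific code.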

Chrome was always the loser here; I won't speculate as to the reasons why, only state that for multiple versions, playing simple sound effects with Audio instances had severe latency, audio quality issues, and even crashes. Why wouldn't devs praise the Web Audio API if the only alternative was broken? I certainly was happy when I finally managed to get my HTML5 games to play audio in Chrome without crashing, but that doesn't mean the Web Audio API is well-designed or that it was a good idea for me to have to write Chrome-specific code just to play sound effects.

I'll acknowledge that Web Audio's design probably delivers better runtime performance in scenarios where you're playing dozens or hundreds of non-streaming sound effects, but I still question the motives of anyone who prioritized that over having basic support for audio working correctly.

EDIT: Because I forgot to mention this - I mean seriously, you put a method call in the Web Audio API that blocks the UI thread to synchronously decode audio? In 2012??? And didn't cover the documentation in 'don't use this' warnings?


The web audio API has sample-accurate scheduling which, in our games and this modern age of procedural audio, is critical.


The main point of the web audio api is for procedural generation of sounds rather than playing of pre-computed buffers or files. This is why "new Audio('effect.wav')" isn't a real alternative to it.
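To make that distinction concrete, here is a hedged sketch of the procedural case. The sample math is plain JavaScript; only the commented-out portion assumes the Web Audio API of the time (`AudioContext`/`webkitAudioContext`, `createBuffer`, `createBufferSource`), which `new Audio('effect.wav')` has no equivalent for.

```javascript
// Fill a raw sample buffer with a sine tone -- the "procedural
// generation" step, independent of any browser API.
function fillSine(samples, freq, sampleRate) {
  for (var i = 0; i < samples.length; i++) {
    samples[i] = Math.sin(2 * Math.PI * freq * i / sampleRate);
  }
  return samples;
}

// In a browser, Web Audio plays such a buffer roughly like this:
// var ctx = new (window.AudioContext || window.webkitAudioContext)();
// var buf = ctx.createBuffer(1, 44100, 44100);       // 1 channel, 1 second
// fillSine(buf.getChannelData(0), 440, 44100);       // 440 Hz tone
// var src = ctx.createBufferSource();
// src.buffer = buf;
// src.connect(ctx.destination);
// src.start(0); // an exact ctx.currentTime offset gives sample-accurate scheduling
```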


More to the point: As soon as you want to mix audio levels, <audio> alone is completely insufficient. You need mix control in order to fade background music and loops. I consider this a basic need; the Mozilla API allows you to roll something yourself, but Web Audio is more clearly suited to the application.
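A rough sketch of what that mix control amounts to: a linear fade, which in Web Audio is done declaratively with a gain node. The interpolation helper below is illustrative (it mirrors the ramp a `GainNode` performs, so it can be checked outside a browser); the commented portion assumes the standard `createGain`/`linearRampToValueAtTime` API.

```javascript
// Linear ramp between (t0, v0) and (t1, v1), clamped outside the window --
// the same curve GainNode.gain.linearRampToValueAtTime() applies to audio.
function rampValue(t, t0, v0, t1, v1) {
  if (t <= t0) return v0;
  if (t >= t1) return v1;
  return v0 + (v1 - v0) * (t - t0) / (t1 - t0);
}

// Fading background music in a browser (assumed standard Web Audio API):
// var gain = ctx.createGain();
// musicSource.connect(gain);
// gain.connect(ctx.destination);
// gain.gain.setValueAtTime(1, ctx.currentTime);
// gain.gain.linearRampToValueAtTime(0, ctx.currentTime + 2); // 2-second fade
```

With bare `<audio>` elements you would have to run such a ramp yourself from a timer, adjusting `volume` every few milliseconds.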


Mozilla had procedural audio generation through <audio> with a simple, documented API back when Chrome couldn't even play non-procedural audio through <audio> properly.

http://phoboslab.org/log/2011/03/the-state-of-html5-audio

Procedural audio is also effectively a corner case compared to the vast majority case, which is playing static sounds.


Simple, documented, and slower - you had to do everything in JavaScript. Web Audio API retains the ability to use JavaScript but provides a nice set of fast primitives.


When you still need to explain why Mozilla matters, despite a market share of ~20%, I guess it's already doomed...

Practically nobody uses a browser, or any other product, only to "fight monoculture". It may be an honorable goal, but a product needs compelling advantages beyond that.


The reason I use Firefox over Chrome is mostly down to memory usage with a large amount of tabs open. Chrome really starts chugging when you get over 40 tabs (and it's a pain to organise them too). It also has some really weird caching problems which you only seem to stumble upon if you're doing a lot of web development.


That's a bit ironic. I remember, not so long ago, hearing many people who switched to Chrome deeming Firefox "memory hungry". I hadn't heard similar complaints about Chrome until now.


Firefox improved quite a bit with the more recent rapid release cycle. And don't forget the "organise" part. Once you get beyond a certain number of tabs in a window, the usual tabbed arrangement shows its weakness -- and at this point, you probably need something like XUL to expand the GUI; what Chrome/Safari have to offer in this regard just doesn't seem to cut it.

TreeStyle Tabs is basically my #1 reason why I'm sticking with Firefox.


> TreeStyle Tabs is basically my #1 reason why I'm sticking with Firefox.

Tree Style Tabs is, of course, one of the greatest things ever to happen to Firefox. It's on the list immediately after FireBug.

These days, I have been switching over to xmonad (and other simple window managers). Instead of using tabs inside of a browser, I use windows for each page. Next, I have a global key binding (C-o) that brings up windows as I type out tags which I can set with C-j. This way, I don't need tabs anymore, and I don't need impossibly long lists. I can just type the thing I want. C-o mdn, done. C-o google, done. C-o gmail, done. C-o irssi, yep.

I am still trying to figure out if this is better than constantly seeing a list of open tabs.


That's a bit similar to the way I did it way back when Firefox came out. One of the early window managers that supported this, probably pwm. And it does work quite okay as a simple tab substitute for most applications.

For browsers, I would need some kind of hierarchy support, though. A bit harder to do in a window manager, but then you'd have it for all kinds of applications. Maybe even combine it with some additional exposed information -- so without support you manage your hierarchy yourself, but if there's an easy way to get a buffer list, the top level would come from the wm and the second level from that list.

I would need some kind of display for a browser, though. I'm fine with on-demand buffer lists in editors (and actually turn off sidebars when I use Sublime), but for my browsing habits I'm better off with a list that's always visible.


FWIW, your xmonad technique is easily emulated in Firefox. Try C-l mdn, etc. The awesome bar really is awesome. Not only does it search open tabs, but it also searches your history and titles, and it learns.


It depends on the usage. Users have different habits.

Firefox's memory usage is a bit less predictable (and can get crazy, especially when you leave badly coded js websites open for days in a row) and there are more situations when you say to yourself "I'm leaking memory, I'd better restart", but Chrome memory usage grows faster when you open more tabs.

Plus, Chrome UI is less suited to many tabs: favicons stop appearing beyond around 30 tabs (for me, it's probably machine-dependent), and the omnibar doesn't give you a way to switch to already open tabs by default.

Plus, Chrome autoupdates aren't exactly optimized for low-end configs.


Chrome is great at releasing memory (since it often is just killing a process) and it does tend to use less memory with few tabs.

But even from Chrome's birth, Firefox was already using less memory with lots of tabs.


It's a mixed bag.

Firefox uses far less memory per tab, but still chugs down more memory over time. Chrome uses metric shit-tons of memory, but its multi-process design allows individually killing tabs (or just massacring them in bulk if necessary, which it is).

When memory's tight, Firefox seems to bog down as a whole, while Chrome gets boggy on individual pages (and you can kill/reload these as needed).

I find myself using both though I'll fairly routinely go through and kill off Chrome tabs, and periodically restart Firefox, to keep memory management reasonable.

Firefox's tab and state management is far superior to Chrome. Chrome plays better with some advanced sites (notably Google's own webpages, surprise, surprise).


I've seen Chrome claim up to 512MB of RAM per tab on several occasions when you keep it running for a while. Chrome alone could be enough to use up my (old) system's 8GB of RAM.

Now I have a 32GB system so it's less of an issue, but Chrome is still seriously a memory hog.


It's a very common observation that Chrome can't handle more than a few tens of tabs.

In comparison, I know several people who regularly have more than 500 tabs open in Firefox -- 1200 is the highest number I've heard, though not all of these were loaded at once thanks to Firefox's "Don't load tabs until selected" option.


500 open tabs? Good lord, I don't have that many bookmarks!

I'm really curious, why would one do that?


I've had a little over a hundred open at one point... I go browsing on HN and reddit a lot. I click on a bunch of different links to open in new tabs, and once the tabs start overflowing, I move them to different tab groups, and eventually forget about them. Since I usually close my browser when I am done browsing and reopen it when I want to browse again, it does not make much of a difference in terms of snappiness.


I've heard several people say they keep lots of tabs open like you do and I'm curious why.

Personally, I find I never need more than ~10 for the task at hand. Also, if I ever DO have more than that open, the titles get too small to read and I start forgetting what I have. I either go flipping through all of them to find one, or I just give up and open a duplicate tab, making the problem worse.

Instead, if I want to look at something later, I usually tag it in Pinboard. Later, I can scroll down through a readable list of things I've recently bookmarked, or if I vaguely remember it, I can search by tags or text to find it.

Given this, I can't see a reason to have so many tabs open. Can you explain why you like to work that way?


It's honestly out of laziness. I mostly just keep opening tabs and can't be arsed to go back through them and decide which ones to keep.

I've tried using all manner of sites like Pinboard, but none of them work as I like. I want a one-click bookmark button where I don't have to tag anything. Maybe I should finally get around to building the bookmarking app I really want.


I'm trying Pocket for a similar reason. It adds a little icon by the bookmark icon. There's no tagging or anything required, the things I save go into a private queue 'in the cloud', and it even syncs to my phone.


I can't remember why I stopped using Pocket. I really need to write down reasons why I stop using software/libraries etc; there always comes a time when I need to justify it and I can never remember the reason :D


I use http://historio.us/ for that. One-click bookmarklet, all my links are indexed so I can full-text search for them later.


>> I want a one-click bookmark button where I don't have to tag anything.

Pinboard has several different bookmarklets you can use. One of them is a single-click one that tags the page as "read later".


I do use Instapaper for that.

With some one-click extension.


Up to 100 tabs.

Because I can do it without any slowness.

I'm one of the two Opera users.


I have an explanation as well: a big part of my work consists of exploratory "research" (what is written, what is available?).

Reading up on one alternative often leads to multiple links, which I can either read one by one: return from each, figure out what the next link is, follow that, maybe recurse another level, then step back, etc. To me this is cumbersome.

Instead of keeping this tree in my head I can just ctrl-click any interesting link, ctrl-tab to the next tab when I'm finished with the current one and continue ctrl-clicking.

Finishing up I can even use the save tab set feature of pinboard to stash it all away under a nice label.


I think that is a little petty. While I may be an avid Mozilla supporter (and user of Firefox!), I too got the strange feeling that it was a bit of a whine. At least the title gave that impression.

But reading the entry, it actually offers rather good reasoning for why Mozilla won't switch to WebKit. I am not sure WebKit-dev really needs more companies telling them what to do, or fighting over what should be done.

While others may disagree, I consider XUL a compelling advantage over the alternatives, because I can modify my browser to my liking, without writing my own browser from scratch.

I am not trying to be conservative, but I like the way Firefox does things. Yes, there are issues, it is hardly perfect (I use Chromium to watch Flash and use Java at home, because I fear Firefox's heavy memory footprint[1]).

And Firefox's desktop share has remained rather stable the past year, along with Chrome's share.[2]

[1] Although, in fairness, I do usually have over 100 tabs open. [2] http://arstechnica.com/information-technology/2013/02/intern...


I think you missed an important point.

OP is not arguing people should be using Firefox or other Mozilla products to "fight monoculture". OP is arguing that in order to deliver browsers (and other products) with compelling advantages in a way that advances Mozilla's mission, Mozilla needs to maintain their own rendering engine(s).


What good is having an independent rendering engine if essentially every other aspect of Firefox as of late has been a copy of how Chrome or Safari handles things?

I'm talking about stuff like getting rid of the traditional menu bar and status bar, hiding the protocol in the URL input field, support for SPDY, the new tab page, silent updates, the built-in PDF viewer of the upcoming Firefox 19, and so forth.

Meanwhile, we've also seen them spinning their wheels with failed me-too initiatives like Firefox for Mobile and Firefox OS, rather than producing any true innovation.

Ever since Firefox 4, all that Mozilla has managed to deliver is the Chrome experience, but in a less-effective manner. It makes perfect sense why people are leaving Firefox for Chrome; they'll get a nearly identical UI in Chrome, but they'll get new features sooner, and with better performance.


Err... speaking as someone who has two Firefox OS developer devices in front of him right now, I can tell you that it is indeed very innovative. And so is Firefox for Android (especially when using the Aurora channel).

What you seem to be completely ignoring is the proposal of a WebAPI standard that allows web applications to access hardware and OS features and how this enables a vendor-neutral app ecosystem that doesn't have to answer to Apple or Google. This is way more important than hiding the protocol in the URL.

Firefox for Desktop, Firefox for Android and Firefox OS are a combination that will soon allow you to have the freedom of the web (aka cross-platform apps that doesn't require permission from your vendor to exist) on all your devices.

This is not only innovation but this is fighting for a web that belongs to the users and not an ecosystem where the user is the product being sold.

Have you ever considered why Firefox for Desktop appears to evolve at a slower pace than Chrome or Safari? It's because of standards. Mozilla works in the open; heck, you can have access to all the steps of production of a Firefox feature, and Mozilla strives to make things standard at the W3C or whatever standards organization deals with that feature, while Safari and Chrome will often implement things and not care about interoperability. They can do this because they are the spearheads of two companies, Apple and Google, that have their own objectives. As companies, they need to differentiate themselves from the competition and thus need to evolve fast.

The choice between evolving fast and differentiating alone, or working in cooperation with other companies and committees, is the gap you see in browser evolution. I'd rather have a standard W3C-backed WebAPI and Firefox than WebKit features that don't work on Gecko and Trident and whatever engine launches in the future.

Different from you, I see Mozilla as really innovative because fighting for users and a free standard web is an innovation in these days of vendor lock-in and "I have this feature, you don't".


> Have you ever considered why Firefox for Desktop appears to evolve at a slower pace than Chrome or Safari? It's because of standards.

and here's me thinking it was because Firefox and Gecko consist of 20M lines of completely unmaintainable, crufty C++!


It's about 6M lines last I checked, of C++ that's not too bad.

How many lines is WebKit? And how crufty or maintainable? ;)


>Err... speaking as someone who has two Firefox OS Developer Devices in front of him right now, I can tell you that it is indeed very innovative

How is a me-too ChromeOS clone innovative?


ChromeOS isn't remotely the same thing as Firefox OS. "We’re aiming at mobile/tablet devices rather than a notebook form factor. This is an early-stage project to expose all device capabilities such that infrastructure like phone dialers can be built with Web APIs, and not only “high level” apps like word processors and presentation software. We will of course be happy to work with the Chrome OS team on standards activities, and indeed to share source code where appropriate." https://wiki.mozilla.org/B2G/FAQ


I use Firefox Nightly and the latest Chrome stable. You're just plain incorrect. The Firefox user experience is very different to Chrome. Sure there are similarities but it's ridiculous to claim it's just a copy. They've taken a completely different approach to their built-in PDF viewer and as a developer I can tell you there's a huge number of differences in the way that FF and Chrome handle networking and interface standards. And, yes, these are noticeable to my non-developer friends and family.

The reason so many non-techies are using Chrome is that Google has a much stronger brand presence than Mozilla and has, at least in the UK, spent a fortune on advertising.

I love both of these browsers for different reasons. Please don't claim they're the same just for the sake of making a point.


> as a developer I can tell you there's a huge number of differences in the way that FF and Chrome handle networking and interface standards

could you explain what these changes are and how they're noticeable to the general public?

I'm a developer who's written a Comet webapp you've likely used or at least seen, and I have yet to notice...


The big difference is that although Firefox is abiding by convention, all of these things are pretty easy to configure. I show my status bar, full protocol in the location bar, and so on. I can refresh with F5 or Command-R, things like that. I don't know how many times I've had to use Chrome and been frustrated with something as simple as F5 not working.

Observing the activity on interesting support tickets, I believe that Firefox designers/developers have generally been far more responsive to user requests than have the Chrome designers/developers.

I think those are the two reasons why Firefox is so important: It is a true community project, and it is a browser suitable for power users.


Yes, you're right, but what is also striking (at least to me) is that they seem not very confident in their choice to stick with Gecko, and as far as I can remember that dates back to the launch of Chrome and its new threading model. I'm not saying they made a bad choice (I'm still using FF as my main browser), but they seem to doubt themselves, and that's not really reassuring.


Believe me, Brendan Eich has zero doubt about the value of sticking with Gecko. With this blog post he's just pre-emptively answering the zillion people saying "Opera switched to Webkit, why doesn't Mozilla as well?"


I'm pretty sure you're wrong about that. The last time we had a corporate-controlled monoculture, there really was little practical reason to switch away from Microsoft to Mozilla except the idealism of fighting against corporate control; in fact, there were plenty of practical reasons not to switch, since a big chunk of the web at the time worked only in IE. But a significant number of people did switch, enough to make a difference.


I use Firefox because it is free software. It feels more honest than Chrome, IE or Opera.


I keep Firefox up-to-date because it's the only web browser on Linux that appears to support the PKCS#11/Client-SSL/CoolKey/etc. needed to support using smartcards for SSL-enabled authentication on websites.


Is Firefox more free than Chromium?


Some parts of Chromium lack a free license[1], but it is a lot more free than Chrome.

[1]http://en.wikipedia.org/wiki/Chromium_%28web_browser%29#Lice...


Then I suggest you read the article as he gives a very good explanation.


He gives a lot of political reasons for that, and I agree fully with them, but in the end the only thing that matters is the resulting product. Don't get me wrong, I am very thankful for what Mozilla did for the web at a time when IE was practically the only browser. And I also see that they continue to innovate.

Mozilla can only survive if a sizable percentage of end users perceive Firefox as the best browser for them, and this usually means that Firefox needs to provide the best user experience. Personally I think that other browsers currently provide a better user experience, and I see a lot of people around me switching from FF to Chrome. The reason for that is not WebKit, but Firefox's user interface.


Funny, this is the same argument I see in open vs. commercial software debates.

In the end only the product matters.


I don't think fighting a monoculture is a bad thing. Monoculture is bad in the long run; there are several implications to it, mostly security-wise.

So it's not just monoculture. It's also Mozilla's vast ecosystem of customization and the goodwill its projects generate.

Look at it this way: did Chrome make/sponsor a PDF reader in JS? Did it sponsor a Flash VM in JS? Did the Google Chrome team make tools that would help all vendors equally? Dart and NaCl are valiant efforts, but they help Chrome first and other browsers second.


>I don't think fighting a monoculture is a bad thing.

he didn't imply it was bad, just that no one in this environment is going to switch to or stick with Firefox just to fight a monoculture.


The lady doth protest too much, methinks.


For me the question for the larger web community boils down to this: we benefit as a whole when there are multiple high-quality browser engines, but the implementors do not directly receive a significant share of those benefits (i.e. Opera implemented some things and helped the standards improve, but 99% of the web didn't benefit until the corresponding WebKit or Gecko implementation).

If we want that to change – particularly for ambitious projects like major security or parallelism improvements – it seems like there has to be some way to bring the scale back to the level where a small group can effectively enter the field.


I wonder if Mozilla plans on rewriting Firefox with Rust. Would that be too big of a task for Mozilla? They should also make the decision if they stick with Gecko or not at that point.


They are developing them in parallel. Since Rust is still an experimental project (the language isn't even finalized), it wouldn't make sense to replace Firefox with it _now_.

So there is no "at that point" decision. The Firefox+Gecko codebase is here to stay for a few years.


Uh, have you heard of Servo? https://github.com/mozilla/servo It's that, as a research project.


As soon as I saw the headline that Opera was switching to WebKit, I thought, "I bet I know what announcement Mozilla will be making on April first..."


> [...] Don’t get me wrong. I am not here to criticize WebKit. [...]

Well, no. The whole "Thoughts on WebKit" section of the article spells out problems with WebKit that do not exist in Mozilla-land; most people would call that criticizing.

Which is perfectly fine (how would you write this article without critically comparing the different projects), but please be honest about it and spell it out.


I might not be at the party to drink the spiked punch, but that doesn't mean I won't partake a bit...


Mozilla -- and Brendan Eich specifically -- have held back the web as a platform for years.

Brendan Eich is myopically focused on the DOM and JavaScript monoculture -- rather than the underlying potential of the web: a standardized and completely open application platform.

I assume this is due to his own vested interests and personal investment as the author of JavaScript, but it has been incredibly damaging to the web as a whole. For Google to push anything forward (NaCL, PNaCL, dart), they need at least one other major browser vendor to join them. Apple and Microsoft have vertically integrated native application markets, and have no reason to further the development of the web. That leaves Mozilla.

If Mozilla genuinely cares about providing an open application platform (ala Firefox OS), they'll put Brendan Eich out to pasture and focus on moving past his stranglehold on JavaScript+HTML.


NaCL is tied to particular hardware.

PNaCL doesn't exist yet, so there is nothing to push.

Dart is a much more interesting question.

That said, is there really value in having two and only two languages available (and leaks when you try to use both together in nontrivial ways) compared to using JavaScript as a compilation target? I guess it depends on how good a fit Dart is for your use cases...


JavaScript as a compilation target is inefficient for both runtime and development; it's a hack.

PNaCl -- and any other alternatives -- are in no small part incomplete because Eich has stated repeatedly, in no uncertain terms, that it's JavaScript or nothing.

If you want to produce an alternative, there's no chance of it being accepted.

Imagine this sort of total language myopia in any other general purpose platform.


Yup, I did it all -- mwah-hah-hah! I made Apple and Microsoft, iOS WebKit minions at Apple especially, even SJ, do my bidding in resisting poor pretty PNaCl.

Get your tums out, pal. We're taking PNaCl down for good this year with http://asmjs.org/. Cross-browser.

/be


> Yup, I did it all -- mwah-hah-hah! I made Apple and Microsoft, iOS WebKit minions at Apple especially, even SJ, do my bidding in resisting poor pretty PNaCl.

No, you've simply held back the only other entity interested in making the web a viable app platform:

- MS's, Apple's, and Google's behavior is aligned with their corporate incentives.

- In theory, Mozilla's behavior ought to be aligned with the interests of the web at large, but in practice, Mozilla is aligned with you, and your behavior is aligned with your own personal interest in a web platform monoculture based on your technology.

Mozilla and Google, in concert, have the ability to make it a market necessity for Apple and Microsoft to follow suit. Google alone does not.

This makes you the lynchpin at an organization that in theory was created for the very purpose of advancing the interests of the web.

> Get your tums out, pal. We're taking PNaCl down for good this year with http://asmjs.org/. Cross-browser

It takes a True Believer to abandon decades of research, ignore repeated market successes (in the application platform space) that vastly surpass anything they've produced, and then continue to myopically push their own invention into areas where it simply does not belong.

The funny thing about asm.js is that it's an admission of the failings of standard JavaScript for this purpose, so much so that you have to define a strict subset, on top of which implementors will have to invest even more time and complex effort in providing quality implementations.

This is death by a thousand cuts.


"on top of which implementors will have to invest even more time and complex effort in providing quality implementations."

It is actually quite easy to implement asm.js; you just carve a "native mode" out of the VM infrastructure that already exists. Much easier than inventing a full-stack VM from scratch.
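To make the "carve a native mode out of the existing VM" point concrete, here is a minimal sketch of what the asm.js subset looks like. The module name and functions are hypothetical; the key idea (which the asm.js spec does define) is that parameter and return types are declared through ordinary JS coercions (`x|0` for int, `+x` for double), so a validating engine can compile the whole module ahead of time, while any other engine just runs it as plain JavaScript:

```javascript
// Hypothetical asm.js-style module. In a validating engine the "use asm"
// directive triggers AOT compilation; elsewhere it is an inert string and
// the code runs as ordinary JavaScript with identical results.
function MyModule(stdlib) {
  "use asm";
  var sqrt = stdlib.Math.sqrt; // standard-library import

  function hypot(x, y) {
    x = +x;                    // type annotation: double
    y = +y;
    return +sqrt(x * x + y * y); // return type annotation: double
  }

  function addInts(a, b) {
    a = a | 0;                 // type annotation: int
    b = b | 0;
    return (a + b) | 0;        // return type annotation: int
  }

  return { hypot: hypot, addInts: addInts }; // exports
}

var m = MyModule({ Math: Math }); // links in any engine, validated or not
```

Because every annotation is also a normal JS operator, the backwards-compatibility argument holds: the same source runs everywhere today, and engines that add a validator get the fast path for free.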


It'd be even easier with a clean bytecode, and it wouldn't require nearly as much effort and round-about solutions. It also would simplify portable tool development, including source-level debuggers.


And if we're living in a fantasy world, we can all get a pony, too. The fact remains that JavaScript is widely deployed and has multiple viable competing implementations. Any hypothetical bytecode format starts at a huge disadvantage due to this reality. The path of least resistance here is to target JavaScript.


The "path of least resistance" has not been particularly successful in helping the web provide a robust platform for application development over the past 10 years. I don't see why the hack-and-slash "pragmatism" should be expected to start working now.


Now you are just smoking crack. The web is hands-down the most widely used application platform in the world, and you're going to claim it's not a robust platform for application development? I don't know what planet you live on.


The web is the most widely used document platform in the world.

And no, it's not a robust application development platform. Working with it is an exercise in constant compromise between bad technologies and the quality of the user experience.

You're smoking something too, if you're equating the content-centric web with the breadth and depth of the market of native mobile and desktop apps.

Of course, I also get a better experience from the NYTimes mobile app; it's simply that the web can do content less badly than it can do apps.

I shudder in horror at writing one of our large apps in JavaScript, maintaining it, and desperately trying to keep frame rates up (yes, that does matter to more than just games), conform to some sane semblance of platform standards to which users are accustomed, reuse a platform widget toolkit, etc.

The myopia of the web crowd is why your platform continues to suck.


> The funny thing about asm.js is that it's an admission of the failings of standard JavaScript for this purpose, so much so that you have to define a strict subset

I'm surprised you find that approach controversial. Have you not seen the awesomesauce that Re2 and PyPy bring to the table by defining strict subsets of PCRE and Python, respectively?


> JavaScript as a compilation target is ineffecient for both runtime and development; it's a hack.

Honestly that's only an implementation dependent detail. If there is a fast compiled Lisp like SBCL, a fast JavaScript can certainly be done.


> Honestly that's only an implementation dependent detail.

Not really, no. JavaScript is a ridiculously high-level target for another language, and this introduces a huge amount of complexity in any effort to target it efficiently.

You can pare down JavaScript, as Eich seems to be banking on, but what possible technical reason is there for this? If what you care about is compatibility with Apple/MS browsers, then generate JavaScript from your intermediate bitcode/bytecode, and let those browsers be slow and complicated and have worse development tooling.


"If what you care about is compatibility with Apple/MS browsers, then generate JavaScript from your intermediate bitcode/bytecode"

Which is precisely what the Mozilla Emscripten project does.


Emscripten doesn't treat LLVM bitcode as a first-class target, it treats it as an intermediate target to be translated into JavaScript.

[high level language] -> [low level representation] -> [high level language] -> [low level representation] -> [execution]

This is ridiculous to aim for as your first-order target.


PNaCL is "incomplete" (as in, not working) because no one actually knows how to make it work, though they've been trying to figure it out for years. The fact that it's a hard problem has nothing to do with Brendan saying he doesn't like the idea.


> For Google to push anything forward (NaCL, PNaCL, dart), they need at least one other major browser vendor to join them. Apple and Microsoft have vertically integrated native application markets, and have no reason to further the development of the web. That leaves Mozilla.

Yes, but Mozilla still needs to be convinced that the new technology makes sense for the web. Mozilla collaborates with Google on lots of new things for the web (WebRTC, for which see recent HN stories on Chrome/Firefox interoperability; Web Intents; etc.), but it does disagree on NaCl, for example.


Yes, but: Mozilla disagrees because Eich disagrees, and Eich disagrees because he's built his entire career on top of his invention of JavaScript.

Is there really anyone other than Eich -- producing ANY platform -- that thinks that JavaScript is the correct baseline virtual machine to target?

Breakdown of low-level targets:

Google: Native, NaCL, PNaCL, Dalvik

Apple: Native, LLVM (eg, for OpenGL feature emulation).

MS: Native, CLR

Oracle: JVM, Native

Mozilla: JavaScript


An alternate explanation that doesn't involve a fantasy world where Brendan Eich issues oppressive decrees from his throne of skulls:

Maybe Mozilla dislikes NaCL because there's actually a lot wrong with it?


> An alternate explanation that doesn't involve a fantasy world where Brendan Eich issues oppressive decrees from his throne of skulls ...

Eich is the CTO and sets the technical direction for one of the four major browsers. The browsers can only move standards forward in cooperation with one another.

This isn't a fantasy world.

> Maybe Mozilla dislikes NaCL because there's actually a lot wrong with it?

Then maybe they could propose a solution to the problem other than adding more JavaScript? Mozilla also dislikes standardized bytecode and virtual machines:

http://www.aminutewithbrendan.com/pages/20101122


What would a standardized bytecode bring to the table over asm.js?

To me the choice is between more convenient encoding format for the bytecode and backwards compatibility with all browsers, and going with the latter sounds eminently reasonable to me. The x86 bytecode encoding is really ugly too (look at those one-byte-long binary coded decimal instructions), but the survival characteristics of backwards compatibility are undisputed...


> What would a standardized bytecode bring to the table over asm.js?

A better technical solution and lower development friction throughout the entire stack. The hackiness of the web already introduces an enormous amount of cumulative friction, and asm.js just adds more.

People have been asking for a solution -- and working on them -- for 5+ years, and Eich has consistently replied that such a thing is unnecessary. In many respects asm.js is an admission of defeat, in that it's not standard JavaScript (it's a strictly designed subset), and they are attempting to tune it to the purpose of providing a standard bytecode format, while still being able to claim that it is JavaScript.

In the same period of time, entirely new and proprietary mobile platforms have emerged and eaten a huge portion of the application market's mindshare and marketshare.

Perhaps it's time to stop listening to someone who can't think outside his own box, and instead choose a technology that is actually well-suited to the problemspace.

> To me the choice is between more convenient encoding format for the bytecode and backwards compatibility with all browsers ...

That could be achieved by generating JS from the bytecode, which would turn the problem into a temporary one that disappears as browsers are updated, instead of yet another time and resource draining wart on the web platform.

Human readable JavaScript "byte code"? Seriously? This is the kind of backwards thinking that left the door wide open for native mobile apps to own the market.

I would love to be able to target the web instead of proprietary platforms, but the technology stack isn't there and won't be as long as people like Eich are running the show, and remain fixated on what worked for web documents as the solution for what will work the future of web apps.


"In many respects asm.js is an admission of defeat"

Going with a worse-is-better solution for the purposes of backwards compatibility is always an admission of defeat. But it's a very practical one. It's an admission of defeat that has made Intel and Microsoft billions, for example.

"while still being able to claim that it is JavaScript."

But it is JavaScript. The ECMA-262 spec says how to interpret it.

"That could be achieved by generating JS from the bytecode, which would turn the problem into a temporary one that disappears as browsers are updated, instead of yet another time and resource draining wart on the web platform."

And then Web developers have to ship two binaries. Foisting the problem of backwards compatibility onto Web developers for the sake of making a nicer bytecode parser doesn't seem like a win to me.


It's the users that pay the cost, instead of the developers. Mobile is winning the app war for a reason.


Do note that Mozilla is not a hive-mind, and people disagree with Brendan on a regular basis. I haven't heard of anyone within Mozilla who was actually interested in NaCL/PNaCL/dart.


As I said to another commenter, this is selection bias.

Do you think a systems/application developer who believes in those ideas would choose to work at, or be a part of, Mozilla, given Eich's clear and verbose positions, which stand entirely apart from decades of success shipping consumer applications for desktop and now mobile devices?

As someone who writes consumer applications, I want a common application platform, but I'm not going to sacrifice my tooling, work quality, or user experience to contribute to a fundamentally flawed approach, just because it's "open".


You're reaching; no one outside of Google likes NaCl or Dart (and even many inside Google don't care for them). It's not just Mozilla that's against it.

NaCl and Dart were both created in a dark room without anyone else's input.


You're a web developer, yes? In my circles NaCl is looked at with interest because there's no possible way for us to produce apps to the level of quality we do elsewhere -- and without a huge amount of pain -- while using the web's organically grown technology stack.

Google at least understands the flaws. Web developers seem to have their head in the sand while mobile may very well eat their lunch.


> Yes, but: Mozilla disagrees because Eich disagrees

Mozilla is not a dictatorship - it's a nonprofit open source project. Obviously Brendan is a pivotal figure, but people have many opinions on many topics; just read the Mozilla mailing lists (which are public).

On this topic, AFAICT the great majority agree with Brendan.


It's selection bias. I, like many other professional application and systems engineers who didn't originate in the web space, wouldn't participate in Mozilla, nor try to work there.

I'm just not interested in continuing to try to fit the square peg of DOM/CSS/JS into the round hole of being an application platform. It has been clear from Brendan (for at least half a decade now, if memory serves) that this is what they're doing and will continue to do.

In the meantime, iOS and Android appeared from nowhere and turned the engineering departments of many companies -- most of which were previously focused solely on server+web -- on their head.

At the same time, Google can barely give Chromebooks away. This must tell you something about the efficacy of these strategies.


iOS and Android are successful because they offer a great selection of powerful APIs, not because of the particular binary representation they use for applications.

NaCl actually offers crappier APIs than the Web platform, and it runs in a box where it can't directly manipulate the real Web APIs. asm.js is intriguing because it offers a very natural path to a foreign function interface to all the stuff exposed to JS.


> iOS and Android are successful because they offer a great selection of powerful APIs, not because of the particular binary representation they use for applications.

They also provide great battery life and user-visible performance (iOS especially), have incredibly well integrated development tools (see Apple's Instruments and its power, CPU, syscall, et al profiling), and give software authors the escape hatches they need to maximize performance when absolutely necessary.

It's not just a question of nice APIs. Layering another level of JavaScript spit and bubblegum on top of the problem isn't going to make any of the above easier.


Question for skatepark: If PNaCl were finished now and supported by every major browser, what would you do with it? Would you use it to supplement the existing web stack for performance-critical code? Or would you write whole applications in some other language (if so, which one?) and target the browser via PNaCl?


We'd also need a full widget toolkit, set of foundation/standard libraries, and all the other functionality we take for granted on other platforms.

With all that in place, I'd use whatever reasonable (JS isn't) language existed as a norm on the platform to write applications targeting the browser, instead of Android, iOS, and Mac OS X.

My only allegiances are to user experience and the quality tooling necessary to ensure it. The web as-is provides neither.


> My only allegiances are to user experience and the quality tooling necessary to ensure it. The web as-is provides neither.

You assert this, but you haven't provided evidence. Web application developers are providing great user experiences. What tools do you need to provide a good user experience that the web as it is now doesn't have?


> You assert this, but you haven't provided evidence. Web application developers are providing great user experiences.

Are you serious, or are you really just that out of touch with how we work on desktop and mobile apps?

> What tools do you need to provide a good user experience that the web as it is now doesn't have?

- Performance, performance, performance, performance.

- Performance.

- The ability to use the right language for the job. The right language isn't the right language if the performance is poor, so no, Emscripten isn't a solution. I'm talking about everything from exposing SIMD intrinsics when writing time-critical software to languages that actually support compiler-checked type safety.

- Performance.

- Common widget toolkits providing a common user experience across applications, from which users can learn platform conventions and be immediately comfortable and familiar with an application. These toolkits save us from reinventing the wheel every single time. No, bubblegum-and-spit collections of JavaScript and CSS are not the same thing.

- A standard library which provides the functionality necessary to conform with platform expectations and integrate with the platform.

- Tools. Debuggers, compilers, and most especially, profilers and deep instrumentation.

- Platform integration. This isn't just "cameras". It's also the iTunes/Media query APIs, in-app purchase, airplay, spotlight plugins, quickview plugins, menu items, and the mountain of other things that platform vendors implement to provide an integrated and coherent experience. Platform vendors push forward the user experience by providing richer platforms. Web vendors don't.

- Unification of code, UI, and styling. The DOM has to go, as does CSS and JS as separate entities. It's a ridiculous model for UI development and it makes producing genuinely re-usable and interoperable component toolkits very difficult.

I could probably go on all day. I WANT a non-proprietary, open, standardized application platform, but I need a platform that lets me provide the best available experience to end-users, and the web isn't it. I'm writing my software for my customers, and choosing technology over user-experience doesn't do my customers any favors.


I will now relent of defending Mozilla, in which I have zero vested interest. Mozilla's mission and approach seem good on paper (or in HTML), but you make good points.

It seems to me, on reflection, that I have been playing the role of religious apologist, while you have been the skeptic. This is ironic, because I have been in the opposite situation with regard to my former real religion for the past couple of years.

It seems to me that Android is the closest thing we have to a non-proprietary application platform, and even that is more tightly controlled by one company than we might like.

For now, I guess the best approach to writing great apps is to write the UI-independent core in a reasonable cross-platform language, and then use each platform's native UI constructs. iOS makes this difficult for any language higher-level than Objective-C, since there can be no JIT compilation for iOS. But I'm thinking now that C# might be a reasonable cross-platform language, given Xamarin's work on Mono for mobile platforms.


> Unification of code, UI, and styling. The DOM has to go, as does CSS and JS as separate entities. It's a ridiculous model for UI development and it makes producing genuinely re-usable and interoperable component toolkits very difficult.

How does separating UI styling from UI construction code make creating reusable components more tricky, exactly? Other points might be close-ish to the mark, but this one seems way off.


To address a couple of things that you said more directly:

> Are you serious, or are you really just that out of touch with how we work on desktop and mobile apps?

Yes, I have been serious, and sincere, throughout this dialogue. I guess I really am that out of touch with how truly excellent desktop and mobile apps are developed. I have done most of my work in dynamically typed languages (Python, Lua, and JavaScript), with little regard for performance. When developing any UI more complex than a yes/no dialog on Windows, I have generally reached for HTML (embedded MSHTML to be specific), despite its shortcomings. On Mac and iOS, I have been lucky in that I can use Lua while still having a native UI and platform integration.

It seems to me that while mainstream desktop and mobile platforms have been mediocre in their various ways, none has forced mediocrity upon the application developer as much as the Web platform.

> - The ability to use the right language for the job. The right language isn't the right language if the performance is pot, so no, Emscripten isn't a solution. I'm talking about everything from exposing SIMD intrinsics when writing time-critical software to languages that actually support compiler-checked type-safety.

When I was defending Mozilla and the Web platform, my response (in true quasi-religious fashion) would have been to keep having faith in the almighty tracing JIT compiler. But I was reading yesterday about tracing JIT compilers, and I noticed that they need guards around loads and stores in case the inputs to a particular invocation of a trace are of different types than the types for which the trace was compiled. You're right to point out that compile-time type safety has some performance benefit. For truly performance-critical code, asm.js does provide for static type checking and AOT compilation. Still, we have yet to see how many browser makers will implement these things.

> - Common widget toolkits providing a common user-experience across applications, from which users can learn platform conventions and be immediately comfortable and familiar with an application. These toolkits allow us to reinventing the wheel every single time. No, bubblegum and spit collections of JavaScript and CSS are not the same thing.

Yes! Yes! I most emphatically agree with this point. It seems to me that modern web application UIs are still a free-for-all, with many app and toolkit developers defining their own widgets as they see fit. In contrast to the Apple platforms, GNOME, and even Windows IIRC, there's no set of human interface guidelines for the Web platform.

> - Platform integration. This isn't just "cameras". It's also the iTunes/Media query APIs, in-app purchase, airplay, spotlight plugins, quickview plugins, menu items, and the mountain of other things that platform vendors implement to provide an integrated and coherent experience. Platform vendors push forward the user experience by providing richer platforms. Web vendors don't.

Mozilla is introducing new APIs to expose more platform features, at least for Firefox OS and Firefox for Android. But that does little good for today's mobile application developers. And I guess it's a distinctive behavior of any "true believer" to have faith that promises will be fulfilled at some indeterminate future time, and encourage others to do the same.

> - Unification of code, UI, and styling. The DOM has to go, as does CSS and JS as separate entities. It's a ridiculous model for UI development and it makes producing genuinely re-usable and interoperable component toolkits very difficult.

I'm not quite convinced on this one. Reusable component toolkits do exist for the Web platform, so presumably most application developers don't have to do this very difficult work. Can you elaborate some more on what's wrong with the HTML/CSS/JS trio, or point me at an existing critique that you think is on target?


I think I have identified a few potentially faulty assumptions underlying your comments.

You assume that the only reason that Mozilla is the only major player to push JavaScript as the way forward is that Brendan invented JS. Another interpretation which is less cynical toward Mozilla is that the other players are most interested in their own platforms, whereas Mozilla is most interested in the Web as a whole. A JavaScript runtime is the one runtime that all browsers have, so rather than fragment the landscape with a second runtime, Mozilla is pushing JS as the way forward.

You assume, as if it were an axiom, that compiling other languages to JavaScript is less efficient than compiling to LLVM bitcode and/or x86/ARM native code. I'll take these separately.

Targeting JavaScript versus LLVM bitcode: For good performance, either of these will be JIT-compiled to native code. To claim that LLVM bitcode is a better target, you need to show that some JIT compilation technique implemented by LLVM/PNaCl is made impossible by JavaScript the language. JS typed arrays provide a C-friendly memory model; I don't know of anything else that's missing. I'd be happy to be educated though.
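To make the "C-friendly memory model" point concrete, here's a sketch (names and sizes are my own invention) of the technique compilers targeting JS use: one typed-array heap standing in for all of C's memory.

```javascript
// A flat "heap", the way C-to-JS compilers model memory: one ArrayBuffer,
// with typed-array views giving byte-level and word-level access.
const heap = new ArrayBuffer(64 * 1024);   // 64 KiB of linear memory
const HEAP8 = new Uint8Array(heap);        // byte view
const HEAP32 = new Int32Array(heap);       // 32-bit word view

// Emulate the C write `*(int*)ptr = value;` (ptr is a byte address)
function storeInt(ptr, value) {
  HEAP32[ptr >> 2] = value;
}

// Emulate the C read `return *(int*)ptr;`
function loadInt(ptr) {
  return HEAP32[ptr >> 2] | 0;
}

storeInt(16, 42);
console.log(loadInt(16));   // 42
```

Pointers are just integers indexing the array, so pointer arithmetic, structs, and casts all translate directly; bounds are enforced for free by the typed array itself.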

Targeting JavaScript versus native code: Last I checked, Google is not going to push NaCl for standard web apps until PNaCl is ready. So we'll need the cross-platform target and JIT compilation anyway. You may argue that even when PNaCl is ready, application developers will also ship x86 and/or ARM builds for maximal efficiency. But let's dig a little deeper: in all likelihood, the PNaCl, x86, and ARM builds are all generated by a compiler from a single intermediate representation. In principle, why couldn't a subset of JavaScript be compiled to equally efficient native code? Even assuming that offline ahead-of-time compilation yields more efficient code than JIT compilation, a browser could observe which apps the user uses most, and apply more aggressive offline compilation to those apps. So it is by no means imperative that application developers ship native code.

You also assume that Mozilla's insistence on the DOM and other existing standards, rather than PNaCl + Pepper, is unequivocally holding back the Web as an application platform. Let's drill down into specifics. What features do NaCl and Pepper provide that aren't (yet) covered by standardized APIs? My understanding was that Canvas and WebGL are helping a lot.

Finally, you may be unaware of some limitations in NaCl. Specifically, because of the way NaCl validates code, you can't run a JIT compiler on top of NaCl. So, if you thought that NaCl would be a better target for Java/Python/Ruby/pick your favorite than JS, think again. For good performance, you'd need something to compile your source language to something else anyway, either JS or LLVM bitcode. Might as well be JS.

I think I have demonstrated that Mozilla's insistence on JS and other standards is not holding back the Web as an application platform by any means. Indeed, none of the other major players you've mentioned, not even Google, is as serious about the Web -- the open Web -- as Mozilla.


> I think I have demonstrated that Mozilla's insistence on JS and other standards is not holding back the Web as an application platform by any means.

I don't really believe that you have. As simple aggregate counter-evidence, I will point to the popularity and performance of native mobile platforms, and their adoption by consumers and developers alike.

More specifically:

> To claim that LLVM bitcode is a better target, you need to show that some JIT compilation technique implemented by LLVM/PNaCl is made impossible by JavaScript the language. JS typed arrays provide a C-friendly memory model; I don't know of anything else that's missing. I'd be happy to be educated though.

In the earlier days of Java, a common refrain was that the JVM could get better-than-native performance because it could make runtime decisions about JIT optimization. In reality, the complexity of performing this task generally outweighs the benefits compared to native code, and the JVM hasn't really succeeded outside of specific micro-optimizations. As a whole, the JVM is still slower than most native code.

Your argument seems to be predicated on a similar fallacy: the notion that since you can, in _theory_, compile JS down to an efficient byte-code (or even AOT compile it), it is as good as an efficient and well-defined byte-code to begin with.

Yet, we have numerous historical examples of how adding more complexity between you and your intended target (eg, performance, versatility) doesn't help you achieve it. JS is an expensive and complicated intermediate.

This is equivalent to selling something at a loss and claiming you'll make it up in volume. You're adding complexity and challenge to an already challenging problem.

> You also assume that Mozilla's insistence on the DOM and other existing standards, rather than PNaCl + Pepper ...

Actually, NaCl and Pepper have nothing to do with the DOM argument. The DOM argument is just an example of where rigorous adherence to what worked for documents is not working for applications.

> Finally, you may be unaware of some limitations in NaCl. Specifically, because of the way NaCl validates code, you can't run a JIT compiler on top of NaCl. So, if you thought that NaCl would be a better target for Java/Python/Ruby/pick your favorite than JS, think again.

You are aware that Mono AOT compiles C#, correct? As does RubyMotion. JIT is not strictly necessary here.


> As simple aggregate counter-evidence, I will point to the popularity, performance, consumer, and developer adoption of mobile platforms.

This only implies that native mobile applications are more popular than mobile Web applications; it doesn't imply any particular reason why. Besides, NaCl was still in the research phase when native mobile apps started to catch on, so Mozilla's refusal to get behind NaCl is irrelevant to the popularity of native mobile apps.

> In the earlier days of Java, a common refrain was that the JVM could get better-than-native performance because it could make runtime decisions about JIT optimization. In reality, the complexity of performing this task generally outweighs the benefits compared to native code, and the JVM hasn't really succeeded outside of specific micro-optimizations. As a whole, the JVM is still slower than most native code.

Are you arguing that for optimal performance, applications will always need to be distributed as native machine code? I will be disappointed if it turns out that they do, because that's how we get locked into proprietary platforms, at the CPU level if not at the OS level.

> You are aware that Mono AOT compiles C#, correct? As does RubyMotion. JIT is not strictly necessary here.

Yes, I was aware. Perhaps I overstated the importance of JIT compilation. However, I think we need more data. Some comparative benchmarks between RubyMotion (in the simulator) and a JIT-compiled Ruby (on the same Mac) would be relevant.


Good points, especially the first one.

(Or did I, from my Throne of Skulls, send my T-1000 back in time to stop NaCl from being used by Apple on the first iPhone? Mwa-hah-hah!)

On Skatepark's planet it seems (a) Android (definitely a mobile platform, now the biggest one by users) has native-speed Dalvik apps; and (b) most developers write those in preference to hybrid or web-only apps.

Not so on Earth. Regarding (a), SpiderMonkey (and I'm pretty sure V8) beat Dalvik on same hardware on standard benchmarks. On (b), when last I checked over 70% of Google Play apps used embedded WebViews.

Only by reading "mobile" as "iOS" do the commenter from another planet's performance assertions even half-way hold up. The historical cause/effect claims just don't make sense (unless I do have backward time travel).

/be


"In the earlier days of Java, a common refrain was that the JVM could get better-than-native performance because it could make runtime decisions about JIT optimization. In reality, the complexity of performing this task generally outweighs the benefits compared to native code, and the JVM hasn't really succeeded outside of specific micro-optimizations. As a whole, the JVM is still slower than most native code."

The JVM is a much higher-level bytecode than asm.js; asm.js is essentially just a JS encoding of LLVM. To name just one example, the JVM's memory model is composed of garbage-collected objects, while asm.js' memory model is a heap and stack as a linear array of bytes. There's a world of difference between the JVM intermediate language and the LLVM/asm.js intermediate language.
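To illustrate the "linear array of bytes" model, here's a rough sketch in the asm.js idiom (my own toy example, not from the spec): types are encoded with JS coercions like `|0` for int, and the only state is a heap of raw bytes. It likely wouldn't pass a strict asm.js validator, but it runs as ordinary JS either way.

```javascript
// Illustrative asm.js-style module: int types are expressed via `|0`
// coercions, and all memory is one linear buffer passed in from outside.
function AsmModule(stdlib, foreign, buffer) {
  "use asm";
  var HEAP32 = new stdlib.Int32Array(buffer);

  // sum(ptr, n): add n consecutive 32-bit ints at byte address ptr
  function sum(ptr, n) {
    ptr = ptr | 0;               // parameter type annotation: int
    n = n | 0;
    var acc = 0;
    var i = 0;
    for (i = 0; (i | 0) < (n | 0); i = (i + 1) | 0) {
      acc = (acc + (HEAP32[(ptr + (i << 2)) >> 2] | 0)) | 0;
    }
    return acc | 0;              // return type annotation: int
  }
  return { sum: sum };
}

const buf = new ArrayBuffer(0x10000);             // 64 KiB heap
const m = AsmModule({ Int32Array: Int32Array }, {}, buf);
new Int32Array(buf).set([1, 2, 3, 4], 0);
console.log(m.sum(0, 4));                         // 10
```

The point is that nothing here touches JS objects or the garbage collector; a validating engine can compile it straight to machine arithmetic and loads/stores, much as it would LLVM IR.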

The idea that the LLVM intermediate language cannot be compiled into something as efficient as native code is trivially disproven by the fact that this is how clang works: it compiles to LLVM IR and then to native code.

"You are aware that Mono AOT compiles C#, correct? As does RubyMotion. JIT is not strictly necessary here."

Polymorphic inline caching, essential for performance of dynamic languages, requires self-modifying code.
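For readers unfamiliar with the technique: here's a conceptual sketch (all names made up) of what an inline cache does. A real JIT patches machine code at the call site; in this simulation the "patching" is just swapping what the closure does on the fast path.

```javascript
// Conceptual model of an inline cache for a property access site.
// The slow path does a generic lookup, then "patches" the site so that
// subsequent objects of the same class take the fast path.
function makeCachedGetter(propName) {
  let cachedCtor = null;   // stands in for a hidden-class/shape check
  let cachedGet = null;
  return function (obj) {
    if (obj.constructor === cachedCtor) {
      return cachedGet(obj);          // fast path: cache hit
    }
    // slow path: generic lookup, then update the cache in place
    cachedCtor = obj.constructor;
    cachedGet = (o) => o[propName];
    return cachedGet(obj);
  };
}

class Point { constructor(x) { this.x = x; } }
const getX = makeCachedGetter("x");
console.log(getX(new Point(7)));   // 7: slow path, cache now holds Point
console.log(getX(new Point(8)));   // 8: fast path
```

In native code the "swap" is an overwrite of the jump target in the emitted instructions, which is exactly the self-modifying-code capability NaCl's validator forbids.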


> asm.js is essentially just a JS encoding of LLVM

This seems like a bit of a stretch; how do you encode an indirect branch in JS?

But it looks like an interesting project and I look forward to seeing what they are able to accomplish. I think it may be difficult to get native code performance while also being a secure enough sandbox to run in-process with the browser. LLVM is fast in large part because it does not have the requirement of being a sandboxed attack surface. Making it safe will require performance compromises, like guards on loads and stores. Even (P)NaCl, despite its sandboxing, runs in a separate process, limiting the damage if the sandbox is broken out of. JavaScript has traditionally taken the approach of achieving safety by exposing only high-level concepts (objects, attributes, etc); if it aims to support lower-level programming idioms with low overhead, that may be at odds with also being highly safe.

It's an interesting design space and I will be curious to watch the project evolve.


"This seems like a bit of a stretch; how do you encode an indirect branch in JS?"

asm.js has a switch instruction to cover most uses of indirect branches. First-class function pointers are being worked on for a future version of the spec. So this should cover the vast majority of use cases of indirect branching. If this is not enough, I could imagine an extension of JavaScript to support e.g. first-class labels. (Note that LLVM did not support first-class labels for a long time, and of course portable C does not.)
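For what it's worth, a sketch of the function-table pattern (my own example): a C function pointer becomes an index into an array of same-signature functions, masked so the lookup can never go out of bounds.

```javascript
// Encoding C function pointers in JS: a table of functions sharing one
// signature, indexed with a bitmask (the asm.js function-table pattern).
function add(a, b) { return (a + b) | 0; }
function sub(a, b) { return (a - b) | 0; }
function mul(a, b) { return (a * b) | 0; }
function nop(a, b) { return 0; }

var FTABLE = [add, sub, mul, nop];   // length must be a power of two

// The C call `fp(a, b)` compiles to a masked table lookup:
function callIndirect(fp, a, b) {
  return FTABLE[fp & 3](a, b) | 0;   // `& 3` keeps the index in range
}

console.log(callIndirect(0, 6, 7)); // 13 (add)
console.log(callIndirect(2, 6, 7)); // 42 (mul)
```

The mask doubles as a safety check: even a corrupted "pointer" value can only ever select another function of the same signature, which is stronger than what raw native indirect branches give you.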

"Making it safe will require performance compromises, like guards on loads and stores."

Naturally, but this is also the case with NaCl or PNaCl, unless NaCl drops all of its software fault isolation techniques and starts relying solely on the OS sandbox. I suppose in theory one could compile asm.js in this way if it was really desired. The entry and exit points from asm.js to the DOM are fairly well defined, so in theory one could IPC them as well.

I'm personally skeptical that asm.js adds much more security risk over the JavaScript JIT that already must exist, however; the semantics of asm.js are extremely simple even compared to LLVM (as asm.js is untyped), much less the full semantics of JavaScript.


If it delivers on its promise (native-ish code speeds without imposed GC overhead), I'll be the first in line to use it! :)


> In the earlier days of Java, a common refrain was that the JVM could get better-than-native performance because it could make runtime decisions about JIT optimization

In reality, that goal was achieved ;-)

> You are aware that Mono AOT compiles C#, correct?

Java can be compiled straight to native code too; GCJ works that way, for instance. The performance of the resulting binaries is abysmal, though.

C# is not really a good example because C# was designed for AOT compilation. For example, most methods in the standard library are not polymorphic and methods in C# are final by default, whereas with Java it's the complete opposite, with methods being virtual by default. With all respect to Mono and Xamarin, it's hardly an epitome of performance.

Also, right now JRuby beats just about every other available Ruby implementation in performance, including MRI, RubyMotion, Rubinius, and IronRuby.

> JIT is not strictly necessary here

For dynamic languages, if you want decent performance, all the known techniques rely on JIT compilation. One of the most important optimizations the JVM does is inlining of virtual method calls at runtime, something which is impossible to do ahead of time. Another important technique is tracing JIT compilation, used successfully in recent JavaScript engines (except V8, I think) and by LuaJIT [1].

All known techniques for optimizing dynamic languages or runtime-polymorphic method dispatch require recompilation of code based on changing conditions.

[1] http://en.wikipedia.org/wiki/Trace_tree
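A small sketch (entirely my own, not from any engine) of why runtime inlining of virtual calls can't be done ahead of time: the JIT observes which method a polymorphic call site actually dispatches to, then inlines that body behind a cheap type guard, falling back to generic dispatch if the guess is wrong.

```javascript
// Speculative devirtualization, as a JIT might apply it after observing
// that `shape.area()` at this call site always dispatched to Circle.area.
class Circle {
  constructor(r) { this.r = r; }
  area() { return 3.14159 * this.r * this.r; }
}
class Square {
  constructor(s) { this.s = s; }
  area() { return this.s * this.s; }
}

function areaGeneric(shape) { return shape.area(); }  // virtual dispatch

function areaSpecialized(shape) {
  if (shape instanceof Circle) {                 // guard on observed type
    return 3.14159 * shape.r * shape.r;          // inlined Circle.area body
  }
  return areaGeneric(shape);                     // "deoptimize" to dispatch
}

console.log(areaSpecialized(new Circle(1)));  // inlined fast path
console.log(areaSpecialized(new Square(3)));  // 9, via the fallback
```

An AOT compiler can't do this in general because it doesn't know which types will actually flow through the call site; the guard-plus-fallback shape is only profitable when backed by runtime profiling and the ability to recompile.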


> In reality, that goal was achieved ;-)

This ... has not been my experience. And I write a lot of high-performance native and Java code.

> For dynamic languages, if you want decent performance, all the known techniques require JIT techniques.

Then adopt a bytecode standard so that it can be JIT'd. I don't believe that NaCl is the end of the conversation, just that Mozilla has consistently prevented the conversation from starting, and refused to participate in it unless it involves Eich's JavaScript as the baseline implementation language.


So, you want a standard program representation that's amenable to JIT compilation. But why, exactly, does it need to be a bytecode format, rather than source code in a standardized language?

Let me reiterate some of the advantages of JavaScript:

* A performance arms race among JavaScript implementers has been underway for nearly half a decade now. JS thus has a head start in that area over any hypothetical language or bytecode format that might be integrated in a browser.

* JS already has first-class access to the DOM -- the API that all the browsers already have -- as well as all the other up-and-coming APIs.

* There are already JS debuggers, and with source maps, these debuggers can even be made to work with other languages that can be compiled to JS. All JS developers benefit from this, as well as all developers who use languages that compile to JS.

So basically, JS has a head start over NaCl, Dart, or any other potential "clean" replacement.

As far as I can tell, your only objection then is that there's something aesthetically wrong with using a quirky high-level language as a compilation target for other languages. To that I can only say that worse is better. We might as well get on with the business of writing great apps with the tools we have.


IE does MS bidding. Chrome does Google's bidding.

So, I hope Firefox stays and I hope that the $1 billion Google gave them doesn't change their priorities.


Just because Google gave them $1 billion doesn't mean it impacts their priorities (see: Apple and the rumored billions Google gives them for being the iOS search engine placement). It also matters what other offers are on the table. I bet Microsoft would be willing to pay plenty for Bing placement in Firefox. Both Mozilla and Microsoft clearly considered the possibility seriously (remember Firefox with Bing?).

Also remember that, initially, Firefox's Google contract expired without being renewed; the renewal came only a few weeks later. At the time there was plenty of speculation that Google was letting Mozilla twist in the wind. Considering how much Google came back with, it certainly looks like any twisting went in the other direction.


That's why I said I hope; money does have a way of corrupting things. Sometimes it starts as a noble thing: "let's do this one thing because it's a necessary evil..."

(see: Apple and the rumored billions Google gives them for being the iOS search engine placement)

Yeah but Google is trying, hoping and praying to destroy Apple's 50-Billion-in-profit business.


I disagree with your last statement. The most convincing explanation I've seen of Google's motivations with Android is that Android is a hedge in case Google is locked out of other platforms. They make money from people using Google services, regardless of whether the services are used from an iPhone, iPad, or Android device. Google and Apple certainly compete in many areas, but Google isn't trying to "destroy" Apple's business, nor is that even realistic.


I'm pretty sure we don't get anywhere near a billion dollars from Google. I haven't paid attention to our financials in a while, but the most recent public financial documents from 2011 say "Mozilla’s consolidated reported revenue (Mozilla Foundation and all subsidiaries) for 2011 was $163M (US), up approximately 33 percent from $123M in 2010."

http://www.mozilla.org/en-US/foundation/annualreport/2011/fa...


I thought it was $300 million or so a year, for three years. http://www.zdnet.com/blog/btl/google-paying-mozilla-300-mill...

Personally I was not happy since it makes it harder for any competing search engine and we sure do need competition.


A billion dollars over three years is a bit different. That may be the number, I have no extra information here. It may also be made up, I don't really see any useful sources.


Clearly Mozilla matters to Brendan Eich because without it he would struggle to find somewhere to match his at least $500k/year[1] pay package?

Mozilla's board are accountable only to themselves. It's a personal playground and cash machine for Baker and Eich.

[1] http://thetruthaboutmozilla.wordpress.com/2007/12/22/analyzi...


I know, don't feed the troll, _ad hominem_ arguments are fallacies on their face, and all that.

But so that there is no confusion on the part of readers who haven't been around the block, or around the valley, and therefore might be taken in by your libels, here's a personal testament:

You (byefruit) may be young, and/or not in my Silicon Valley job market. Look at my peers of around the same age as me. Many are independently wealthy. Those at executive or C-level (Chief of...) positions make similar salaries, not counting stock options from public companies which generally remunerate them to a much greater degree.

I'm not looking for a new job, but I get head-hunter contacts often enough. From the non-startups, the promised packages are as good or better in terms of salary. Doing a startup would mean a reduction in exchange for more founders' equity.

I have had invitations and targeted recruiting pitches from Google starting in 2002 and running up to this year. Had I gone in 2002, I would be set for life. I'm not independently wealthy, in spite of a decent salary for someone at my age and level.

Getting Firefox out, restarting JS standardization, helping build Mozilla, competing with Chrome, starting Mozilla Research (http://www.mozilla.org/research/), launching Firefox OS -- these add up to hard work, as hard as I've ever done save for the crazy Netscape days in 1995-6. This is especially true lately with lots of travel, which is not easy on my wife and small children.

Meanwhile, open source means living in a fishbowl, and Mozilla with its mission and people-first social movement roots means finding a business structure that doesn't loot our brand or screw volunteers by trying to cash out via an IPO or acquisition.

The structure that we chose was a U.S. 501(c)(3) Foundation, with IRS approval up front, but then with IRS reneging after a couple of years, followed by a fight to deny us our status and tax our early revenue that had been declared tax-free at the time by that same agency.

In the course of this "package audit", I had the joy of a personal IRS audit, which I passed cleanly (I defended myself). And Mozilla prevailed on the question of its public benefit status.

I don't live in Atherton or Palo Alto. I don't drive a fancy car. I'm in a house built in 1960 that was enlarged in the '70s but is modest by any standard (and very messy right now!).

So, do you really think I am just in this for the money?

This is not a boast, it doesn't make me great or even good. Like most who have some years of sustained work and credit/blame on their backs, I am little by myself. I'm much less than some truly great people who work for a good cause for no more, and often a lot less, than I make.

I am 100% sure that I could not persuade most of Mozilla's top talent of anything if I were merely an empty suit adding no value and sucking a big salary.

So it is just galling to hear your spin on thetruthaboutmozilla, which by the way seemed in that 2007 piece to say that I was worth every penny (not that I'm giving it any credence :-|).

If I'm right that Mozilla matters, then it matters no matter what I make or where I go. If I'm valuable to Mozilla, then my staying will help. I have to consider that too, in negotiating compensation.

"Know thyself" is as important now as it was at Delphi in ancient Greece. I'm not at Mozilla to make money. I'm here to make the web better for everyone, users, developers, your parents -- without having to face business-agenda or personal-financial conflicts of interest.

Aristotle and Jesus agreed on this point regarding money (Mammon) vs. "the good" (God): you can't serve two masters.

So, enough about me. Whom or what do you serve, and why?

/be


I serve False Humility!



