I have to say though, with the introduction of the GGC, and all of the improvements to the JS runtime recently in Firefox, that it has gotten much faster.
Electrolysis should bring "smoothness" to the entire UI as one site should not be able to halt the UI thread for the rest of the browser any more.
I tried it myself. Most of the bugs explained on that wiki page aren't actually occurring (I guess they were fixed in the meantime), but I noticed that creating and destroying tabs was much, much slower, and that made me disable it.
There are certainly some performance things to iron out. Right now, we're just trying to make the base browser functions work properly, and then we'll start tackling the performance problems.
I'm looking forward to Electrolysis, since having only one process is what makes me use Chrome: the renderer hangs a lot, especially with complex pages (such as TweetDeck for me), and when that happens, the whole browser hangs.
WARNING! If you run Firefox Nightly to test Electrolysis (e10s), you should not share your e10s-enabled user profile with other Firefox release channels: Aurora, Beta, or Release! The Nightly channel has many e10s bug fixes that are not fixed in Aurora, Beta, or Release.
I'm really glad Mozilla has been working on this. A lot of comments here talk about UI responsiveness being the key benefit, but another huge win is that it limits the damage a browser zero-day can do. With this model, a compromised tab will no longer be able to directly read the contents of other tabs, since the other tabs live in separate address spaces.
This of course assumes a compromised tab can't go on to compromise the browser kernel (i.e. the process that manages tabs and shared tab state) or trick the kernel into giving it unauthorized data from other tabs. However, formally verifying that a kernel implementation prevents this is feasible in practice [1].
> With this model, a compromised tab will no longer be able to directly read the contents of other tabs, since the other tabs live in separate address spaces.
... assuming those tabs are cross-origin.
> This of course assumes a compromised tab can't go on to compromise the browser kernel
Also assumes that a compromised tab can't go on to compromise the OS kernel.
Not sure why this has to be a condition. Perhaps tabs loading pages from the same origin will both have read/write access to some shared data in the browser kernel (like the site's cookies), but they still run in separate address spaces regardless. A compromised tab won't be able to directly access another tab's RAM, as is the case with single-process browsers.
> Also assumes that a compromised tab can't go on to compromise the OS kernel.
Very true. However, my argument was that a multi-process model limits (but obviously does not eliminate) the impact of zero-days. In the single-process model, the attacker could compromise any tab and have all tab state available with no additional effort. In the multi-process model, the attacker would have to compromise the right tab, compromise a different tab and trick the browser kernel into performing the requisite operations, or somehow bypass the OS's memory protection. Each of these requires more work than before.
Sure, tabs can load static assets from a read-only cache instead of re-fetching everything from the origin (be it an in-browser cache run by the browser kernel, or a Web cache somewhere between you and the origin). But surely, the tabs run private instances of the layout engine and javascript VM when they process the page, no? When would it ever make sense for multiple tabs to directly access each other's runtime state?
I'm not sure that your example prevents the implementation of per-tab javascript VMs. Wouldn't the browser be designed such that its kernel mediated all tab-to-tab runtime state queries? Then, tabs wouldn't directly access each other's state, and the kernel would interpose security policies (such as same-origin-only requests) between tabs regardless of a tab's behavior. To keep this arrangement transparent to programmers, the tab's javascript VM would map access to shared state (like a window's opener) onto the appropriate call to the kernel, obviating the need for direct inter-tab memory access while preserving compatibility.
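To make the mediation idea concrete, here's a toy Python sketch (all names hypothetical; nothing here is real Firefox or Chrome code): the kernel owns the shared state and applies a same-origin policy to every request, so tabs never touch each other's memory directly.

```python
# Hypothetical sketch of a browser "kernel" mediating tabs' access to shared
# state (cookies here). Tabs ask the kernel instead of reading memory directly.

class BrowserKernel:
    def __init__(self):
        self._cookies = {}      # origin -> cookie jar
        self._tab_origin = {}   # tab id -> origin that tab is displaying

    def register_tab(self, tab_id, origin):
        self._tab_origin[tab_id] = origin
        self._cookies.setdefault(origin, {})

    def get_cookies(self, tab_id, origin):
        # Policy check: a tab may only read cookies for its own origin.
        if self._tab_origin.get(tab_id) != origin:
            raise PermissionError(f"tab {tab_id} may not read {origin} cookies")
        return self._cookies[origin]

kernel = BrowserKernel()
kernel.register_tab(1, "https://example.com")
kernel.register_tab(2, "https://evil.test")
kernel.get_cookies(1, "https://example.com")["session"] = "abc123"

print(kernel.get_cookies(1, "https://example.com"))  # allowed: same origin
try:
    kernel.get_cookies(2, "https://example.com")     # denied: cross-origin
except PermissionError as e:
    print("blocked:", e)
```

In a real multi-process browser the calls would cross a process boundary over IPC, but the policy-enforcement point is the same: the check lives in the kernel, so a compromised tab can't skip it.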
Layout and compositing need to be cross-origin; the Web implicitly allows cross-origin resource loads; and (as was raised above by pcwalton) browsing contexts like windows and frames need to function as if they're running on the same thread and can potentially be shared. All of that is to say that all of the potentially shared state makes it extremely difficult to bind a web origin to a process for security purposes. It's a hard enough problem that a team of Chrome engineers has been working on it for a few years, and are only now approaching an alpha implementation.* And that's many years after Chrome was entirely multi-process with sandboxed web renderer processes.
That would break the Web, because you would introduce data races into JavaScript. (Think about the granularity of synchronization on the objects you would need.)
I was wondering about the benefits of this. Supposedly having tabs in different processes should improve performance of the browser in general, as slow running scripts in one tab process won't slow down the main UI process. I have come across this problem before, including the whole browser freezing from some slow JS. But does this need to be done with separate processes? Surely running tabs in a different thread to the UI should solve this issue too?
I'm a bit concerned about the effect this could have on RAM usage and plugin compatibility - the linked wiki page already lists two plugins that I can't live without (NoScript and Tree Style Tabs) as being incompatible. Hopefully that will be fixed before this becomes default in a release.
Part of it is probably that browsers use so much shared global data that it is easier to just make another process. There are also fault isolation and security benefits you get from splitting the web content into a process that can have a tighter sandbox policy.
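The fault-isolation point can be seen in a minimal sketch (Python's multiprocessing standing in for the browser's process model): a hard crash in the child kills only that process, whereas a crash in a thread sharing the browser's address space would take down everything.

```python
import multiprocessing as mp
import os

def content_process():
    """Stand-in for a renderer process running one page's scripts."""
    os._exit(1)  # simulate the renderer crashing hard

if __name__ == "__main__":
    p = mp.Process(target=content_process)
    p.start()
    p.join()
    print("content process exit code:", p.exitcode)   # 1: it died alone
    print("UI process still alive; only that tab is lost")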
I got the option to enable this the last time my Nightly installation was updated. It's been working well so far, but plugin support leaves a lot to be desired. On OS X Yosemite, full-screen Flash video crashes the entire browser along with the tab. On OS X Mavericks, it looks like the native Firefox PDF viewer is no longer available, and the Adobe plugin was not working at all in its place.
I'm excited about e10s though, I think this is definitely the right way to go for all browsers!
"Electrolysis" finally works? Several years ago, Mozilla gave up on it. This is progress.
Now the question is how tightly locked down the page processes are. Can they access files? Or is all file and persistent state access via some separate process? Done right, each page process should execute in a jail, lacking the privileges to alter external files or persistent state.
This is a main reason I don't use Chrome. Since each tab runs in a separate process, there is no easy way to tell how much total memory is being used. Despite all the improvements, Firefox still has some pretty bad memory leaks and often runs over a gig of memory with just 5 or 6 tabs open. If they go this route, memory leaks will be harder to spot.
>This is a main reason I don't use Chrome. Since each tab runs in a separate process, there is no easy way to tell how much total memory is being used.
Seeing "how much memory is being used" is a major use case for you using a browser?
If a browser works fast and doesn't leak memory, one shouldn't care exactly how much memory is used.
> If a browser works fast and doesn't leak memory, one shouldn't care exactly how much memory is used.
That's a big assumption. Realistically speaking, there's only FF, Safari, and Chrome. FF leaks like crazy (sometimes on its own, but sometimes it could be a plugin). Safari doesn't support all the plugins I use, and Chrome may, but I don't like its dev console layout. And if you can't see how much memory is being used, you have no way of telling whether a newly installed plugin just caused a memory leak (unless it totally prevents you from using the browser).
> Seeing "how much memory is being used" is a major use case for you using a browser?
Yes, certainly. My development machine has 8 GB of RAM. On a number of occasions I've recovered from a marginal memory problem/commencement of disk swap by dumping Chrome. I should have more RAM, but still.
Chrome uses a lot of memory. Unless you have a ridiculous amount of RAM, there will be times when you'll notice the load.
There is an undocumented about:config pref "browser.ctrlTab.previews" that enables a tab preview switcher. The switcher works with e10s, but the tab previews do not. You can watch e10s bug 863512 for more info.
After the versioning scheme, the functionality, and the UI, they're now copying the sandboxed process model from the Chromium project. I don't think there's much time left before you won't be able to tell Firefox and Chromium apart anymore.
Is this really different than anything else in "pop programming culture?" Consider the spread of various paradigm practices in the different runtimes.
And this is not a condemnation of the practice. Just saying it really isn't that different than most anything else.
I will say that it is not that troubling. Far more troubling would be either a) multiple completely inconsistent browsers dominating the field or b) a homogenous ecosystem in the web. At least, I think so.
From what I've heard from both Google and MS people, Chrome had it first, it's just IE announced it (and shipped it in a beta) first. Alternatively, if you don't want to trust word of mouth — there's no way the Chrome team did their entire sandboxing implementation in the five months between IE announcing it and the announcement of Chrome, it's just too much work.
Yay, now firefox can waste just as much memory as Chrome. Guess 1GB for the whole browser wasn't enough; they had to go for ~100MB/tab.
Got to keep up with Chrome after all.</s>
If I use Chrome for normal browsing (I don't normally, because its feature set is poor, customisability is a dirty word, and I don't like the privacy risk), it can rapidly slurp 4GB or more of memory (with perhaps two key addons, AdBlock and DoNotTrackMe). Even Firefox at its worst rarely if ever goes over 1.5GB, even with very heavy customisation and plenty of privacy/adblocking/etc. addons.
>Performance would improve because the browser UI would not be affected by poor performance of content code (be it layout or JavaScript).
What about by the overhead from task switching hundreds of processes and mapping all the memory when each one takes up 50MB or more?
Multiprocess Firefox currently only uses one process for all tabs (plus a browser UI process). The tab process will be sandboxed, so users will get some of the security benefits of process separation without the memory overhead of a process per tab.
If you want to experiment with more than one tab process, you can tweak the about:config pref "dom.ipc.processCount".
Whichever one it is for chrome I have, it isn't ABP, it's a generic named Adblock. I don't use ABP due to the developer accepting money to whitelist google adverts.
AdBlock is significantly worse than ABP memory-usage-wise (edit: on Chromium; I didn't benchmark elsewhere). I believe this is also the case CPU-wise, from observing Task Manager during benchmarks.
I have 16GB of RAM in my machine; I'd rather use 4 of that for a responsive browser than 1.5 for something that is constantly locking up with more than a few tabs. FF might be better for older, low-end machines, but nowadays low-end machines ship with 4GB and modest machines have 8.
In my main rig, I have 8 currently, and would need to replace it all to upgrade (4x 2GB DIMMs) and presently can't afford to build a new one for at least a few months. I do not want a browser to waste 4GB where it would be 1 with all the bloat stripped out.
Great, now what happens when they do a Google and stealthily remove that feature?
(cf: The option to have the tab close button on the right, the status bar, the option to disable javascript, the option to have tabs in a sane place instead of on top...)
Somehow, I doubt disabling multiple processes will be within the scope of an addon.
You can switch to Palemoon, Waterfox, Cyberfox, or one of the other forks that still run all the extensions?
Firefox does not lack for options; I'd say it is the *nix of the browser world.
P.S. You can also solve all of the tab issues with TabMixPlus and the UI issues with ClassicThemeRestorer - my Firefox 33 looks virtually identical to Firefox 4.
Yeah, I've already done that - Status4Evar, Classic Theme Restorer, Old Location Bar, Switch To Tab No More, Show Go!, etc. My point is I shouldn't need that many addons to get a normal browser out of the mess firefox is becoming.
This makes sandboxing possible, which is a huge security win and the reason I still use Chrome where I can. RAM is cheap and browsers are 64-bit, so a bunch of extra memory in the name of security is not a big deal really. At least that's how I see it.
I'm curious how huge of a security win it really is. The vast majority of exploits that I hear about would still be possible. Especially since phishing is by far the weakest link in any model.
I mean, sure, it sounds great to say things are sandboxed. But when the actual exploits either a) root the machine, or b) targeted the user directly, it seems that protecting tabs from each other really doesn't do much.
It's not about protecting the tabs from each other; when you separate the tasks into different containers, you can apply strong limits to what each can do, which makes it harder to "root the machine".
I see, this is more about sandboxing the renderer. Not necessarily sandboxing the tabs. Right? Curious if one really required the other.
And, still, kind of amusing that the entire point of the browser is that it is sandboxed from the whole computer. Seems if we just restricted what the browser was capable of as a whole, we'd be there.
Trying to restrict the browser as a whole doesn't work for a couple reasons:
1. The browser as a whole needs to have permission to do quite a few things, including reading from and writing to the filesystem (for uploading and downloading files), talking to your system's graphical environment so it can display windows, and accessing arbitrary hosts on the network so it can access web servers. It's just not possible to meaningfully sandbox something requiring so much access. Individual browser components, on the other hand, can be designed to do very specific tasks and are thus easier to isolate.
2. You want to protect not only your system from a browser exploit but also other parts of the browser. A site that exploits a browser vulnerability shouldn't be able to read your cookies for another site.
These reasons imply that you need to focus on isolating and restricting components inside the browser instead of the browser as a whole.
Your last sentence is a better worded version of what I meant. That it is less that the tabs are isolated and sandboxed, and more that components of the browser are.
It is way harder to exploit a machine through a properly sandboxed process. Sandboxing restricts the process's access to the filesystem and network. On Linux, for example, seccomp can restrict the set of system calls the process can make, which further reduces the attack surface greatly. So to exploit an OS vulnerability through a sandboxed process, you also need to exploit a vulnerability in the sandboxing itself. That's significant.
Great, buy me another 32GB then please</s>. People shouldn't have to have high end gaming rigs to run a sodding browser.
Mozilla (and Google) are deluding themselves in thinking that their particular software is the be-all and end-all of a computer, and should be introduced to this strange concept called 'multitasking'.
Windows, and multiple Linux distros. Windows needs >1GB to run smoothly, while most Linux distros get by in <1GB; either way, everything feels very sluggish if the browser can't actually get 1GB when it needs it.
It sucks that you're getting downvoted. I totally agree that this "RAM is cheap" thinking is problematic. Sure, if you only ran one program at a time on a computer, it would make perfect sense. But when I have a finite amount of RAM to spread among a multitude of applications, screaming "RAM is cheap" is bollocks. Never mind that on my current laptop I'm already at the max it can physically support, which means if I want more RAM, I have to buy a whole new machine.
Here's a thought for developers: Next time you find yourself saying "RAM is cheap" (or any variation thereof) thwack yourself about the head multiple times with a big stick, then go rinse your mouth out with soap and water.
Sometimes RAM is cheaper than engineer salaries. Sometimes it isn't. It depends on the problem domain, scale, and what you're trying to optimize for. Debating this just seems pointless.
In any case, the number one evaluation criterion for browsers is not the RAM profile.
If performance matters that much, perhaps you should instead be asking why we are writing applications and UI in an ugly evolution of SGML.
Before you fault the Moz devs, ask yourself why we have CSS for high DPI images and why designers embed videos into website headers. Maybe your needs aren't at all times reflective of the majority. The browser serves the spec. The spec serves every case you care to imagine. Because that's what we evolved a doc format to do.
(Just imagine a parallel universe where it was instead MS Word that evolved into a facebooking client!)
I'm not faulting the Moz developers per se, just saying that - as a group - developers should not just throw up their hands, shout "RAM is cheap" and then be totally cavalier about how their programs use RAM. It's a refrain that has become, IMO, all too common, and I think it's harmful.
Sure, Firefox, and every other program should use as much RAM as it needs. But we should be mindful of keeping that need as low as possible, at least for any program that falls into the category of "runs in a multi-program environment and probably won't be the only program running from a finite RAM pool".
I run Firefox on Fedora 20, on a dual-core AMD64 from 2005. It worked OK with 1GB, but as indicated, things went south when it ran out of memory. I upgraded to 2GB and usage rarely goes above 1, so everything is fine now. That said, I'm not one of the people who use tabs instead of bookmarks. My expectations are probably lower than many people's too, but I am all too familiar with content in one tab destroying the performance of the browser as a whole. So I'm all for this e10s thing.
I think performance is often overblown on this as well. However, I don't think the overhead from task switching will really enter into it. If you have hundreds of tabs open that are trying to do work, that is work that was having to be done regardless. Meaning they would already be thrashing. Now, that will just be in a different way.
Otherwise, if you are idling most of those processes, they should be the same as before. Just idle.
Memory, on the other hand, I think will go up. Not sure by how much, though. I would think the rendered content would be the largest consumer of memory. And again, if you have hundreds of tabs open with high memory usage per tab, that should already be a problem. Right?
Talking about "memory usage per tab" doesn't make much sense unless you talk about what's in the tabs. It's rather like talking about "memory usage per application" on your desktop and not mentioning whether the application is ed or OpenOffice.
Suspending JS execution in invisible tabs seems highly unlikely to be web-compatible; you wouldn't want YouTube or Spotify to stop playing just because you focused a different tab. On the other hand, with technologies like requestAnimationFrame, we can make it possible for well designed applications to work well when in the background.
Right, this is just memory, though. And I am curious on whether or not this counts shared memory correctly between tabs.
My point for performance, though, was mainly that if you have 200+ tabs that are actively doing something, then you are thrashing even in firefox. It isn't like they just do their work for free depending on the process model.
Also, 200 is not exactly a large N when we are too worried about scheduling, is it? (That is, unless all 200 are cpu bound, in which case, again, firefox would already be thrashing.)
Because, as I said earlier, "you're going to feel it." If your tabs are allocating in the same process address space as each other, accessing your current tab's data is likely to cause swapping. The key point is that electrolysis and chromium-based browsers can page inactive tabs "without affecting performance of the current tab."
This isn't really intuitive, though. When you switch to a tab, either you page in that tab's memory within the current process, or you page in that tab's process and its memory.
That is, by going to a "per process" approach, the amount of memory that gets paged in almost certainly went up. No?
The amount of memory that gets paged in went up, but the whole point is that you never have to page any of the current tab's memory while you're using it, unlike with the monolithic 1 GB Firefox process today.
I feel like I must not have been clear earlier because I have made just a single point, and both of you are discussing things not related to that point. Let me try one more time: it's all about performance of the current tab.
I still don't see the scenario you're describing, though. Let's say you have two tabs open, A and B. Then some set of pages will have memory for A, and some set will have memory for B. If you're not using A, then the operating system will notice that those pages have not been used in a while and if under memory pressure it will page them out. This won't affect pages with memory for B.
The only issue would be if most of the memory allocations are under your system's page size (typically 4096 bytes) and distributed randomly, so that a lot of pages have data structures associated with multiple different tabs. But I think that's unlikely, and even if it is true, couldn't it be resolved by making your allocation strategy tab-aware (e.g., by giving each tab its own malloc arena, which would be way simpler than splitting the browser into multiple processes).
You get it. The one thing you're missing is that application-aware allocation is hard and something that Firefox engineers continuously have to work on (http://blog.pavlov.net/2007/11/10/memory-fragmentation/), even as they add new features and performance caches. The electrolysis/chromium solution makes the problem disappear entirely (along with some security problems) without any possibility of regression.
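A toy simulation of the paging argument (hypothetical 64-byte allocations and a fake bump allocator, nothing like a real malloc): with interleaved allocation, every page holds data from both tabs, so evicting an idle tab's pages necessarily evicts some of the active tab's data too; per-tab arenas keep the pages disjoint.

```python
PAGE = 4096  # typical OS page size in bytes

def pages_touched(addresses):
    """Set of distinct pages covered by a list of allocation addresses."""
    return {a // PAGE for a in addresses}

# Interleaved bump allocator: tabs A and B allocate 64-byte objects
# alternately from one shared heap, as in a single-process browser.
heap_top = 0
interleaved = {"A": [], "B": []}
for _ in range(256):
    for tab in ("A", "B"):
        interleaved[tab].append(heap_top)
        heap_top += 64

# Tab-aware allocator: each tab bump-allocates inside its own arena.
arena_top = {"A": 0, "B": 1 << 20}  # two disjoint 1 MiB regions
arena = {"A": [], "B": []}
for _ in range(256):
    for tab in ("A", "B"):
        arena[tab].append(arena_top[tab])
        arena_top[tab] += 64

ia, ib = pages_touched(interleaved["A"]), pages_touched(interleaved["B"])
aa, ab = pages_touched(arena["A"]), pages_touched(arena["B"])
print("interleaved: pages holding data from BOTH tabs:", len(ia & ib))  # 8
print("per-tab arenas: pages holding data from BOTH tabs:", len(aa & ab))  # 0
```

Separate processes get the arena property for free, since each tab's heap lives in its own address space, which is part of why the multi-process design sidesteps this whole class of fragmentation work.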
Right, and the counterpoint is that this should not help there. If the other tabs are idle, then they are idle whether they're in-process or not. If they are all CPU-bound doing stuff, then your machine is again in trouble, separate process or not. If they are all constantly thrashing for resources, you are still in trouble.
Why would you think the current monolithic browser would page out a tab that was actively being used? Why would this not also happen in the "per process" approach?
That is, how would this specifically help? If you are actively using the memory for the current tab, because it is the current tab, why would it be swapped out under the monolithic case where it would not in the per process case?
I'm perfectly willing to accept there is a scenario I am not considering. I just don't see it, right off.
Because the tab you're using has barfed its data all over the heap, so you can't keep all its data paged into main memory. Its data is mixed together with the other tabs' data.
At face value, I don't see why this should necessarily be the case. That is, why would a tab "barf its data all over the heap?" More directly, why couldn't, as the sibling said, this have been addressed with a different allocation mechanism.
I'm assuming this has come up a fair bit. Any good links to read up on this?
This is the must-have feature that keeps me using Firefox -- especially when I'm doing research, which has a naturally tree-like pattern. Apparently, from the Chromium bug, it's a dealbreaker for lots of other people as well: https://code.google.com/p/chromium/issues/detail?id=344870
It helps that Firefox offers to change to an existing tab when you try to open the same page in a new tab.
Another suggestion is to not manage it. I create tabs all the time, and often have many similar tabs. There's no need to manage it, only to clean up once in a while.
(I like many tabs. A few weeks ago I performed some tab-cleaning -- 550 tabs were a bit much, as it made Firefox start slower.)
Personally, I just use it for things I would like to read at some point, instead of filling up my bookmarks with 50-10 entries every day.
>It helps that Firefox offers to change to an existing tab when you try to open the same page in a new tab.
Disabled that as soon as it was dumped on me. There is an almost infinite list of reasons to have multiple copies of tabs.
Honestly, I cleanup every week or two, and it works fine. Windows are by category of different things I do, and I tend to leave frequent sites open all the time.
The TabPolish Chrome extension implements similar behavior (in a slightly different way; it doesn't hook into the suggestion menu, and duplicate detection is per window, rather than per browser instance).
I find it essential to manage the mess that is my Chrome tabs.
I deal by creating multiple windows, ideally one per general topic/use case. This works great with Chromium because you can easily drag multiple tabs between windows to organize things, and the shrinking tab bar encourages you to close things you don't need. Doesn't work so well in Firefox, where you have to manually drag each tab over one by one, doing so is kinda glitchy, and the scrolling tab bar makes it easier to just fill a window with a ton of unrelated tabs.
Although I don't get anywhere near 200 tabs, I hit the same type of UI management issue. I use contextualized "sessions" using panorama in Firefox (Ctrl-Shift-E). I have 10-20 tabs per context, and somewhere between 6 and 10 contexts. Between Panorama and TreeStyle tabs (a plugin), everything is neatly organized.
Panorama is a life-saver if you have to context-switch between projects regularly.
TreeStyle, really, I use more as a means to move screen real-estate to horizontal usage on my laptop's 16:9 screen. Tab organization is a side benefit.
In the big picture, process separation may actually help memory consumption in the long run. When a tab closes, its process closes, and the OS ensures that all memory for that process is freed as well.