In this article, Mozilla introduces new HTTP headers, COOP and COEP, that can be used to opt in to cross-origin isolation and thereby gain access to features that would otherwise be dangerous under Spectre-style side-channel attacks.
IMO, this Google doc is a better explainer of COOP and COEP, how they work, and why they help: https://docs.google.com/document/d/1zDlfvfTJ_9e8Jdc8ehuV4zME...
> "We now assume any active code can read any data in the same address space. The plan going forward must be to keep sensitive cross-origin data out of address spaces that run untrustworthy code, rather than relying on in-process checks."
I may sound like a grumpy old man, but I really, really dislike this non-stop influx of complexity into every level of the stack. This is a clear case of an abstraction leaking. Microprocessor vulnerabilities should never lead to changes in a high-level application protocol like this.
The work needed to implement an HTTP server is growing and growing and growing. There was some speculation a few days ago on why this is happening, and why big companies benefit from it. I don't think there's any conscious conspiracy anywhere, just a lot of people trying to make a name for themselves. (There was a discussion on this here: https://news.ycombinator.com/item?id=23833362)
But I just hate this growing complexity everywhere. HTTP can and should be much simpler than it's becoming.
None of these changes impact HTTP server code. Pages that run javascript might want to set some new headers, but setting headers is something that web servers always have to do. Changing a site config to send some new headers is work for a web page creator, not the HTTP server developer. And the changes are backwards compatible, so you don't actually need to do anything if you don't want to.
This new complexity is about javascript execution, and creating a browser that can run javascript has never been simple!
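For what it's worth, the entire server-side change here is two extra response headers. A minimal sketch in plain Node.js (the port and page content are made up, not from the article):

    const http = require("http");

    http.createServer((req, res) => {
      // Opt the page into cross-origin isolation:
      res.setHeader("Cross-Origin-Opener-Policy", "same-origin");
      res.setHeader("Cross-Origin-Embedder-Policy", "require-corp");
      res.setHeader("Content-Type", "text/html");
      // crossOriginIsolated reports whether the opt-in took effect
      res.end("<script>console.log(self.crossOriginIsolated)</script>");
    }).listen(8080);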
Can I one-up your rant? We're in an era where complexity has gotten so out of hand, our phones catch on fire under the computational burden of sending text messages!
> I don't think there's any conscious conspiracy anywhere
Nor is it required, see evolution. Completely non-directed, but the beneficiary still benefits which reinforces the change.
It feels like some kind of selection pressure where big companies pollute the world with complexity to remove competition. This can be intentional of course, but even if it's an accidental random mutation then the result is the same.
It's better performance for one web site, at the expense of whatever may be in other tabs or even outside the browser.
I really don't want to grant that.
Web browsers are failing badly at resource controls. I don't want to hand my whole computer over to a greedy web site. I expect my browser to stop the abusive web site resource consumption, not enable it.
While I agree that browsers currently exercise poor control over resources like CPU time and memory, I'm not sure that aiming to cripple all sites equally is the right tactic here. It seems like better "scheduling" and other resource management needs work regardless of whether we expose performance tools like this to websites.
Notably disappointing: Cross-Origin-Opener-Policy must be set on a page to isolate it, as the default value is not safe.
i.e., if I have a page with sensitive client-side data, I need to set this header to prevent a page on a different domain from opening a popup to that page and then doing a Spectre-like attack on my web page to exfiltrate that sensitive client-side data.
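For reference, the possible values, as I understand them (simplified one-liners, not from the article):

    Cross-Origin-Opener-Policy: unsafe-none              (the default; a cross-origin opener keeps a handle to your window)
    Cross-Origin-Opener-Policy: same-origin-allow-popups (isolated from cross-origin openers, but popups you open yourself still work)
    Cross-Origin-Opener-Policy: same-origin              (fully severed from cross-origin openers and popups)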
I don't actually entirely understand the purpose or effect of Cross-Origin-Embedder-Policy. I thought browsers already blocked cross origin requests without CORS headers in the response that allow it. Does this header UNDO that by default?
> I don't actually entirely understand the purpose or effect of Cross-Origin-Embedder-Policy. I thought browsers already blocked cross origin requests without CORS headers in the response that allow it.
CORS applies to XHR/fetch APIs, not browser loading of subresources specified in the HTML of the page.
COEP optionally extends CORS-type protection to subresources.
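A quick illustration of the difference, with made-up hostnames:

    // On a page served from https://a.example:

    // fetch() is subject to CORS: this fails unless b.example responds with
    // an Access-Control-Allow-Origin header that permits a.example.
    fetch("https://b.example/data.json")
      .then((r) => r.json())
      .then(console.log)
      .catch(console.error);

    // A subresource like <img src="https://b.example/photo.png"> loads fine
    // with no CORS opt-in at all (you just can't read its pixels back).
    // COEP: require-corp is what makes loads like that require an explicit
    // opt-in too, either via CORS (the crossorigin attribute) or a
    // Cross-Origin-Resource-Policy: cross-origin response header.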
Imagine what the world would be like if we didn't need to worry about bad actors. This is a considerable amount of engineering energy spent just to make this safe.
I know safety isn't all about malicious attacks, but I like to imagine living in a world without the need for locks, passwords, keys, safes, signatures, contracts, lawyers, ... we would probably all be fed and populating the solar system by now.
I have a hard time imagining such a world precisely because being a bad actor is a part of our DNA and the DNA of many, many animals, plants, bacteria, and fungi we interact with. The unintended consequences of a lack of selfishness could be mind-boggling. Think about it this way: Silicon Valley mostly provides the last 5% of the tech needed to make any given product. You know where the rest comes from? Military tech that eventually makes its way to civilian use. Things like semiconductors, the Internet, GPS, rocket propulsion, etc. No conflict == no military == no tech == we wouldn’t be having this conversation. I am definitely not saying we couldn’t all be a whole lot nicer to each other. But I am saying that we wouldn’t be human if that was wired into us.
The military's role in this seems very much rooted in selfishness and adversity, to the extent that a more cooperative species (let's be honest it's not gonna be us) would find plenty of non-military motivation to do similar amounts of basic and applied research.
Maybe in German. I personally regularly curse the fact that I was born a human when I am confronted with uniquely human problems: having to cook, sweating my behind off, having to waste time using the bathroom, experiencing strong emotions like anger or jealousy or envy. These are the things that make us human, but I often find them highly annoying and wish my ancestors had evolved out of some of them.
Your estimation of the military is way off, I suspect.
Hey, these days the military is using consumer-grade iPads, because they are cheaper and good enough.
There are more things in heaven and earth than just the military and SV.
One big example that's often brought up of military tech leading the way is programmable computers. And, yes, in our history that's what happened. But International Business Machines was hot on their heels purely for commercial computing. (If the militaries of the world hadn't blown up so many resources, business would have likely come up with programmable computers a few years earlier, thus completely eliminating the gap.)
I am not saying that any and all tech comes from the military. I am saying that the military is and always has been a bigger driver of innovation than any other industry. IBM derives a large share of its profits from tech originally funded by the military for military purposes. The roads we drive on, the cars we drive, the semiconductors we use, the tampons and pads we use, are all pretty direct examples. Lots are indirect. The military also uses duct tape and drives Jeeps; both were originally invented and built for the military in the 1940s. DARPA is still a huge contributor to what we will be taking for granted 20 years from now. There are other researchers of course, I'm not disputing that. This year the US National Science Foundation has a budget of $7.1 billion, while military research combined totals $59 billion.
Ancient Rome was very military driven. They also had classes of people, which I'd argue is about as selfish as you can get. They could absolutely land on the moon with enough time.
In that world, since you don’t have bad actors, you would likely be ruled by a benevolent king or queen.
A large part of why we have democracy and separation of powers is to protect against bad actors.
But yeah, not having to deal with bad actors can enable massive achievements. The pyramids were built when Egypt was more or less safe from any external invasion and ruled by god-kings who could coordinate massive building projects.
The problem with an enlightened and benevolent ruler is that even if he really is that good (and history uncovers a lot of dirt even for rulers popularly known as enlightened and benevolent), what do you do when they die or decide to quit? Their children could be insane psychopaths unfit to rule (as indeed often demonstrated in history), or there could be a civil war with various factions trying to fill the power vacuum.
With all its faults, democracy has these edge cases handled and makes the elected group of rulers much more accountable, with means to get rid of them if they turn out to be too unfit to rule.
Yeah, it has always been in the thoughts of mankind to have a benevolent, righteous king, even to the point where the natural world and the animal kingdom would be in harmony. If the events of Isaiah 11 ever come to fruition, I imagine we'll cure cancer, have quantum computers, and unlock the mysteries of the universe.
You can still gain efficiency by centralising services. If you have centralised services you need some mechanism to coordinate them and make decisions about how they should work, and those decisions should ideally take input from the people using the services. Even in a no-bad-actors world you still have many different competing interests to balance due to different people's needs and preferences, so you probably want some kind of polling/citizens forum/whatever to gather input for making decisions, and an organisation that looks at all the details and makes the decisions on behalf of the collective.
Or by example: You still want someone to organise weekly garbage collection efficiently and equitably. The government does that.
Regular private companies work pretty well for coordinating and providing paid-for services. Especially something like garbage collection.
Of course, details depend for example on what our 'no bad actor' assumption actually means, and perhaps on what ideas you have about human nature and history.
> Even in a no-bad-actors world you still have many different competing interests to balance due to different people's needs and preferences, so you probably want some kind of polling/citizens forum/whatever to gather input for making decisions, and an organisation that looks at all the details and makes the decisions on behalf of the collective.
Yes, you'll want some kind of organisation. My comment was just that the organisation would probably not look like a monarchy, benevolent or otherwise.
You could also go by something like 'You are not a bad actor if (outside of emergencies) you don't lie, cheat, murder or steal.' Or you can go with something closer to game theory, and emphasise cooperation in something like a prisoner's dilemma.
So you're saying our genes are going to compete and evolve so that we no longer compete and evolve? That's not how it works.
But a less flippant way to put it is that a world without "bad" actors isn't a stable equilibrium. It cannot and will not ever happen. If you need convincing, look to nature. An equilibrium is only formed through a multitude of mixed strategies.
Artificial selection might result in what you desire, but that would be.. unpopular, to say the least.
Nice to see it is eventually going to be re-enabled. However, if Firefox doesn't make it as easy as Chrome, that is what most customers will focus on for applications that make use of threading alongside WebAssembly.
You only need this if you're doing multithreaded WebAssembly. The typical applications that need that (e.g. games) already require a huge amount of infrastructure, so a web server config change is a small addition on top.
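And on the client side the check is simple; a rough sketch of what a threaded-wasm loader might do (the fallback path is obviously app-specific):

    // crossOriginIsolated is only true if the page was served with the
    // COOP/COEP headers discussed in the article.
    if (typeof SharedArrayBuffer !== "undefined" && self.crossOriginIsolated) {
      // Shared wasm memory that can be handed to Workers for threading
      const memory = new WebAssembly.Memory({ initial: 64, maximum: 256, shared: true });
      // ... instantiate the threaded build with `memory`
    } else {
      // ... fall back to a single-threaded build
    }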
Yep, but like rolling an XML web service or posting html forms, that would (be far too boring) || (come with its own set of trade offs).
If we could get an open-source Flash player that was not so insecure, and/or an open-source equivalent to Adobe AIR that spits out web-standard code, we would not have lost the flexibility that Adobe Flash gave us.
We really don't have an answer to losing Adobe Flash, similar to when we lost Visual Basic 6, which enabled spaghetti code yet was so pragmatically useful.
Only in specific situations, where the site is using Apache and has .htaccess files enabled. I would argue that using Apache in the first place is non-optimal, but enabling arbitrary .htaccess files for clients is also a potential disaster.
Then again, I suppose there are enough people out there who just want to FTP up their wordpress code and call it a day, so... ugh.
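To be fair, for that crowd this particular change is just a couple of lines in the .htaccess file (assuming mod_headers is enabled; directive syntax from mod_headers, values from the article):

    Header set Cross-Origin-Opener-Policy "same-origin"
    Header set Cross-Origin-Embedder-Policy "require-corp"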
And boy do they. I used to be a shared hosting administrator, and sometimes people would do stupid things like create 10MB .htaccess files with the subnets of all the countries they wanted to block, and then would call to complain that their site was loading slowly. (Probably not a great idea to parse a config file on every request, but at least the option exists.)
"The system maintains backwards compatibility. We cannot ask billions of websites to rewrite their code."
I don't understand this requirement. Very few sites use SharedArrayBuffer, those few that do probably had to rewrite code to deal with it being disabled.
I also don't understand how cross-origin has anything to do with it either. Either your sandbox works, in that case cross-origin isolation shouldn't matter, or it doesn't work, in which case cross-origin isolation is not a real protection.
Am I missing something here?
Firefox is only maybe 5% of users, and it has other performance problems; if SharedArrayBuffer doesn't "just work" there, then I'm inclined to have those users take the performance hit or use a different browser.
Under Spectre, if the attacker can run SharedArrayBuffer code in your process, even "sandboxed," it can read memory from anywhere else in that process.
So I guess you're right that if the sandbox "works" you don't care about cross-origin isolation, but it turns out that sandboxes don't work if you run multiple sandboxes in the same process.
The mitigation browsers have chosen is to isolate each origin in its own process, preventing other origins from communicating with it. To regain access to SharedArrayBuffer, you have to opt in to this extreme form of cross-origin isolation.
It would be nice to just make the whole web default to cross-origin isolation, but tons of websites rely on cross-origin communication features, and browsers can't just force them all to be compatible with isolation, so isolation has to be opt-in.
How exactly does site isolation prevent cross-origin communication that doesn't rely on SharedArrayBuffer, i.e. the vast majority of use cases? It's just message passing.
I can see that site-isolation is arguably too expensive on mobile and why you might want an opt-in mechanism there, somewhere down the line.
However, I don't think there are good arguments for not just enabling it on Desktop right now, without making developers jump through hoops. Until Chrome enables SharedArrayBuffers on mobile, I have no reason to care anyway.
It doesn't need to, since that communication is consensual: the sender must explicitly send the information, and the receiver must explicitly be interested in it (and can check what origin it is from). The problem with SharedArrayBuffer (with Spectre) is that it allows the "receiver" to read whatever it wants from the other origin, just by virtue of ending up in the same browser context.
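For anyone who hasn't used it, the consensual channel being described is basically postMessage. A minimal sketch with placeholder origins (and in practice you'd wait for the popup to signal it's ready before sending):

    // Sender, on https://a.example -- explicitly chooses what to send, and to whom:
    const popup = window.open("https://b.example/inbox");
    popup.postMessage({ hello: "world" }, "https://b.example");

    // Receiver, on https://b.example -- has to opt in by listening, and can
    // check where the message came from:
    window.addEventListener("message", (event) => {
      if (event.origin !== "https://a.example") return; // ignore anything unexpected
      console.log(event.data);
    });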
Site isolation disables all of it. With "Cross-Origin-Embedder-Policy: require-corp", you can't even embed a cross-site image unless the image's site allows it with "Cross-Origin-Resource-Policy: cross-origin".
Enabling that on desktop today would break every website that embeds cross-origin images, e.g. everybody using a separate CDN for images would be broken.
You're describing how this proposed cross-origin isolation scheme works. I understand that, I don't understand why it is necessary to make it work that way.
Chrome has been doing site isolation with multiple processes for a while; it "just works" and it doesn't break sites.
Site isolation and origin isolation are separate concerns. In the "origin isolation" model, you need to ensure different origins are in different processes, and that their data doesn't leak from one to the other. In site isolation, you only care about tabs not being able to communicate with each other.
Also, you seem to be missing something: Chrome is going to implement the same set of headers, with the same set of restrictions when they are applied. This isn't an arbitrary Firefox decision; every web browser is expected to follow suit. See the various mentions of Chrome in https://web.dev/coop-coep/
Chrome’s site isolation doesn’t solve the “image from another origin” problem. Those still exist in the containing origin process’s memory. It solves the “frame from another origin” problem, which is the more acute issue but not the only one.
Why? It's a straightforward matter. You can have the conventional behavior with the necessary limitations to which everyone has adapted, or you can opt in to a modified environment with new rules that would break some sites but provides additional capabilities.
> Am I missing something here?
Yes; the clearly explained rationale is somehow being missed. The sandbox is an OS process, as necessitated by Spectre. Without the new opt-in capability, content from multiple origins -- some of them hostile -- is mixed into a process, and so the shared memory capabilities must be disabled. This new opt-in capability creates the necessary mapping; when enabled, content from arbitrary origins will not be mixed into a process, and so the shared memory and high-resolution timer features can be permitted.
> Without the new opt-in capability, content from multiple origins -- some of them hostile -- is mixed into a process, and so the shared memory capabilities must be disabled.
That's an arbitrary requirement on the part of Firefox developers, and it's a security issue in its own right. Any of the numerous exploits that regularly show up in Firefox could take advantage of this, not just Spectre.
Chrome has site isolation enabled by default, at least on desktop; I don't see why Firefox shouldn't follow suit.
This is a concern somewhat orthogonal to site isolation as implemented in Chrome.
Say you have a web page at https://a.com that does <img src="https://b.com/foo.png">. That's allowed in browsers (including Chrome with site isolation enabled), because it's _very_ common on the web and has been for a long time, and disallowing it would break very many sites. But in that situation the browser attempts to prevent a.com from reading the actual pixel data of the image (which comes from b.com). That protection would be violated if the site could just use a Spectre attack to read the pixel data.
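That existing guarantee is easy to see in code; a rough sketch (hostnames as in the example above):

    // On https://a.com:
    const img = new Image();
    img.src = "https://b.com/foo.png"; // no CORS opt-in from b.com
    img.onload = () => {
      const canvas = document.createElement("canvas");
      canvas.width = img.width;
      canvas.height = img.height;
      const ctx = canvas.getContext("2d");
      ctx.drawImage(img, 0, 0);        // displaying the image is allowed
      try {
        ctx.getImageData(0, 0, 1, 1);  // reading the pixels back is not:
      } catch (e) {                    // the canvas is "tainted" and this throws
        console.log("blocked:", e.name); // SecurityError
      }
    };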
So there are three options if you want to keep the security guarantee that you can't read image pixel data cross-site.
1) You could have the pixel data for the image living in a separate process but getting properly composited into the a.com webpage. This is not something any browser does right now, would involve a fair amount of engineering work, and comes with some memory tradeoffs that are not great. It would certainly be a bit of a research project to see how and whether this could be done reasonably.
2) You can attempt to prevent Spectre attacks, e.g. by disallowing things like SharedArrayBuffer. This is the current state in Firefox.
3) You can attempt to ensure that a site's process has access to _either_ SharedArrayBuffer _or_ cross-site image data but never both. This is the solution described in the article. Since current websites widely rely on cross-site images but not much on SharedArrayBuffer, the default is "cross-site images but no SharedArrayBuffer", but sites can opt into the "SharedArrayBuffer but no cross-site images" behavior. There is also an opt-in for the image itself to say "actually, I'm OK with being loaded cross-site even when SharedArrayBuffer is allowed"; in that case a site that opts into the "no cross-site images" behavior will still be able to load that specific image cross-site.
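Concretely, as I understand it, option 3's opt-in boils down to headers like these:

    # on the page that wants SharedArrayBuffer:
    Cross-Origin-Opener-Policy: same-origin
    Cross-Origin-Embedder-Policy: require-corp

    # on the cross-site image/script that's OK with being embedded there anyway:
    Cross-Origin-Resource-Policy: cross-origin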
I guess you have a fourth option: Just give up on the security guarantee of "no cross-site pixel data reading". That's what Chrome has been doing on desktop for a while now, by shipping SharedArrayBuffer enabled unconditionally. They are now trying to move away from that to option 3 at the same time as Firefox is moving from option 2 to option 3.
Similar concerns apply to other resources that can currently be loaded cross-site but don't allow cross-site access to the raw bytes of the resource in that situation: video, audio, scripts, stylesheets.
I hope that explains what you are missing in your original comment in terms of the threat model being addressed here, but please do let me know if something is still not making sense!
Keeping image/video/audio data out of process actually sounds kinda reasonable to me :-).
I think the really compelling example is cross-origin script loading. I can't imagine a realistic way to keep the script data out of process but let it be used with low overhead.
Oh, I think it's doable; the question is how much the memory overhead for the extra processes is.
I agree that doing this for script (and style) data is much harder from a conceptual point of view! On the other hand, the protections there are already much weaker: once you are running the script, you can find out all sorts of things about it based on its access patterns to various globals and built-in objects (which you control).
To safely use SharedArrayBuffer you have to give something else up, like the ability to fetch arbitrary resources with <img>. Most sites that want SharedArrayBuffer would be fine with a tradeoff like this, and so this post describes a way they can opt in to the necessary restrictions.
> I also don't understand how cross-origin has anything to do with it either. Either your sandbox works, in that case cross-origin isolation shouldn't matter, or it doesn't work, in which case cross-origin isolation is not a real protection.
It doesn't work in general. It kind of works if you're putting each sandbox into its own process. Assuming there aren't any undiscovered microarchitectural attacks at the moment.
This is either astoundingly bad reading comprehension, or a willful mischaracterization. The whole article is about what they're doing to prevent Spectre-like problems. The config option is their backup plan in case they made a mistake in their threat analysis.
All in all, I thought it was a sober and mature approach to the problem.