> Website operators will need to do nothing to have the powerful protection of SameSite by default
Alternative phrasing:
Website operators who depend on the existing behavior will have to make changes to have their sites work in Chrome at all.
IMO this is a useless default - sites will still need existing CSRF protection techniques since not everyone runs Chrome - it will marginally increase security on a small fraction of sites (that DGAF about CSRF) and break a small fraction of sites.
It gets better: the linked Chrome feature "Cookies default to SameSite=Lax" mentions a bug in Safari (fixed in a not-yet-released version) that causes cookies with "SameSite=None" to not be sent in cross-site requests. So when Chrome 80 comes out, your options for setting cross-site cookies will look like:
* send nothing, and break in Chrome 80+ because of the new default
* send "SameSite=None", and break on any Safari besides the latest because of the bug
* send anything else, and break because that's what SameSite does
ITP 2.1 also already says it will block cookies of this type if they match Safari's completely non-transparent ML model for "tracking cookies". Also, Chrome pushed back the release date of this new default from the end of August 2019 to February 2020, and the bug has been "fixed in a not-yet-released version" of Safari since early June (Safari is really slow to release security patches).
No. They have a bug in their implementation of the spec, but Chrome is going off spec here, and dropping breaking changes into the ecosystem in a way that requires developers to UA sniff.
The Safari bug is annoying today, but ultimately didn't impact anyone that was spec compliant. Chrome does impact people who are spec compliant.
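To make it concrete, the workaround people end up shipping looks roughly like this - a sketch only, where the helper name is made up and the UA regex is illustrative (real incompatibility lists are much longer):

```typescript
import type { IncomingMessage, ServerResponse } from "http";

// Only emit SameSite=None for clients known to handle it; buggy Safari/iOS 12
// builds treat it as SameSite=Strict, so they get no attribute at all.
// The regex is illustrative, not a complete incompatibility list.
function isSameSiteNoneIncompatible(userAgent: string): boolean {
  return /CPU iPhone OS 12_|Mac OS X 10_14.*Version\/12.*Safari/.test(userAgent);
}

function setCrossSiteCookie(req: IncomingMessage, res: ServerResponse, value: string): void {
  const ua = req.headers["user-agent"] ?? "";
  const base = `session=${value}; Secure; HttpOnly; Path=/`;
  res.setHeader(
    "Set-Cookie",
    isSameSiteNoneIncompatible(ua) ? base : `${base}; SameSite=None`
  );
}
```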
Yup. This is a good change and should have been the default from the start. But it wasn't, so you need to play by the rules - standard first, let the framework developers at least implement this so it doesn't require header hacks, and then release it.
Or: they don't need to play by "the rules", because the horse drags the cart and not the other way around. Sometimes major vendor fiat decisions are abusive, and sometimes they're productive; you have to judge them on their merits, and not by their compliance to a phony law of standardization.
That's just not a valid argument. Standards groups don't tell developers what to build. Developers tell standards groups what they have built, when they want what they're building to be interoperable.
The notion that vendors are beholden to standards groups is a real problem. It's what got us stuff like Heartbleed.
The problem here is that Google at this point is essentially forcing everyone to do what they want. Maybe it's for the better this time, but often enough it's for the worse, and I don't trust their intentions.
Do you mean to say that you think that the OpenSSL project feeling obliged to implement the heartbeat extension created by a standards body is significantly more to blame for causing Heartbleed than the (understandable) causes for the general quality of the OpenSSL project code base (like lack of funding, etc)?
OpenSSL is one of a number of projects (this is maybe more prevalent in Free Software, but it's hard to tell) that take the approach of a hoarder rather than a curator. It has got better, but that's definitely how it got to where it was when Heartbleed happened.
In these projects, rather than trying to solve some particular problem or group of problems and using standards on the path to that solution, the project just throws whatever happened to attract somebody's interest in a standard into a big heap of cool toys, without rhyme or reason.
I think we actually could have blindly got lucky with Heartbleed, it could easily have been the case that to make this extension work you needed to add 40 lines of custom code to every program even though it would always be identical boilerplate code. After all it took them years to add a sane API for "Just check the bloody hostname in the certificate matches". But, that isn't how it worked out.
If you compare Python's "batteries included" philosophy, OpenSSL and a few other libraries take something closer to: "I just keep everything in this old cardboard box, try looking in there?". And sure enough there are batteries, although they seem to be covered in a sweet-smelling sticky substance, there is also a broken Gamecube, one cufflink with a brand logo you don't recognise, a chocolate bar dated 1986, a PS/2 to USB adaptor, a C60 cassette, two dried-out PostIt notes, one sock, a 40cm USB cable with a mini-B connector, and the spare fuses from a 2005 Ford Focus...
Okay, but this metaphor is completely nonsensical when applied to standards, because if you plug the parts of reality into the parts of the metaphor, the horse is part of the coachman.
Developers determine standards, and it would be pointless to make standards any other way because the standards wouldn't ever be implemented if there were no developers who wanted to implement them.
OK, fine - this will cost just my developer community alone 10-50 million dollars in rework. Maybe that's acceptable to the Chrome team, but I'm the one sending that mail to a hundred thousand developers, not them. So I weigh it differently.
Give or take 2 of the 5 largest OIDC providers. This change broke form_post in OIDC, so every app using it (low 5 digit count) needs to update and redeploy.
This isn't Chrome going against the spec. This is the spec's author (who happens to work on Chrome) changing the spec.
Since the spec is still in draft, changes are expected. This isn't even the first change, we're already on version 3. And this version expires next month, so we're literally, explicitly, due for a new revision.
Your argument is bad and you should feel bad. The status quo is stupid.
So the Chrome team wants to make their own users safer. Good for them. In the absurdly stupid scenario where sites are making cookie-mediated blind cross-site POST requests, they'll have to come up with a better interaction model. Once we figure out who these hypothetical idiots are and whether they exist at all, we can all shed a tear for them and the extra 7 hours of work they'll have to do in order to adjust. But in the mean time, all Chrome users become immediately immune to an entire class of vulnerabilities. As far as trade-offs go, I'd take it. Good on you, Chrome team.
Sure, site owners can't just relax yet and call the problem transparently solved; legacy browsers will continue to exist for years. But that's hardly a reason not to make progress.
The spec compliance argument is utter nonsense. You know what else broke the spec? Ad block. No-script. Popup blocking. Private browsing. Third-party cookie blocking. Pretty much everything that has made any particular browser "good" has been a deviation from the spec.
Nah. You have a blind spot because I'm guessing you're not responsible for a developer ecosystem, otherwise you'd not spout off with absolutes, especially when you're just wrong.
These "hypothetical idiots", in my case, are developers who used the off the shelf libraries and SDKs built to spec, that rely on 3rd party cookies for things like logging in users and logging them out.
You know the number one use case for 3rd party cookies in that scenario? Ensuring no one has injected or otherwise fraudulently initiated an auth request when the IDP posts back to your site. That forgery check is now broken. In the words of a wise gent from a couple years back, congrats, you played yourself.
Want to set samesite in your language of choice? Enjoy setting headers manually now and praying, because even if your framework does have samesite support it's probably going to emit nothing if you choose none, because that's the spec.
Actually, most of the JS cookie libraries I've used have an explicit "None" string that's separate from a null or disabled setting. And the spec is carefully written such that it doesn't actually say whether the default is None or Lax; it's now unspecified...
Someone once shared a mantra that later became one of my basic design principles, especially when introducing changes to things that people already use: "Don't break working systems."
I wish more developers understood why this is important.
I believe the phrase "things that people already use" answers your question.
> To some people, a system with a security vulnerability isn't "working".
Yes, we have seen that argument more than a few times before, usually from someone justifying their personal agenda without regard for its impact on other people, despite the availability of less disruptive options. Ego is a funny thing.
Obviously, there are times when lack of immediate action would have dire consequences, but I think it would be disingenuous to claim that was the norm, or to claim that this Chrome change is an example.
It's funny, because for people who care a lot about security engineering, "my system should continue running as it is even if it exposes its users to preventable security threats" also sounds pretty "egotistical".
It really doesn't. Either the system is working, or it isn't. People might be using it, but people use all sorts of broken systems.
It is indeed worth weighing the cost of making a breaking change, but given that it happens all the time in all sorts of projects, it's clear that only very few projects subscribe to your notion that breaking users is an absolute sin.
Even Linux has broken userspace programs when fixing bugs. It's only happened a few times, but it happens.
At home I've switched back to Firefox. Seems like I need to complete that transition at work to continue enjoying things like ad blocking and functional sites.
In fact, maybe sites should start recommending something like that. (Microsoft's wallet sense must be twinging.)
I switched back to Firefox recently as well— the containers plugin has me completely won over. It takes a while to get in the groove of it, but surfing the internet and not being logged into Google and Facebook everywhere is great and totally worth it.
(I recently also switched back to Windows 10 after a decade+ on Mac OS. I never got satisfactory performance from Firefox on Mac, so part of trying it again was hoping that the perf story on Windows was better, which indeed it is.)
Switched to Firefox for work and home. It feels like Chrome did in the early days. No, it’s not faster than Chrome. Yes, there are a couple of dev features that I no longer find intuitive. I just mean in the sense that it is really focused on users. Chrome is focused on advertisers. The change has mostly been painless, and I’m a Web developer.
At this point I just don’t trust Google nor Chrome. All the incentives are there for Google to turn Chrome into adware. The evidence is pointing in that direction. And the trust is no longer there.
It’s funny to think about (and this is of course a gross caricature) but Chrome feels more and more like they took all the annoying adware/bloatware from the 2000s and disguised it in a browser—except they figured out how to do it without anybody noticing. Today, we are all Grandma.
What is the point of sending cookies to cross-site domains? I never got this. We have postMessage today, and iframes. We have API client information. Third party cookies are restricted anyway, now.
As a user, when I visit a site, I never invited some third party to run their code on my computer, and I certainly don't want some third party making requests to the server on my behalf. So screw websites that depend on this bug.
Just to clarify, CSRF is when a website you visit makes requests on your behalf to a third party (which you have visited/logged into). The third party is also a "victim" of the attack.
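For example, a malicious page needs to do nothing more than this (the domain and endpoint here are made up) - the browser would attach the third party's cookies to the POST unless SameSite or a server-side CSRF token check stops it:

```typescript
// Script on attacker.example; bank.example and /transfer are illustrative.
// The browser would attach bank.example's cookies to this POST unless
// SameSite (or a server-side CSRF token check) prevents it.
const form = document.createElement("form");
form.method = "POST";
form.action = "https://bank.example/transfer";

const amount = document.createElement("input");
amount.type = "hidden";
amount.name = "amount";
amount.value = "1000";
form.appendChild(amount);

document.body.appendChild(form);
form.submit();
```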
> I never invited some third party to run their code on my computer
This can be achieved by disabling JavaScript (e.g. NoScript extension).
As far as I know, there's no extension that can entirely prevent a website from loading resources from another domain name. If there is, I imagine it would cause a lot of websites to lack styling, images/videos not loading, etc.
Yes, I knew this, and I said the wrong thing anyway, and I am sorry. I didn't think through what I was saying clearly.
My intent was more of an "Old Man Yells At Cloud" thing; I was just kvetching because the internet isn't as user-focused as it could be. Calling a bunch of resources from all over the place is a user-hostile behavior, and I'm generally unsympathetic when sites break because browsers narrow the range of user-hostile behaviors they support.
I'm sure this is secretly considered a feature by at least some people inside of Google. Safari's share of the market is already quite small and most MacOS users are able to figure out how to install Chrome (most do so already). If sites magically stop working in Safari, it helps to drive more of that already-weak marketshare to the "just works [because we broke it]" alternative of Chrome.
All iOS browsers have to use Safari's engine, so users who download Chrome gain nothing in this regard. The only choices available are either to ignore iOS or support Safari.
News flash: absolutely no one cares about Chrome breaking the web, because Google owns the web. Literally, they're the biggest ad agency, the biggest captcha provider (v3 will be on every page if they get their way), and the biggest browser.
Google can and will do whatever they want, and there is no amount of complaining that will actually change their mind. Just look up how they ham-fisted their autoplay policy into the ecosystem despite breaking every website. YouTube is whitelisted to not need to follow their autoplay guidelines, though :)
> News flash: absolutely no one cares about Chrome breaking the web, because Google owns the web.
No they don't. Eight out of the Top Ten websites are not owned by Google.
If Chrome became the browser that broke major websites all the time, people would switch really quick.
> Just lookup how they hamfisted their autoplay into the ecosystem despite breaking every website.
It didn't break "every website", because most websites never had any obnoxious autoplaying media to begin with.
Popup Blockers also broke "every website" that had the guts to open pop ups, most of which were not solicited by the user.
In either case, strong user preference was behind these changes. It didn't need some consortium of conflicting interests to standardize acceptable website behavior.
If users didn't matter and it was all about Google's interests, you wouldn't have adblockers in Chrome. You wouldn't be able to watch Youtube flawlessly without ever seeing any ads.
The point of the SameSite=Lax default is that it doesn't require any application developer intervention, and it doesn't impact 99.9% of cookie uses. It's what the default behavior should have been decades ago.
You will still need explicit CSRF defenses for the next several years. But sites that employ CSRF defenses today still have CSRF vulnerabilities, just like how they still have XSS vulnerabilities despite vdom frameworks and default output filtering. It's better to have a failure mode where modern browsers are effectively immune to the attack than one in which everyone is vulnerable.
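For those older browsers that means keeping something like a double-submit check server-side anyway; here's a rough sketch, where the cookie and header names and the secret handling are assumptions rather than a drop-in implementation:

```typescript
import { createHmac, timingSafeEqual } from "crypto";
import type { IncomingMessage } from "http";

// The token is an HMAC of the session id, echoed back by the page in an
// X-CSRF-Token header; names and secret handling are illustrative.
const SECRET = process.env.CSRF_SECRET ?? "change-me";

const expectedToken = (sessionId: string): string =>
  createHmac("sha256", SECRET).update(sessionId).digest("hex");

function parseCookies(req: IncomingMessage): Record<string, string> {
  const cookies: Record<string, string> = {};
  for (const part of (req.headers.cookie ?? "").split(";")) {
    const [name, ...rest] = part.trim().split("=");
    if (name) cookies[name] = rest.join("=");
  }
  return cookies;
}

// Reject state-changing requests whose header token doesn't match the
// token derived from the session cookie.
function csrfCheckPasses(req: IncomingMessage): boolean {
  const session = parseCookies(req)["session"];
  const header = req.headers["x-csrf-token"];
  if (!session || typeof header !== "string") return false;
  const expected = Buffer.from(expectedToken(session));
  const actual = Buffer.from(header);
  return expected.length === actual.length && timingSafeEqual(expected, actual);
}
```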
> When SameSite first came out, nobody wanted to make it the default. It could break things, change expected (legacy) functionality and generally speaking the worry of breaking stuff stops us making progress.
So, when it first came out, it was not okay to break things to turn it on by default. But, now it's okay to break things by making it the default.
I'm starting to feel like we're back in the Internet Explorer 6 age (in terms of having one browser that dominates and can arbitrarily change the rules/standards of the web).
I was mocked at Black Hat when I asked a question several years ago about why people don't consider forward proxying all third party dependencies for the user. Then we could disable cross-site requests (CSR) entirely. They said that doing that would fundamentally break the web. I think it would make it so your user could never be MITM'd or DNS-spoofed to a malicious imposter for that dependent service or asset.
I'm sorry you were mocked, it's a good question. That's an option at the individual site level – and it may be possible to use a CSP policy to enforce that no cross-site requests are made. You can definitely use CSP policy to enforce what domains scripts may be fetched and executed from.
However if the standard were changed to disallow CSR in browser implementations then existing sites would be broken, i.e. it would break the web. Sometimes that's worth it, sometimes it isn't, and it's a careful balance between how many sites would be broken vs the impact of the security vulnerabilities that would be prevented.
(disclosure, I work at Google, but not on Chrome (any more) and have no particular knowledge of the CSRF cookie change)
Whether it is a good idea is going to depend on a whole bunch of architectural factors.
If it's a static JS library, say jQuery, then self-hosting it seems sensible. You can host it on your own CDN if needed.
What about if it is, say a credit card processing service? Well now you are dealing with credit card data through your site, so perhaps PCI compliance might apply?
Also, you are now being the MITM to avoid an evil MITM - so you have more responsibility than ever to ensure security. Make sure your proxy is as secure as possible. Does it accept bad certificates, for example?
In theory, we can still enable end-to-end encryption while forward proxying by a separate service provider, but it might not be easy to implement. The browser would need to be smart enough to route packets through the website and verify SSL coming out of the PCI environment. This will require modifications to the OS to dynamically add static routes for dependencies.
1. Browser loads the shopping cart page on example.com to transmit credit card details
2. User enters credit card details in to the form and hits enter.
3. The browser resolves the IPs for domain api.creditmerchant.net and adds static routes for those IPs pointing to example.com
4. The browser initiates SSL connection to api.creditmerchant.net using example.com as a static route
5. The browser verifies the authenticity of the certificate and chooses ciphers for the conversation
6. The credit card details are encrypted and transmitted to api.creditmerchant.net without being exposed to example.com
7. The purchase is complete and example.com is not in scope for PCI.
*. If example.com is set as the static route for any unknown dependencies, the traffic is null routed
But then, what exactly was won? Seems to me, the request to api.creditmerchant.net is still a mostly ordinary cross-site request, except the packets are routed through example.com - but example.com can't do anything with the packets, because they are encrypted.
So api.creditmerchant.net could still inject all kinds of malicious or buggy scripts and example.com can't do anything about it.
The one difference I see is that api.creditmerchant.net could restrict its endpoints to only accept packets from example.com addresses - as they should never be called directly from a browser. This sounds like some ad-hoc CSRF protection. Was that the intention?
Yes I think that was the intention. You reduce the attack surface for injection attacks because the injected code must route all requests through example.com. Example.com will implicitly trust api.creditmerchant.net as it does today, but that dependency once loaded won’t be able to make requests to untrusted resources categorically because it must route thru example.com.
Seems that is exactly the difference between the "lax" and "strict" settings: "strict" will strip cookies from all cross-site requests, while "lax" will keep them for top-level GET navigations. The default setting will be "lax", so following links would be fine.
I think the "strict" setting is for the proposed "two sessions cookies" design pattern: You have a basic "I'm logged in" cookie with SameSite=lax and a second "I can do things" cookie with SameSite=strict. The first cookie gives you read access to your account and tracks your session - however, to perform any actions on your account, you need both cookies.
If you e.g. clicked on a link to facebook.com and had previously logged in, the browser will send the "lax" cookie with the request, so that your session is correctly tracked and Facebook shows you the logged-in view.
However, the "strict" cookie will NOT be sent, so even if by some accident you have a state-changing endpoint that accepts GETs, an attacker could still not trick you into invoking it via a link.
I agree though that if you have state-changing GETs, you'll have bigger problems, so the use case for "strict" cookies seems a bit niche. I suspect the common case will be "lax" everywhere and "none" for special applications - i.e. what the default is encouraging.
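The two-cookie pattern would look roughly like this on the server (plain Node for the sketch, and the cookie names are illustrative):

```typescript
import type { ServerResponse } from "http";

// "session" identifies you on top-level navigations (links from other sites
// still show the logged-in view); "session_rw" is additionally required for
// state-changing actions and never rides along on cross-site requests.
function setSessionCookies(res: ServerResponse, sessionId: string, actionToken: string): void {
  res.setHeader("Set-Cookie", [
    `session=${sessionId}; Secure; HttpOnly; Path=/; SameSite=Lax`,
    `session_rw=${actionToken}; Secure; HttpOnly; Path=/; SameSite=Strict`,
  ]);
}
```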
This is good news I think. Devs often just don’t know or pay attention to stuff like XSRF in my experience as a professional developer.
It’s not something you notice until your site gets hacked. Or if you do notice it, it’s in a security audit at the end of development and the ticket to fix it might get buried in a backlog below the features and bugs users actually see and want.
Having the protection on by default is the only way to solve this for good, it’s how cookies always should have been.
Turning cookies to same site by default will definitely break a lot of things though. I implemented SameSite cookie functionality in a library at work and we had several issues with it breaking stuff and confusing people when they updated to the new secure version of the library.
That doesn't really fit in with modern web interfaces though. REST and GraphQL are both stateless architectures. Tracking and mutating state on every request doesn't fit very well with that.
Depending on exactly what we're doing, we can work around this with crypto, right?
A token can securely prove that it was issued by the server/service, and under what conditions, without the server/service statefully tracking the token after issuing it.
I know I'm not the first to think of this, but I'm not sure how widely used this sort of technique is in practice.
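Something like this, maybe - an HMAC sketch with a made-up payload shape, not any particular library:

```typescript
import { createHmac, timingSafeEqual } from "crypto";

const KEY = process.env.TOKEN_KEY ?? "change-me";

// Issue: the payload carries its own expiry, so the server only needs the
// signing key to check it later -- no per-token storage.
function issueToken(userId: string, ttlMs: number): string {
  const payload = JSON.stringify({ sub: userId, exp: Date.now() + ttlMs });
  const mac = createHmac("sha256", KEY).update(payload).digest("hex");
  return `${Buffer.from(payload).toString("base64")}.${mac}`;
}

// Verify: recompute the MAC and check the embedded expiry; returns the user
// id if the token is genuine and unexpired, null otherwise.
function verifyToken(token: string): string | null {
  const [body, mac] = token.split(".");
  if (!body || !mac) return null;
  const payload = Buffer.from(body, "base64").toString();
  const expected = createHmac("sha256", KEY).update(payload).digest("hex");
  if (expected.length !== mac.length) return null;
  if (!timingSafeEqual(Buffer.from(expected), Buffer.from(mac))) return null;
  const { sub, exp } = JSON.parse(payload) as { sub: string; exp: number };
  return exp > Date.now() ? sub : null;
}
```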
That’s exactly how authentication JWTs work. They work differently from CSRF tokens though, because the CSRF token is generated for each request, not each ‘session’ (so you need much more complex server-side state management). But that said CSRF isn’t particularly relevant if you’re using a JWT in the Authorization header. CSRF attacks the fact that browsers will use cookies to automatically authenticate requests. If you’re not using cookies for authentication, and adding a JWT to the headers instead, then that automatic authentication doesn’t occur. Correct me if I’m wrong, but I’m not aware of any CSRF attack that targets Authorization headers, I believe any attack that does is just an XSS.
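The difference is visible at the call site (the endpoint here is made up, and getStoredJwt is a placeholder for however the app holds its token):

```typescript
declare function getStoredJwt(): string; // placeholder, not a real API

// Cookie-based auth: the browser attaches credentials automatically,
// which is exactly the behavior CSRF exploits.
fetch("https://api.example/transfer", {
  method: "POST",
  credentials: "include",
  body: JSON.stringify({ amount: 1000 }),
});

// Header-based auth: nothing is attached unless your own code adds it,
// so a cross-site form post carries no credentials at all.
fetch("https://api.example/transfer", {
  method: "POST",
  headers: { Authorization: `Bearer ${getStoredJwt()}` },
  body: JSON.stringify({ amount: 1000 }),
});
```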
XSS is a problem you have to solve no matter where you store your authentication material. If you have an XSS, then an attacker can do anything they want on your web pages. I’m not sure you’re gaining anything by saying “anything but steal the auth token”. If you store it as a cookie, now you have to solve XSS and CSRF, so I’d say that makes your attack surface broader. Especially considering front end frameworks have become very good at preventing XSS, and can’t do anything about CSRF.
When you send a PATCH request and include a token of the object you're patching, so the system knows you're using an up-to-date object? In that REST-like example, that hash/token on the object serves the same function as a CSRF token. I see this model in APIs daily.
This is what cookies are for. No one will be using CSRF tokens once cookies are fixed (e.g. SameSite is widely supported). "Just passing a state-key around" is not mutually exclusive with "being an ugly hack".
Has there ever been a breaking change approved by the IETF? I'm not familiar enough with their history to know of a precedent. It seems like the spec/standard should be approved at least before we go about asking developers to incur hundreds of millions of dollars in updates so their sites continue working in Chrome.
Ironically, this breaks CSRF protection in OIDC authentication systems (except Google, since they don't implement the form_post standard).
I agree with the change in theory (although as a side-note I wish users had more control over cookie/request headers, and it wasn't just site-operators that could set policies). It's a reasonably obvious oversight that cookies aren't same-site by default, and the thoughts behind this seem pretty solid. There's even a good point that I've seen made that secure-by-default might give browsers/blockers a useful metric for identifying likely tracking cookies.
I have no objections on that front.
I wish the Chromium team was working more with other browsers to make this a more coordinated change. At the same time, I don't think that there would be significantly less breakage if it was. No matter what, this is 100% going to break websites -- there is just no way to roll out a change like this without disrupting operations. It's gonna be a mess, and it would still be a mess even if all the browsers rolled out this change together.
It feels kind of like the JS ``typeof null === "object"`` stuff. Everyone agrees the existing behavior is wrong, but we're not sure when and how to fix it.
So I'm a little conflicted. I understand completely why the Chromium team wants this, and I also understand why some people are going to be upset about it. Blocking CSRF by default in a browser is really good. We'd also really like to avoid breaking the existing web.
What we really want is a failsafe way of setting cookies: i.e. a way of setting cookies that won't work on browsers without SameSite support. Otherwise this solution will still leave a security hole for the many users not on (the latest version of) Chrome.
Ideally we could do this by intentionally making the SameSite cookie syntax non-backwards-compatible.
If there were an easy, alternative secure solution then developers would already be using it.
Implementing common anti-XSRF mechanisms, like a session token in a form field or an extra HTTP header, requires modifying every place your app communicates with the server. It's anything but easy, which is why so many apps/websites not built with XSRF in mind still have vulnerabilities.
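In practice that usually ends up as a wrapper that every call site has to adopt - a sketch, with the meta tag and header name as assumptions:

```typescript
// Read the token the server embedded in the page; the meta tag name is an assumption.
function csrfToken(): string {
  const meta = document.querySelector<HTMLMetaElement>('meta[name="csrf-token"]');
  return meta?.content ?? "";
}

// Every call site has to go through this wrapper instead of plain fetch --
// that is the "modify every place" cost described above.
function apiFetch(input: RequestInfo, init: RequestInit = {}): Promise<Response> {
  return fetch(input, {
    ...init,
    credentials: "same-origin",
    headers: { ...(init.headers as Record<string, string>), "X-CSRF-Token": csrfToken() },
  });
}

apiFetch("/api/settings", { method: "POST", body: JSON.stringify({ theme: "dark" }) });
```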
But any XSS would give access to your authentication token, which is why you should never store it in local storage. Cookies have the HttpOnly flag that prevents JavaScript from accessing the cookie in case of XSS.
HttpOnly doesn't really do much to stop an attacker that already has XSS. The attacker just makes the XSS perform the interactions they want directly instead of bothering to steal the cookie.
I would love to move away from cookies. They're included in every single network request which really bloats things up. By passing along data manually we could send only what is needed.
> Could we just avoid cookies altogether, and store session info in local storage?
Several possible issues there:
- If the session is large, it eats space on the user's machine and bandwidth in requests
- The session can't be shared across devices
- Security concerns. You don't want to trust the user to tell you what their current state is - especially if it's "I have this much money in my account" and the like. Even if you encrypted the data, they could resend the same state at a later time - "oh look, I have a full wallet again!"
You're much safer if all the user sends is "here's who I am" and every bit of associated information is under your control server-side.
But as you say, that would require JavaScript to be available, making it impossible to build web apps that store user state without relying on JS (well, I guess there’s HTTP Basic Auth) and breaking a large part of the internet in the process. Not everything is a client-side single-page web app.
I know what you're saying, but it's 2019 and JavaScript is everywhere. I think most sites rely on at least some JavaScript to provide a reasonable experience for users. Do you have any examples of popular web apps that don't use js?
I've been getting more aggressive about disabling Javascript by default, and for all that people complain, I've been pleasantly surprised how many sites still mostly work without it (some news sites work even better). A larger portion of the web than you might expect still respects the division between content, styling, and functionality.
Hackernews and Reddit (at least the old version) are the two examples that spring to the top of my mind as sites that I sometimes log into without Javascript. Hackernews works so well that I sometimes don't realize I have Javascript turned off. A lot of forums fall into that category.
I'm certainly not anti-JS, I like the language quite a bit and often stick up for it when it gets bashed on HN. But the ability to fall back on non-JS solutions to some problems is an important part of the web, and I wouldn't like to see it disappear. Particularly while we're in the middle of a fight over user-tracking. I think it's important to support graceful fallbacks, and cookies are a pretty good way of doing that.
> I've been getting more aggressive about disabling Javascript by default
What is the opposition to javascript exactly? Is it a privacy matter, you want to block all ad trackers? Do ad blocker plugins not suffice? Or are you concerned about security vulnerabilities with javascript? Or is there something else I'm not getting?
It's mostly privacy. It's a very small amount about security. It's a little bit about data-usage and a few other annoyances.
Where privacy is concerned, we're currently losing the war on fingerprinting. I don't think we're going to lose the war on fingerprinting in the long run, but there's just a lot of stuff we need to do with the language and it's going to take a little while before we get to a point where I feel comfortable saying that arbitrary Javascript can't identify my computer. It's just something we ignored for a long time and we have a long ways to go.
uBlock Origin is really good, but privacy is a continuum. So for a non-technical person, I'd install uBlock Origin and call it a day. For someone who's familiar with the web, I'd install uMatrix with the default settings. For someone who's really familiar with the web, and who really wants to be safe, I'd install uMatrix and switch a few default settings (disabling cookies by default and disabling Javascript by default).
Each step there will make you slightly safer, depending on what percentage of malicious code you want to block. Sometimes trackers are served as 1st-party requests.
I don't have any opposition to Javascript in general; there are more than a few native apps that I wish were just web apps, because the web is a better sandbox (and frankly a better platform) than most native environments. It's just a little complicated because we're currently in the middle of a fight over how the web should work.
So it's not an indefinite, "no web-code ever" position. It's "be more careful than usual, because an abnormally high number of bad actors are focused on this platform, and not everything is safe-by-default." Ignoring the debate over site-breakage, the changes here around CSRF should be a decent step in that direction.
On a less practical note, it's also because I can. I really like Javascript, a lot. I also really like separation of concerns, and I think the separation between content and functionality is a really good architecture decision that people should pay attention to. On a purely aesthetic, emotional level I like that I can load a page without executing JS. Heck, occasionally I'll even turn off CSS. There's very little practical reason for that, other than a kind of irrational, "I like that the web lets me do this, most other platforms don't, and it makes me happy to remind myself I can."
But I would guess most people don't fall into that category, that's probably just me being weird.
Oh, I agree that just about all apps use JavaScript to some degree, but the question is whether that’s for progressive enhancement or as a fundamental requirement. Wikipedia for example works fine without JS (including editing). Pinboard is one of my favourite apps and uses hardly any JS. That one’s perhaps a bit niche, but hell, even Gmail and Facebook still maintain basic “HTML versions” that deliver reasonable (some might say better) user experiences without JavaScript.
I build single-page web apps for a living, so I’m by no means opposed to JavaScript, but I think the web would be worse off if browsers imposed it as a requirement.
It is more work to emulate what you get for free by using cookies. You need to handle expiration etc yourself. Plus, you don’t have the protection of http only cookies if you go that route.
Forgive my ignorance, but from what I read, making SameSite=Lax the default is a Chrome-only thing at this time, right? If so, how is CSRF dead? Are all major browsers following suit?
I wouldn't say CSRF is dead yet. Chrome is just one browser, until all major browsers have implemented this (and IE11 is still kicking), you still need to implement CSRF mitigations as well.
As usual, supported almost everywhere, apart from any IE that's not the latest running on an updated Windows 10. So we can't really use it unless we want to leave lots of users insecure.
^ This. Last I checked it was not supported w/ IE11 on Windows 7.
I have no idea why samesite capability would ever depend on the host OS. Shame on Microsoft for not supporting a somewhat simple to implement security feature on older (but supported) operating systems.
"lots of users", unless you are working with enterprise software I wouldn't call IE lots of users. Just display a banner for IE users informing them that using the site with IE is insecure and suggest alternatives.
In enterprise, IE may be the majority, but for regular users IE is still often the default browser. Basically, if you don't target the tech crowd, IE users are your users. A website I manage (targeted to one location only) has 20% IE traffic on desktop.
Because strict would break how people expect the internet to work.
For any link from a third-party website to a site where the session cookie is set with SameSite strict, clicking that link will not include the session cookie for that site.
For example, if GitHub implemented SameSite strict for its session cookie and you clicked a link on a site that took you to GitHub, it would not send the cookie for GitHub and it would look like you're not logged in on GitHub, even though if you opened a new tab and went to GitHub you would be logged in.