> Microsoft’s security scanning will now not only visit the links you mail out, they will also run the JavaScript on your page, and will also send out any POSTs generated by that JavaScript:
It seems to me that someone misunderstood how to implement the logic on the frontend here. You're not supposed to send off a POST request to exchange the tokens as soon as the page has loaded, but require the user to click a button to confirm the activity.
So the flow would be:
- "Send email with url+token > Frontend loads, shows button to user to confirm > User clicks button and exchange happens".
Instead, wrong implementations do something like this:
- "Send email with url+token > Frontend loads and exchange happens", which we've known for years to not work correctly, because of this very issue.
Discourse for example implemented this perfectly, so if you need an example, that would be where I'd go.
(I know it sucks that scrapers/crawlers do a lot of horrible shit, it truly does. But we've dealt with this for years now, it's not a new thing and it won't be the last shitty thing they do)
> You're not supposed to send off a POST request to exchange the tokens as soon as the page has loaded, but require the user to click a button to confirm the activity.
Since requiring the user to do anything drastically reduces the go-through rate, the current "standard" is to use a meta-refresh tag, an HTTP Refresh header [1], setting location.href from JS, or a <form> that gets submitted via JavaScript, with a <button> inside a <noscript> for those that don't do JS.
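For reference, a sketch of that auto-submit pattern (hypothetical endpoint and token value):

```html
<!-- Auto-submits for anyone with JS; <noscript> users get a button instead.
     A <meta http-equiv="refresh"> tag is the script-free alternative. -->
<form id="confirm-form" method="post" action="/confirm">
  <input type="hidden" name="token" value="abc123">
  <noscript><button type="submit">Continue</button></noscript>
</form>
<script>document.getElementById('confirm-form').submit();</script>
```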
That used to be enough, but nowadays content scanners run a legitimate embedded Chromium and whatnot, which obviously triggers at least the refresh tag/header and the automated JavaScript solutions. And that's the problem.
Which, again, is a known problem: it has existed for many years already and we have a workaround for it. If you want to do POST requests from URLs sent via email/other communication platforms, you need to make it so the user confirms the activity.
Users who wanted to sign in and now have to press a button after the page loads, instead of being logged in right away, won't suddenly not want to log in. They have a clear purpose when clicking the link, regardless of whether they have to click one button afterwards or not.
And besides, if you don't implement it like that, you'll have even more users dropping off because they literally won't be able to sign in at all, no matter how badly they want to.
Won't this shift the meta for malware to a "button" that is 99.9% transparent and overlays the whole page, in the hope that someone will accidentally click on it when trying to select text or something? A lot better than 0-click, I guess, but it might catch people now and then.
Both Microsoft and Google have been doing this in emails for many years. I linked elsewhere to a Stack Overflow question from 7 years ago talking about this issue for Gmail; others mentioned some Microsoft stuff there too.
Huh? This has nothing to do with the unsubscribe-header thingy. This is about when you for example provide a URL that contains a token in an email, which you exchange for an authentication token once the user visits the URL. Some people implement that page to automatically do the exchange on page load, instead of waiting for the user to click on a button.
Microsoft's Office365 nicely clicks the links for you. Had this issue with Webhelp's emails, where Microsoft was nicely cancelling recipients' transport by clicking every Cancel link on the page and then the JavaScript confirm link, within 20 seconds of their staff booking transport. Quite annoying. Ended up having to block large parts of Microsoft's IP ranges.
I'm not sure why people believe this to be a new issue. I think the first time I myself implemented a workaround for it was about 4 years ago, and here is a Stack Overflow question about it from 7 years ago: https://stackoverflow.com/questions/43443947/how-to-stop-e-m...
Outlook/Microsoft are not the only ones who do this either; when I first had to work around it myself, it was because it was happening in Gmail, if I remember correctly.
Accessing links (GET), yes, executing JS and POSTing is the new thing. Putting an XHR in DOMContentLoaded was the workaround for the behavior you are talking about.
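That workaround looked roughly like this (a sketch with a hypothetical endpoint), and it's exactly what a JS-executing scanner now triggers:

```js
// Old trick: defer the token exchange until the DOM is ready, which
// defeated GET-only prefetchers but not scanners that execute JS.
document.addEventListener('DOMContentLoaded', () => {
  const token = new URLSearchParams(location.search).get('token');
  const xhr = new XMLHttpRequest();
  xhr.open('POST', '/api/exchange'); // hypothetical endpoint
  xhr.setRequestHeader('Content-Type', 'application/json');
  xhr.send(JSON.stringify({ token }));
});
```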
What if Microsoft decides they not only want to load the URL in messages, but also click links and click buttons? Presumably this is to detect bad/dangerous content. Bad people will also just put that dangerous stuff behind a link or button if it's that easy to evade their checking.
Or why exactly are they following those links again? Perhaps it is for previews instead of "security"?
From the article:
> Over time, it also became OK for software to visit links in email to find out what was behind them.
Why would that be OK? If I email a secret link to someone, I fully expect it to stay between the recipient and me, not some company reading along. But that's why I don't have accounts with these types of companies...
If a page contains JavaScript that makes a POST request when the page loads, then it’s the developers of that page that are violating HTTP norms, not the developers of software that loads such pages. POSTs are unsafe requests that should be made as part of a user’s intent. Following a link should always be safe. Microsoft aren’t responsible for this problem – the site developers are.
We already went through this with Google Web Accelerator and 37Signals twice. They couldn’t accept that mere links were a bad idea for deleting things, and GWA came along and deleted their users’ data by following links. They tried to detect and block GWA instead of following HTTP rules and it happened again.
Links are supposed to be safe. Ignore this at your peril.
Microsoft making this change also rather clearly indicates that malicious actors are actively abusing these spec violations (by walling their phishing pages behind a simple POST on page load).
Clicking a link indicates the user wants to see something. Clicking a button indicates the user wants to do something.
Links shouldn’t change state. This has been an extremely strong convention for several decades. If you write code that changes state when a page is loaded, you are breaking this convention.
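In markup terms, the convention is simply this (hypothetical endpoint):

```html
<!-- Safe: following a link must not change state on the server. -->
<a href="/confirm?token=abc123">Review this sign-in request</a>

<!-- Unsafe request, gated on intent: state changes only on submit. -->
<form method="post" action="/confirm">
  <input type="hidden" name="token" value="abc123">
  <button type="submit">Confirm sign-in</button>
</form>
```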
Microsoft knows what they're doing. The problem is users don't. I don't know what you care about more, your users' convenience or their safety and security. But no chance in hell I'd allow MS to dictate the terms of the security of my services.
I'd solve this by warning users who attempt to reuse consumed links that the link has been used already, and likely list broken email providers that I suspect are using software that breaks this. And then wait. Let them complain to their IT, or whomever is responsible for picking broken software.
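A sketch of what that warning could look like server-side (Express here; the token store and the wording are hypothetical):

```js
const express = require('express');
const app = express();

// Hypothetical single-use-token store with a find() lookup.
const tokens = require('./token-store');

app.get('/confirm', async (req, res) => {
  const record = await tokens.find(req.query.token);
  if (!record) return res.status(404).send('Unknown or expired link.');
  if (record.consumedAt) {
    // The interesting case: tell the user *what* probably burned the link.
    return res.status(410).send(
      `This link was already used at ${record.consumedAt.toISOString()}, ` +
      `possibly by your email provider's link scanner.`
    );
  }
  res.send('<form method="post"><button>Confirm</button></form>');
});

app.listen(3000);
```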
If I was feeling really petty: the clown that wrote this link-scanning software isn't in charge of the networking, so you could eventually enumerate the subnets that MS uses and blacklist them. Then the links might actually get to users connecting from non-MS data centers.
You could roll over and decrease your security so that MS can claim they increased theirs... But you shouldn't! They can only get away with turning security into a negative sum game if you play along. Please don't?
> I'd solve this by warning users who attempt to reuse consumed links that the link has been used already, and likely list broken email providers that I suspect are using software that breaks this. And then wait. Let them complain to their IT, or whomever is responsible for picking broken software.
The thing is, most websites already know how to work around that particular issue (https://news.ycombinator.com/item?id=42804447), so when users notice that something breaks with just your service and not everyone else's, you can say "Outlook is shit" all you want, the users will blame you for it, since you weren't able to fix what others could, for better or worse.
> You could roll over and decrease your security so that MS can claim they increased theirs
What part of your security would decrease if you "roll over", and what exactly does "roll over" entail here? Websites that have dealt with this issue implemented what I linked to above; they're no more/less secure than the websites that didn't manage to work around it, they're just less buggy for Outlook users, if anything.
Your description isn't a workaround, it's the exact same buggy behavior, just implemented in JS instead of a malware scanner. GET or POST is less important than user action: if your code consumes a token without user action, it doesn't matter whether that's server side or client side, GET or POST.
If Outlook is clicking on links or buttons that generate POST requests, then Outlook is broken. If Outlook is loading a page, and the page without further interaction is sending a POST, that's just as bad, if not worse, than expiring the token on that same GET.
> If Outlook is clicking on links or buttons that generate POST requests
It isn't though; it's loading URLs it finds in emails and then running the scripts on those loaded websites, like many "modern" scrapers/crawlers do.
So if your JavaScript on that URL automatically does POST requests on load, those will happen automatically when their scanner loads the website.
But if you instead don't do a POST request on JavaScript load, and do that request when the user presses an actual button on the loaded website, the scanner won't trigger that button automatically (that would be a whole new level of craziness, we're not there (yet))
So yes, it is a workaround. I've implemented this tens of times already, because it's been an issue for a long time.
Edit: The submission article also makes it abundantly clear that this is the very problem they're suffering from:
> Microsoft’s security scanning will now not only visit the links you mail out, they will also run the JavaScript on your page, and will also send out any POSTs generated by that JavaScript:
The problem is that their JavaScript automatically issues a POST request on load, not that Outlook loads and executes JavaScript on the page (although that sucks too, don't get me wrong).
My point was more that sending a POST request if and only if a user interacts isn't a workaround, but the expected and desired behavior. It's a bug otherwise. Fixing a bug shouldn't be called a workaround. A workaround is for when you can't fix a bug somewhere else.
> you can say "Outlook is shit" all you want, the users will blame you for it, since you weren't able to fix what others could, for better or worse.
Personally, I'd happily blame Microsoft, but the practical significance of this fact is nil.
I am in the very small minority of users who will sometimes complain to our IT when Microsoft forces shit down our throats. But I do this because it feels good, not because I imagine any realistic probability that doing so will make a difference.
That is only your problem if you make it your problem. Ideally, you wouldn't. But like I already said, it depends on whether you're willing to sacrifice the security of all your users because, money I guess?
Eventually you have to choose between doing what's right and doing what's expedient. I can't draw your line for you. Thankfully for me, I've always been able to draw mine at protecting users from abuse.
Users care much more about convenience than security. If you made a phone that wiped itself after 3 incorrect pin attempts then you'd have a lot of very angry users wearing gloves. "It's for your own security!" wouldn't appease them.
It's more like if someone tried to delete/hack your account but you send a confirmation email to protect such a thing, and Microsoft clicked on the accept & delete account button in that email. So you deleted the user's account.
Do you know what users like less than inconvenient security? It's when "security" deletes all of their important data.
Again, still have to draw the line of what's acceptable somewhere. All I'm saying is you don't have to ruin your software because someone else is an idiot. Don't punish everyone because it makes the problem somebody else's to fix... That's what Microsoft is doing here, and it's obviously bad.
> It's more like if someone tried to delete/hack your account but you send a confirmation email to protect such a thing, and Microsoft clicked on the accept & delete account button in that email. So you deleted the user's account.
And then the user says "why did you delete my account?" and you say "well I chose not to support outlook but I didn't tell you that at the start" and the user says "that's stupid, everyone uses outlook" and leaves a bad review
It is not my job to tell Microsoft to correct their behavior and follow standards.
There are standards for a reason [1] and if you break them, no matter how good your intentions you are in the wrong because you've changed the expected behavior. Full stop.
The Internet works because everyone has agreed to follow standards. If Google woke up one day and decided that every IP address that ended in an odd number would receive a captcha every time they searched people would understandably get pissed off. Well ISPs have thousands of IP addresses so for the convenience of the user it's the ISPs that need to assign their users IP addresses ending in an even number so that they can search without captchas, right? No!
Same thing here. Just because Microsoft and Google benefit from economies of scale and have many users does not give them a pass to break standards whenever they see fit. There is a reason why we have RFCs and mailing lists to have these sorts of discussions.
I'm not arguing that it's good that MS is doing something insane, I'm arguing that you can't take the moral high ground and act as if they're not doing that insane thing.
There may be standards for the internet but people do not implement them correctly or consistently and the internet works because everyone adds workarounds for everyone else until things basically kinda work out.
> and Microsoft clicked on the accept & delete account button
If that happened, could one plausibly make a legal case that Microsoft is "hacking" the sites/users by stealing sensitive security info and making unauthorized actions?
But if we asked users "Choose one: the ideal convenience of being able to log in with just your username (but anyone who knows your username can login as you), or the inconvenience of having to enter username plus a secret password?" almost all users would choose the security over convenience, because they would understand the risk/reward. I think users care more about convenience than _theoretical_ security, and that we owe them education on how security impacts them directly.
What prevents bad actors from detecting that the request is coming from MS and returning pictures of kittens, but the most wicked malware otherwise? You don’t actually have to answer.
You need to load the URL, if you want to check if a fake Google login page shows up or something like that.
And the phishers are trying to evade your URL scanner. If your URL scanner has an identifiable user-agent, or doesn't execute javascript, or there's anything else that makes it identifiable, they'll show a boring legitimate page to your scanner and only phish the real users.
As I understand it, self-serve ad networks have similar challenges detecting ads placed by scammers.
Wait, from reading the article it looks like the link they send to their users takes them to a page with some JavaScript which automatically POSTs data for them. Which MS will correctly run: your "POST" is in fact triggered by a simple GET for most users with JavaScript on.
What you should do is send your user to a page with a form requiring a manual action from them.
This just highlights how the security scanning is just theater though. So bad actors can now just have their evil content behind a form, and users get accustomed to the double opt-in workflow anyway.
Yup, the best thing would be to either have a form in your email / SMS or, let's get crazy, implement POST links (and maybe DELETE ones too). Your client knows it will change things when followed (so security clients and prefetchers won't open it), the server receives a POST request, and it's a one-click action for the user.
Lots of things consume URLs other than security scanners in similar ways. As smarter people than me have commented, the solution most use (and I did recently myself) is to make the signup/signin link direct to a frontend page which makes them click to confirm. Yea, that still sucks, but you can have fun with things like this. One of my favorite things to do to figure out if an employer is snooping on my private messages in an app like, say, Slack, is to post a honeypot link in a DM to myself. On Slack at least, it will consume the link every time the message thread is opened. So if I know I didn't open it, and I get an alert, I know someone or something has read the message.
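A honeypot like that needs nothing more than a unique URL and an access log (a sketch; the alerting mechanism is up to you):

```js
// Minimal Node server: any hit on this never-shared path means something
// other than me followed the link I DM'd to myself.
const http = require('http');

http.createServer((req, res) => {
  if (req.url === '/honeypot-7f3a9c') { // unique per message, never reused
    console.log('Honeypot hit:', new Date().toISOString(),
                req.socket.remoteAddress, req.headers['user-agent']);
    // Fire an alert here: email, push notification, etc.
  }
  res.end('ok');
}).listen(8080);
```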
There’s tons of stuff out there like this, just assume apps are being disrespectful and plan accordingly.
1. Most “safe browsing” browser features do this to some extent
2. Any browser or mail extension with access to the URL might also do this
(1) and (2) might be done by remote servers, even in remote countries.
This is one of the reasons why you should avoid magic links (such as login links, or bearer-token URLs like S3 signed URLs): you may inadvertently be handing them to other parties.
Yep, along with many “magic link” emails now going to a stub form where you click a button to continue via POST request, instead of going direct to the target resource. Not that this is effective if the scanners are now submitting POST requests as well as following simple links.
Though also the “I'm sure I want to unsubscribe” is in part to deliberately add extra steps to unsubscribing.
Not sure what time-frames we're talking about here, but I remember having to work around this particular issue more than 4 years ago, I'm sure it was also an issue before too. It's been like this for a long time.
I've been around long enough that “now” and “recently” is in comparison to times before 2015. Sometimes before 2000…
Also, a lot of advice and guidelines you see about email haven't been updated since somewhere between 2000 and 2015, even when they carry more recent dates, so are significantly incorrect.
> I've been around long enough that “now” and “recently” is in comparison to times before 2015. Sometimes before 2000…
Alright, everyone sees things differently :) I've probably been programming professionally since 2012, but I don't think I'd call anything after ~2020 "recent" or happening "now".
"We may wonder also what fun a determined hacker could have with Microsoft running random bits of JavaScript on their servers and allowing these to talk to the world (or to Microsoft itself even)."
So maybe there might be a way to disincentivize this behavior?
This already happened with, for example, Gmail on iOS: when you had a login link, it wanted to open it inside its own browser popup (shame), thus using up the one-time login at once. I jumped out to the Safari browser, my choice, but the login link had already been expired by the Gmail window.
I wish for a harsh, comprehensive enforcement of Schrems II sometime soon. Sadly it looks like the EU will instead be bullied into making exceptions to its own basic data protection regulations and other IT-related laws.
Executing JavaScript on random pages seems like quite a bad idea; spammers could potentially include links to JavaScript which does resource-intensive things, like that small and sketchy trend of embedding Bitcoin miners in websites.
An E2E email client that did that scanning client-side instead of remotely (like today) would surface exactly the same issue if it went out and loaded up websites for you.
"Some might argue that these confirmation / single-use sign-on links are no good anyhow, but for now, they are what we have to get people to sign up for services."
Some users might argue that if they don't "sign up for services", i.e., use other people's computers for basic tasks instead of using their own, then many problems caused by so-called "tech" companies never happen to them. That is, "signing up for services" introduces new problems.
On our company Slack, we literally had a discussion about this being a problem yesterday. I frankly did not believe it.
Security scanners really should not be clicking on things on your behalf. There's a lot worse danger there, like potentially agreeing to things you would never have agreed to.
> There's a lot worse danger there, like potentially agreeing to things you would never have agreed to.
If it doesn't have my hand-written signature then it's not an agreement.
I can't "agree" to something I haven't read, haven't been notified about, and haven't agreed to.
A robot clicking things does not mean I agreed to whatever it clicked. It means that the other end of the transaction can't tell the difference between me and a robot.
> If it doesn't have my hand-written signature then it's not an agreement.
I wouldn't bet my company on that.
If someone sends us an email - probably with legal weasel words included saying it's only for use by the intended recipient - and asks for confirmation of something by following a private link and pressing a confirm button, and if, as far as that supplier is aware, we have then done that and continued accordingly, then I would not be at all surprised if a court found that they were acting in good faith and we were not.
Similarly if we sent something to a customer and their own IT system actively simulated a sequence of actions the intended recipient would take to confirm something then I would hope that would stand up in court as well. Otherwise we enter a legal climate where no-one in business can ever say or do any little thing without some kind of verified human approval process. It's hard to imagine how annoying and inefficient that might become for everyone. Maybe it would prompt a new generation of electronic communications and record-keeping where authenticity was built in unlike many of today's most common technologies - but it would still be a nightmare to do business until that happened.
> If someone sends us an email - probably with legal weasel words included saying it's only for use by the intended recipient - and asks for confirmation of something by following a private link and pressing a confirm button, and if, as far as that supplier is aware, we have then done that and continued accordingly, then I would not be at all surprised if a court found that they were acting in good faith and we were not.
Yes exactly. Given the escalating sophistication of malicious actors and their robots, the scenario you present is viable today. That's dangerous in ways that I cannot even begin to articulate, and I'm not even well-versed in contract law.
> if we sent something to a customer and their own IT system actively simulated a sequence of actions the intended recipient would take to confirm something then I would hope that would stand up in court as well.
I absolutely hope that would not stand up in court.
To my knowledge a contract requires (at a minimum) a meeting of minds and consideration. You cannot agree to something you did not know about. You cannot come to a meeting of minds if you weren't told about it. That payment could be for anything. You might think it's for this one service and I might think it's for all services provided, not just the one I clicked on. Who will a court side with if you cannot prove that I, a human, agreed to it?
> Otherwise we enter a legal climate where no-one in business can ever say or do any little thing without some kind of verified human approval process.
Yup! And I say that's a good thing given that non-humans cannot enter into agreements, and there are plenty of non-humans who have no idea what they're getting themselves into in the current form of electronic agreements.
> It's hard to imagine how annoying and inefficient that might become for everyone.
You mean... you might have to actually employ people to verify that your customers are actual humans? That's a good thing all around. Unless you can't afford to employ people, in which case: your business does not have a valid business model.
> To my knowledge a contract requires (at a minimum) a meeting of minds and consideration. You cannot agree to something you did not know about.
If you use software that auto-accepts everything sent to it, it may not create a contract, but that doesn't necessarily mean it doesn't create a prima facie reasonable expectation by the other party that you agreed to a contract. Especially if the software you intentionally set up is specifically designed to simulate a human action like clicking a button.
If that has any negative consequences for the other party, you could be on the hook for various kinds of negligence or fraud. You might even find out you did have a contract. A corporation can be held to a contract if it's accepted by an employee who a reasonable counterparty would have expected to have the authority to accept it... even if that employee was specifically told not to accept it and therefore did not have that authority. Computers aren't the only things that can go wrong.
Try auto-submitting a bunch of Amazon orders and refusing to pay.
It probably doesn't apply for this particular email nonsense, because if you're just using some email provider that practically everybody uses, a court's naturally going to be inclined to say that you've met the ordinary standard of care. I mean, I would say it's negligent and stupid to use any of the big email providers, but I'm not in charge.
The people who should be in actual trouble would be Microsoft. But of course that's heretical and won't happen.
> And I say that's a good thing given that non-humans cannot enter into agreements, and there are plenty of non-humans who have no idea what they're getting themselves into in the current form of electronic agreements.
Billions of dollars worth of securities trading happens per day without human approval. I wouldn't be surprised if it's actually trillions. Nobody gets out of those contracts if they intentionally set up software to trade. Not even if their software has horrible bugs and submits orders they'd never have approved manually.
> If it doesn't have my hand-written signature then it's not an agreement.
Fortunately, many courts do not agree, given that a lot of disabled folks would be SOL under the "physical ink signature only" theory of agreeing to contracts. Not to mention a bunch of e-commerce considerations.
> can't "agree" to something I haven't read
I realize that is part of an ANDed condition, but it's absolutely possible for people to agree--in very definitive ways--to something they didn't read.
The alternative leads to: "Judges hate this one weird trick! Just say you didn't actually read it!"
Yeah, when our security team does phishing audits, clicking a link in the email without doing anything else is usually considered a soft failure. I guess this would cause everyone to fail and they'd have to ignore that result.
I guess that in some cases you can host the service on the tested company's intranet, so MS servers won't be able to connect to it (but nor would people working from home without a VPN).
I was thinking they meant some cloud office 365 thing, so not the local computer.
Regardless though, once you have a web browser 0-day, it's usually not very hard to convince a user to click on a link. Especially for a targeted attack.
Yeah, I thought it was running locally, but it runs in Microsoft's cloud.
However, that opens up ANOTHER similar attack vector as Bert is saying:
"We may wonder also what fun a determined hacker could have with Microsoft running random bits of JavaScript on their servers and allowing these to talk to the world (or to Microsoft itself even)."
Frankly, I'd prefer the service to include a one-time code instead of a link. Especially since links are often a "to verify your email click here", with "click here" being an Amazon SES, SendGrid or similar tracking URL, or a super-long URL I can only read, uncomfortably, in the tooltip that my email client shows, or by long-pressing it on my phone.
This also applies to the (really annoying) MFA emails (we really should stop doing this shit, and instead support at the very least TOTP), "magic" links and similar, which also require you to visit the link from the same device, when perhaps I am on a public computer without my email signed in; and getting into Gmail or Outlook to find your f-ing link would be annoying, because I have 2FA there too.
Instead, just send me a code. It can even be in the subject field, like "012345 is your login code for WHATEVER", and I'll see it at a glance on my notifications. Then I'll input it on the device where I'm logging in (such as the public desktop computer) and go on with my day.
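Generating such a code is trivial; here's a sketch using Node's crypto module (delivery and verification are elided, and "WHATEVER" is a placeholder):

```js
const crypto = require('crypto');

// Uniformly random 6-digit code, zero-padded so "012345" stays 6 digits.
function loginCode() {
  return crypto.randomInt(0, 1_000_000).toString().padStart(6, '0');
}

// e.g. subject line: "482913 is your login code for WHATEVER"
console.log(`${loginCode()} is your login code for WHATEVER`);
```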
“Microsoft”, huh? It would be kind of helpful if the author would bother mentioning what particular Microsoft product/service he is talking about and on what platform rather than just a vague reference to a “security scanner”. That’s what makes the difference between actionable content that could help affected people vs “boo evil Microsoft bad, Linux rocks” like it’s still the 1990s.
It happens in a couple of places; Teams, Outlook, and Exchange Online come to mind, but it wouldn't surprise me if it's in other places like OneDrive links or Word documents.