A family friend of ours recently fell victim to a phishing attack perpetrated by an attacker who paid for Google Ads for a search term like "BANKNAME login". The site was an immaculate knock-off, with a replay attack running in the background. She entered her 2FA code from the app on her phone, but the interface rejected the code and asked her for another one. In the background, this 2nd code was actually to authorise the addition of a new "pay anyone" payee, and with that her money was gone[0].
I have accounts with 2 banks, one uses SMS 2FA and the other uses an app which generates a token. I had thought that the app was by default a better choice because of the inherent lack of security in SMS as a protocol, BUT in the above attack the bank that sends the SMS would have been better, because they send a different message when you're doing a transfer to a new payee than when you're logging in.
So really the ideal is not just having an app that generates a token but one that generates a specific type of token depending on what type of transaction you're performing and won't accept, for example, a login token when adding a new payee. I haven't seen any bank with that level of 2fa yet, has anyone else?
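To make that concrete, here's a rough sketch of the idea (not any real bank's scheme, just an illustration; it assumes a TOTP-style secret shared between the app and the bank at enrollment, and the operation names are made up):

```python
import hashlib, hmac, struct, time

def code(secret: bytes, operation: str, t: int | None = None) -> str:
    # Fold the operation name into the HMAC input: a code minted for
    # "login" can never verify as an "add_payee" code, so a phishing
    # site can't repurpose what the victim types into the wrong flow.
    t = int(time.time() // 30) if t is None else t
    msg = operation.encode() + b"|" + struct.pack(">Q", t)
    digest = hmac.new(secret, msg, hashlib.sha256).digest()
    offset = digest[-1] & 0x0F
    value = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return f"{value % 1_000_000:06d}"

secret = b"shared-between-app-and-bank"  # hypothetical provisioning secret
# The server verifies against the operation it is actually performing:
assert code(secret, "login") != code(secret, "add_payee")  # almost surely
```

The point is only that the verifier checks the code against the operation it is about to perform, so a replayed login code fails the add-payee check.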
I guess perhaps passkeys make this obsolete anyway since it establishes a local physical connection to a piece of hardware.
[0] Ron Howard voice: "she eventually got it back"
Turns out ads aren't just annoying little acts of psychological terrorism that eat up a lot of bandwidth and computing power, they are also the #1 vector for spreading scams and malware on the web.
In other words: If you're trying to improve your security posture, installing an ad-blocker is one of the best things you can do. If you have less tech-savvy friends and relatives, I would strongly recommend setting up uBlock Origin for them.
Why isn't there any market fulfillment for "safe, non-intrusive ads" on the part of a vendor? Is it because it's not possible, or not worth the overhead, either because of cost or no effect on consumer behavior/blocking?
This seems like it ought to be low-hanging fruit. I would have less aversion to clicking on ads if I did not default to it being a security risk.
Now admittedly the Chrome one is a bit flashier. Although I haven't exhaustively gone through every homepage variant before Chrome, so it's possible there was something as flashy before Chrome as well.
Ah, either the author of the article I'm dimly remembering was mistaken, or, much more likely, they correctly inserted some caveat that made the claim true, and the precise caveat was just lost to my faulty memory and the mists of time. I didn't bother trying to check the Wayback Machine because, for some reason, I was convinced that Google would have requested that the home page not be crawled; thank you for doing it!
Intrusive ads are more profitable for the ad company, while the costs are largely borne by other parties. A strategy of privatizing the gains and socializing the costs is common in a lot of sleazy industries.
There is zero reason for ad companies or ad networks to be covered by any safe harbor provisions of the law. They should have 100% criminal liability for every mal-advertisement they send to a user.
Ads are a paid transaction and Ad Companies absolutely need to be held liable for the money that they take because of who they take it from voluntarily. Google should be ashamed at all the money they are making from scammers and criminals and other evils. They should have a terrible score at every agency remotely like the Better Business Bureau. They should be tarred and feathered in public opinion. The brand name should already be tarnished by all this Evil across too many years of negligence. Same goes for Meta/Facebook, though they do have some of the tarnish already, more than Google has managed to get to stick. (I think too many people still want to believe the "Do No Evil" lie and its lasting brand propaganda.) Other companies should be wary of working with Google because of that bad reputation. ("No, we won't be using GCP because Google does too much business with criminals.")
Yes, it is hard to scale Terms of Service enforcement. Yes it is a hard problem to solve finding bad actors at scale. That shouldn't be a free pass to just not do it at all. Especially when money is changing hands. If someone is paying you to be a bad actor they are either paying you to look the other way (called a "bribe" in most jurisdictions, and illegal in some of them) or you aren't doing due diligence before accepting bad money (called things like "laundering" and "embezzlement" at scale). "It's hard to scale" doesn't sound like a good excuse to do financial crimes, last I checked with banking regulators and is in fact the opposite (a larger crime); why should Google or Meta get a free pass in advertising because they don't want to put the work in and take the revenue hit?
> Yes, it is hard to scale Terms of Service enforcement. Yes it is a hard problem to solve finding bad actors at scale. That shouldn't be a free pass to just not do it at all.
What evidence do you have that they are "not doing it at all"?
They don't. DMCA safe-harbor covers copyright violations. All it takes is a prosecutor willing to use the CFAA to hold business as accountable as people.
It doesn't seem to be profitable, in part because the internet now consists of mega-sites and if your network doesn't serve ads on the mega-sites, no one is interested in your network.
Project Wonderful was a fantastic webcomic-focused ad network. From my perspective as a reader, being shown ads for other webcomics while I'm reading a webcomic was... a positive, really. A lot of webcomic artists ran Project Wonderful ads and nothing else. They shut down in part because of the rise of facebook.
most people publishing a website either cannot or do not care to host the ad server on the same domain, they just want to monetize the site.
things could get a lot better, but this self hosting suggestion in particular will never see wide adoption unless major hosting providers build it and host for their customers.
most people don't even bother to self-host/bundle stuff like their fonts and JS libraries unless they have a JS framework in the loop doing it for them.
Back before the web, most places running ads did it all in house: salesmen (mostly male), design and so on. Large buyers (McDonald's) might hire an agency to talk to all the little newspapers, but even the little ones did this in house.
> most people publishing a website either cannot or do not care to host the ad server on the same domain, they just want to monetize the site.
That's sort of beside the point, though. The site owner's commitment to running ads is useless unless there are people to view them, and, as long as unsafe ads are ubiquitous, the only safe advice to give to people is that they should run ad blockers everywhere. It doesn't matter that that isn't what the site owner wants to happen.
there are plenty of site owners that would voluntarily choose a more ethical ad hosting network if it was a good and easy option.
adding a pain-in-the-ass hurdle like "has to be hosted on the same domain" that 99.99% of people won't see the value of or understand is only going to hurt adoption of the better solutions.
> adding a pain-in-the-ass hurdle like "has to be hosted on the same domain" that 99.99% of people won't see the value of or understand is only going to hurt adoption of the better solutions.
Right, but that's my point—this is not a situation where visitors have to hope that site owners will be responsive to their preferences; rather, visitors are in a position to enforce their preferences via ad blockers, so there's no incentive for them to compromise on matters that, however poorly appreciated or understood, genuinely can affect security.
agree - but that gets to the larger point that mass adoption of anything like has to be fairly frictionless.
We are barely getting a third of people to use adblockers - you'd have to squeeze the ad server industry a lot more to make them change.
How to squeeze them? Get more people to use an adblocker that enforces serving from the same domain. How to get more people to use an adblocker? Make it frictionless, like enabled by default on browsers.
Then by squeezing them, they would be forced to respond by building tooling making it more frictionless to serve ads from the same domain, etc.
one suggestion more arrogant, ridiculous, and in bad faith than the last
you're now implying everyone hosting a website should pound the pavement to sell their own ads - or use a static export from an ad network and build it into the website themselves?
Sure maybe they should but they never will. Dream on.
> Everything we tried to build for you
You are a speck of dust in the universe of computing. Get a grip.
Google’s search ads have become explicitly more intrusive and less distinguishable from the real content over time, deliberately and knowingly.
It’s funny, that while many parts of Google are making improvements to the web security ecosystem, they are completely ready to throw it out of the window when it comes to making them more money.
TIP: I sold my senior mom on uBlock Origin because YT ads are so obnoxious. The added benefits are extra security and performance improvements. She was even able to understand that if something doesn't seem to be working right (like a banking site) "turn it off and try again".
> So really the ideal is not just having an app that generates a token but one that generates a specific type of token depending on what type of transaction you're performing and won't accept, for example, a login token when adding a new payee. I haven't seen any bank with that level of 2fa yet, has anyone else?
HSBC actually has this. All of their country-specific apps allow you to generate a different security code depending on whether you want to log in to the website, verify a transaction (e.g. transfer funds to a payee), or re-authenticate (e.g. to change your personal info, like your phone number).
> an attacker who paid for Google Ads for a search term like "BANKNAME login"
I tried buying Google Ads once out of curiosity because they gave me a free credit. It was crazy how many ridiculous stipulations and guidelines I had to work around before they'd accept my ad.
How are they that strict for me, but seemingly they'll sell to a phishing page that's impersonating a bank and targeting it to people searching for that bank?
Criminals are incentivized to evade detection. And you only get to observe the successful criminals and none of the unsuccessful ones. This makes it appear like the criminals are getting through the filters trivially. What you don't see is the work they are putting in to get a successful phishing ad up there.
Not to excuse failures, but there isn't a "it is easy for them but hard for me" situation.
I once tried to buy a domain which contained the word "Google" from Namecheap, but I was rejected with an error telling me that I needed to contact support and show that my use of the trademark was approved by Google. So instead I went to Google Domains and bought it from them with no issues.
Because the impersonator is probably a lot more sophisticated at this than you or I, and it's likely that 999 impersonators were rejected and this is just the 1/1000 who found a way around it.
The system probably produces a lot of false positives AND negatives.
And even at those failure rates (however anecdotal), economies of scale creep in: at a 1-in-1,000 success rate, a couple billion attempts per day still works out to a couple million successes per day, i.e. nearly a billion per year. The machine never rests and is fueled by creative people from all walks of life in every possible place on earth.
I have an ads account; I don't see them checking I haven't done a switcheroo on the landing page contents. I think I could easily put a JS redirect on the landing page, if nothing else worked.
They are reasonably strict about the keywords though -- I often go into a "verifying" stage when setting up the ads.
It's worth reminding your loved ones that the FBI specifically recommend using an ad blocker in search engines to avoid exactly this kind of scam [0].
> Use an ad blocking extension when performing internet searches. Most internet browsers allow a user to add extensions, including extensions that block advertisements. These ad blockers can be turned on and off within a browser to permit advertisements on certain websites while blocking advertisements on others.
This might not be sufficient anymore. Many online payments are rendered either on the shop's pages or on a third party payment provider, including 3DSecure implementations. These don't redirect to any sensible bank URLs.
Both of my banks use a payment flow which uses a hardware authenticator. But only one bank seems secure: it prompts for an amount and a reference and generates an OTP based on that. This is distinct from any other signing operations with the same authenticator. The other bank tells me to enter a 6 digit number (which is allegedly made up out of a part of the amount and a reference), but it is impossible to tell this apart from any other signing operation. It doesn't strike me as too hard to abuse that to either log in to my account, to sign another payment, or even to create a direct debit...
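What the first bank does is essentially challenge-response in the style of OCRA (RFC 6287), with the transaction details as the challenge. A minimal sketch of the idea (not the bank's actual algorithm; the key and payee strings are made up):

```python
import hashlib, hmac

def transaction_code(secret: bytes, amount_cents: int, reference: str) -> str:
    # The amount and reference form the challenge. The authenticator
    # shows them on its own screen before signing, so a man in the
    # middle can't swap in a different payee without the prompt (and
    # the resulting code) visibly changing.
    challenge = f"{amount_cents}|{reference}".encode()
    digest = hmac.new(secret, challenge, hashlib.sha256).digest()
    return f"{int.from_bytes(digest[:4], 'big') % 1_000_000:06d}"

k = b"device-secret"  # hypothetical per-device key
assert transaction_code(k, 150_00, "invoice 42") != \
       transaction_code(k, 150_00, "attacker account")
```

The second bank's scheme collapses because, without the device displaying what is being signed, one signing operation is indistinguishable from another.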
I ran into this. I'm trying to set up an account on wise.com. The way they want me to set up my bank for direct deposit is to type my bank's password into their site! I asked support if there was any other way to do this (for example the regular institution, branch, and account numbers) and they said no. But they reassured me that despite me typing the password into their site, they don't have access to it! (Ok, it was actually a Plaid iframe, but still not my bank. Clickjacking would also be very easy to implement, and there is no way for the average user to understand this.)
Then banks wonder why their customers get phished.
It's not even their site as far as the user can tell. It is a full-screen iframe. At least if it was their site a bank could say "plaid.com is fine". Still bad to make acceptable domains more than one but at least it isn't infinite.
I have a couple of bills to pay to the city, and the 3rd-party payment processor they switched to a couple of years back has a page that looks like it was made by a moderately talented 5th-grade web developer. I actually called them to verify I had the exact URL, and also told them the page looked like it was made by complete amateurs; it was kind of scary how poorly done it was.
I'd like to throw a little blame onto many namebrand websites.
Sites like Digital Ocean try to load dozens of third-party trackers for a single page. Their supposedly secure payment processing includes cross-site violations that are blocked by modern browsers.
When their credit card management pages fail to work with reasonable browser defaults or sane browser add-ons they immediately advise their users to strip out all security protections. You are supposed to just trust content coming from seemingly unrelated domains including multiple processors you may or may not have ever heard of. Paypal? Ok, plausible. Stripe? I guess, but both? Pendo? Sentry? Optimizely? Hexagon? Google Ads? Google Analytics? Six other different Paypal domains? Eight other Stripe domains? Multiple Typekit domains? TagManager? Square? The list keeps going.
Plenty of reasonable protections cause alarm bells left and right. The answer? Disable those protections. Train users to think they are the problem.
In the past I've heard people say the opposite - that if less computer savvy people are using google instead of URLs, it's a good thing.
The reasoning was it protects them against typosquatters and whitehouse.com situations. I guess when people were giving out that advice, google wasn't the way it is now.
Native app on phone > bookmarked site > typing site name (but only if using native browser password manager to auto-complete when the domain is correct).
Or something like that. I hate when I have to type site URLs from printed material (usually only doctor's bills, yet another reason to move to single-payer/socialized care) because I'm paranoid I'll get it wrong. Even more so with some of the janky URLs used by medical payment processors (contrived but realistic example: http://paymemoney.doctors.systemhealth.net/~drabdullahriaz/l...). Le Sigh.
"Always use a bookmark" has always been the best advice. I'm fairly sure getting a bunch of typosquatting domains is standard practice now for major (particularly financial) sites so typing in the site from a reliable printed source for the first access is fine (particularly since you can be extra careful if you only do it once). For using shared computers, I'd still personally recommend typing from a reliable printed source.
For logins, a major advantage of having browsers save login info is to recognize legit sites, because the login can be filled out (though it should be set to require a click on the login form and not just appear). Occasionally sites change in a way that breaks this, but usually just once, to use a subdomain, and it can be investigated more closely when it happens.
I think browsers should add a "site bookmark" feature that uses a well-known mechanism to allow all associated sites to be annotated in a way that shows up similar to how EV certificates used to work (but is entered by users). That would make it possible to recognize legitimate links into a site (as long as you annotate the correct site the first time), and there could be an option to be notified when leaving the annotated set of domains for particularly sensitive sites.

Currently the closest is bookmarking the home page, editing the URL to remove everything after the domain, checking that the edited URL is bookmarked (this is fragile since sites change the redirection quite a bit), and then holding the back button to go back to the linked page, although this might not work for additional domains (e.g. support sites are often on a subdomain). Ideally, the site bookmarks would also annotate search results before they are clicked. While "remember to check if the site is legit" is not ideal, it is a far better situation than "no way to tell if the site is legit". This could also be used to add a standard OTP entry mechanism that binds to a site and gives a warning if it is for a site you haven't given an OTP to before or stored login info for (and shows the site name when you enter the OTP).
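A sketch of the lookup such a feature would need (the domain names and store format here are made up; the hard part is the UI, not the code):

```python
from urllib.parse import urlsplit

# Hypothetical "site bookmark" store: user-entered label -> domains
# the user has annotated as belonging to that site.
SITE_BOOKMARKS = {
    "My Bank": {"mybank.example", "support.mybank.example"},
}

def annotation_for(url: str) -> str | None:
    # Return the user's label if the URL's host is (a subdomain of) an
    # annotated domain; None means "a site you haven't vouched for".
    host = urlsplit(url).hostname or ""
    for label, domains in SITE_BOOKMARKS.items():
        if any(host == d or host.endswith("." + d) for d in domains):
            return label
    return None

assert annotation_for("https://support.mybank.example/help") == "My Bank"
# A lookalike that merely embeds the real name gets no annotation:
assert annotation_for("https://mybank.example.evil.test/login") is None
```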
There was a time wherein the top result for facebook was a blog which faced a deluge of comments complaining that they couldn't log onto their facebook.
In my experience with Bank of America and US Bank they bounce you around to several totally different top level domains as you navigate through the web-based banking.
These are third-party service providers that the banks contract for various pieces of their online infra… And it is a complete mess in terms of conditioning consumers to be phished.
that's true and kind of a joke by now. BofA has at least two parallel bill pay systems (both seemingly white-labeled from someone else?) that keep redirecting through multiple domains; both are barely usable and take forever to load to do basic tasks. Security definitely takes a back seat when fighting with their UIs to get anything done.
Also, Google, as a search engine that is also the world's biggest advertising company, really should be able to manage not to sell ads to phishing scammers!
Maybe if a platform is large enough, it should be criminally liable for phishing attacks. I see no reason why Google should not be responsible for vetting each and every link they advertise at the top of their search results.
I don't understand - what would I block that's being delivered by my bank's native app? IF I can't trust their app, I can't trust the institution as a whole.
> So really the ideal is not just having an app that generates a token but one that generates a specific type of token depending on what type of transaction you're performing and won't accept, for example, a login token when adding a new payee. I haven't seen any bank with that level of 2fa yet, has anyone else?
Some banks in India have a separate “transaction password” that’s required to operate on the account vs just login and view balances. It’s not a rotating token, but it’s somewhat close to what you’re suggesting.
> My gut? It actually works, and people didn't like that. Users and orgs like authentication slightly broken so they can work around systems.
People like authentication systems that are secure enough to keep bad actors out, but not so secure that it keeps legitimate users out. It's got nothing to do with users wanting to break into a system.
It only works in a couple of situations and it's difficult to manage. When the site doesn't support it (which is almost all of them), when you don't have USB, when you lose or forget your YubiKey, when you don't have a phone with NFC or lose it, when you can't afford the device, or it's difficult for the user to set up, etc it fails. Now you need a different factor to finish logging in, which is probably weaker, so attackers will try to degrade this first factor to force the second weaker one.
It's a nice-to-have but not even close to a universal solution.
I like FIDO U2F as a second factor, although you always need a fallback of some kind in case you are stuck using a device without a USB port. I don't like it as a single factor, as most devices make it hard or impossible to back up your keys. Using Passkeys with Bitwarden is pretty interesting though, and appears to satisfy most of my concerns, as they're just stored in my password manager and move devices with me.
This almost happened to my S/O. Luckily I had set up NextDNS to block newly registered domains along with a list of uncommon TLDs, so the site got blocked.
> Luckily I had set up NextDNS to block newly registered domains along with a list of uncommon TLDs, so the site got blocked.
I go further: I generate tens of thousands of variants of all the "sensitive" websites we use (like banks and brokers).
All the "levenshtein edit distance = 1" and some of the LED = 2. All variation of TLDs, etc.
I blocklist most TLDs (now that most are facetious): the entire TLD. I blocklist many countries both at the TLD level and by blocking their entire IP blocks (using ipsets).
For example for "keytradebank.be", I generate stuff like:
I don't care that most make no sense: I generate so many that those who could fool my wife are caught by my generator.
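For the curious, a rough sketch of the LED = 1 part of such a generator (assuming plain-ASCII labels; IDN homoglyphs are a separate problem, handled by the xn-- blocking mentioned below):

```python
import string

def led1_variants(label: str, tld: str = ".be") -> set[str]:
    # Every edit-distance-1 variant of a domain label: deletions,
    # substitutions, insertions, and adjacent transpositions.
    alphabet = string.ascii_lowercase + string.digits + "-"
    out = set()
    for i in range(len(label)):
        out.add(label[:i] + label[i + 1:])                  # deletion
        for c in alphabet:
            out.add(label[:i] + c + label[i + 1:])          # substitution
    for i in range(len(label) + 1):
        for c in alphabet:
            out.add(label[:i] + c + label[i:])              # insertion
    for i in range(len(label) - 1):
        out.add(label[:i] + label[i + 1] + label[i] + label[i + 2:])  # transposition
    out.discard(label)  # the real domain is not a variant
    return {v + tld for v in out if v}

# Roughly 900 lookalikes for one label; crossing with many TLDs and
# the LED = 2 variants is what gets you into the tens of thousands.
print(len(led1_variants("keytradebank")))
```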
I then force the browser to use the "corporate" DNS settings: where DoH/DoT is forbidden from the browser to the LAN DNS. I can still use DoH/DoT after that if I feel like it.
So any DNS request passes through the local DNS resolver (the firewall ensures that too).
My firewall also takes care of rejecting any DNS lookup for an internationalized domain name (by inspecting packets on port 53 and dropping any that contain "xn--"). I don't care an iota about the legit (for some definition of legit) "pile of poo heart" websites.
My local DNS resolver has about 600,000 entries blocked, something like that.
I then also use a DNS resolver blocking known malware/porn sites (CloudFlare's 1.1.1.3 for example).
So copycat phishing sites have to dodge my blocklist, the usual blocklists (which I also put in my DNS), then 1.1.1.3's blocklist.
P.S: some people go further and block everything by default, then whitelist the sites they use. But it's a bit annoying to do with all the CDNs that have to be whitelisted etc.
If the business model of your search engine is based on ads, your (search user) relationship with them is fundamentally adversarial. Ad blockers will get you some temporary respite, but it doesn't change the nature.
This is an observation from a happy kagi subscriber that doesn't use an ad block.
FYI, passkeys do not require any dedicated hardware. You can store them in software-only password managers like 1Password or Bitwarden.
Where they are nice though is that they are also tied to a specific origin (domain), so a phishing site can't ask for the real passkey. But I've never seen a passkey be a primary source of authentication, so attackers can always fool the user into falling back to some weaker auth (email reset or 2FA).
> So really the ideal is not just having an app that generates a token but one that generates a specific type of token depending on what type of transaction you're performing and won't accept, for example, a login token when adding a new payee. I haven't seen any bank with that level of 2fa yet, has anyone else?
My local German bank uses an app specifically for 2FA. When I log in I have to approve the login within the app, and the website redirects automatically. It shows me whether I am approving a login or a transaction, with all the transaction details. Since I don't enter my second factor into the browser, a replay wouldn't be possible, and it would be VERY obvious to spot the difference between approving a login and approving a transaction.
German Sparkasse for those that care.
Instead, the 2fa app should show you the action you are authenticating, just like the SMS version.
But actually, we have put way too much stuff on the (inherently transient) web. What solves your problem is permanent client-side storage. Your friend shouldn't reach the bank through a google search.
Was this in the US or elsewhere, what was the amount and how long did it take to notice? Just curious.
In the US the bar to pull money out of an account is pretty low. Most banks would allow reasonably-sized transfers out with just routing and account numbers. I was stunned by this, but this is the reason utilities and stores can pull your money without you even talking to your bank. Just give them the info. And that information is not secret, it is printed on your every check.
The flip side is that for those "convenience" and service payments the money is easy to get back: banks, at least traditional ones, will bend over backwards to prevent being seen as enabling fraud.
> Was this in the US or elsewhere, what was the amount and how long did it take to notice? Just curious.
It was in Australia, the amount was thousands of dollars, and she noticed when she was asked to enter yet another code; all of a sudden it made her snap out of her "autopilot", take notice, and look at the URL and other details. So as soon as she realised that something was fishy, she logged into the correct site, then saw the money was gone.
> In the US the bar to pull money out of an account is pretty low. Most banks would allow reasonably-sized transfers out with just routing and account numbers. I was stunned by this, but this is the reason utilities and stores can pull your money without you even talking to your bank. Just give them the info. And that information is not secret, it is printed on your every check. The flip side is that for those "convenience" and service payments the money is easy to get back: banks, at least traditional, will bend over backwards to prevent being seen as enabling fraud.
This was a "pay anyone" transfer. So money was being transferred to a bank by BSB/Account number in the background. The bank required a code when a new Payee is added, but the codes were not differentiated, so she was asked for a code to login, then told the code was wrong and asked for another code. In the background the real banking site to which her actions were being replaced had successfully logged in and had initiated a transfer to a new Payee. The real banking site asked the attackers for a code to add the new Payee, the fake banking site asked her for a new code to login.
The thing that really enabled the attack is that the same code generator was used for both codes, without any indication that a different action was being performed.
For a long time (still?) Kraken also refused to add SMS 2FA as an option due to its weak security.
I still don't see how that's worse than no 2FA at all, which was an option, but I appreciated that they were banging the "SMS 2FA isn't very secure" drum.
It’s worse in a lot of implementations because SMS is often used as part of a recovery flow in cases where you lose the first factor.
I find it more secure in some contexts to never give a company my phone number at all if possible, so that it simply can’t be used as any kind of authentication no matter what.
> So really the ideal is not just having an app that generates a token but one that generates a specific type of token depending on what type of transaction you're performing and won't accept, for example, a login token when adding a new payee.
My understanding of EU regulation is that it effectively requires this by requiring the 2FA to validate not just the identity but also the transaction (such as an amount, or destination account).
Unfortunately it means that all banks use SMS. We did have card reader 2FA that also did this but it's falling out of use because users don't like having to carry a card reader around.
Yes, the Payment Services Directive requires "dynamic linking" to a specific amount and a specific payee in article 97, and the RTS in article 5 go on to say that the payer should be "made aware of the amount of the payment transaction and of the payee".
The most elegant implementation of this I saw was card readers with a 2D (colored) barcode scanner; the barcode contained the transaction details, which the card reader would display on its screen. This was an effective control against MITM. But even I myself always misplaced the card reader.
So now, most confirmations are done using the banking app. Even if I use a credit card by filling in its details on a US website, I get a push notification on my phone to confirm the tx on my app.
The app asks for a password or uses biometrics, so that's 1FA, and the app is enrolled at some point, so the token on your phone (I presume in some secure storage) counts as the 'thing you have' for 2FA.
Enrolling the app nowadays usually entails scanning your ID card and a 'live selfie' (blink your eyes). And of course you get notified (via e-mail) that you just installed the app on some device.
I preferred the blinky bars; the reader for them is tiny, not locked to an account, battery lasts what feels like forever, and they're cheap enough that you can trivially eat a loss (from forgetting where it is or leaving it in a place where it disappears before you get a chance to collect it).
The blinky bars were great! Already forgot about those. If I remember correctly, a problem with those were people with displays that had funky refresh rates? I think that in the current era that would be much less of a concern.
Conceptually it's great to have an actual physical, airgapped device under your full control as your signing device.
The difference is, it’s a pain, has happened twice in 5 years, and I know what triggered it, and it doesn’t happen with every 3d secure purchase or login.
This is not true; I have used multiple financial services that use different codes for different uses (Raiffeisen, K&H), or apps that receive a server-sent event and show the transaction details for local approval (Wise, Fineco).
The way both my banks work is that I log into the bank, do something that requires confirmation, and then I need to go open my app to confirm it, and it shows all the details for what exactly I'm confirming in the app.
Banks like that exist. mBank from Poland does in-app approve/reject - similar to what you get on an Android phone when you try logging in on a new PC.
They also send phishing warnings when they find active campaigns.
That said, plain old social engineering works well on people. Last week one small-scale influencer fell victim to a bank transfer scam. She got phoned by someone claiming to be from her bank, telling her that her account was targeted by hackers; then a 'head of cybersecurity police' phoned her and asked her to transfer her savings to a 'secure account'.
I am continually surprised that in a country as litigious as the US, companies can continue to sell advertising space and then just shrug when the buyer uses that space to defraud someone.
just this week, I clicked on the 1st search result ad for "amazon" in google search. It led me to a windows-themed "Virus detected" amazon clone. I'm not using Windows. I was able to close the tab, but it left a bad taste in my mouth for google search results.
(I know I could have just typed "amazon.com" and gone directly. But browser autocomplete makes it a tiny bit easier to use the omni-url bar and just type "amazon" than "amazon.com")
I wish we could break people of the habit of searching for websites that they visit all the time and using search results to navigate to them.
Maybe a secure browser profile that blocks search engine usage and can only visit sites in bookmarks or a whitelist; if you get a new bank and it's not on the common whitelist, you have to explicitly add it to bookmarks.
Use your Chrome Secure Profile™ for banking, and refuse to auto-complete payment info on the insecure side.
When I tried to pay on a website a while ago I kept getting "unknown error". Fast forward about an hour waiting in the helpdesk phone queue, and it turns out you need to set up a special password for that. This is not an "unknown" error, it's a known error... Why can't it just show me? Sigh.
I wonder how many people they've needed to "help" with this. Yes, I know there's tons of old code in many banks, but they would have saved money if they had a single developer work on this full-time for a month or something. Support people may be cheaper than devs, but they're not free.
> the ideal is not just having an app that generates a token but one that generates a specific type of token depending on what type of transaction you're performing and won't accept, for example, a login token when adding a new payee.
I think at least some UK banks will do this. When I've done it using a card + card reader, you select the option to choose which type of operation you're trying to do. And if you're just trying to login it just displays a rolling code, but for authorisation of particular events it will take the form of a challenge/response, i.e. you have to select the operation on the card reader + enter a code provided from the site. This should I think prevent _simple_ replay attacks.
I even think for some transactions such as transfers over a certain amount, you have to enter the amount into the reader as part of the code generation.
Yes, my AIB card reader works like this. When transferring money to an unknown account I also need to enter the amount and "sign" that with the card reader. For adding a new payee it's a challenge/response.
I read it as: the first 2FA code was used to log in, then the system quickly attempted to add this new payee, which required a second 2FA code, so the phishing site quickly prompted for another code, claiming the first was rejected.
Yep. A few simple steps, like an extra SMS (or email) code to add a recipient, or an email notifying about the change: not perfect, but it will make this harder to pull off. Not sure what a "pay anyone" payee is; I don't think it's a thing at my bank. They could try to scrape the account number though; I think in the States that may be enough to try to debit someone's account.
Your solution wouldn’t have prevented the attack you describe unless the user can immediately tell the difference between login 2FA codes and “new payee” 2FA codes and knows not to enter one code into the wrong form.
Well, that's what I'm saying. When I get an SMS from one of my banks for example it says "your code to transfer X to Y is ABC" or "Your code to add a new payee is ABC". In this case she had a code generating app, but the codes were not different for login versus other high risk actions. The same is true for my other bank, which has a code which you use to login, and the same code generator, with no distinction, is used for example when you make a large BPay transaction.
Passkeys or FIDO hardware tokens are the solution, as written up by Google ages ago: the credential is bound to the site's origin, so the browser will never produce a valid response for a phishing URL the way a user can be tricked into retyping a code.
I feel like PassKeys and browser-integrated password managers both solve this problem better already. And yeah they're extra things to do, but so is this.
Because banks are financial institutions and every decision they make is based in that. If the cost of insurance is less than the cost to actually secure the system, they will choose that every time.
Banks and payment processors have some of the worst technical debt. For example, a lot of transactions are processed using the ISO8583 standard, a binary bitmap-based protocol from the 80s. The way cryptography was bolted onto this was the minimum required to meet auditing standards: specific fields are encrypted but 99% of the message is left plaintext without even an HMAC.
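To give a flavor of how bare-bones that is, here's a simplified sketch of reading an ISO 8583 primary bitmap, assuming the common ASCII-hex encoding (real deployments vary, and this skips secondary bitmaps and field parsing; the sample message is made up):

```python
def parse_primary_bitmap(msg: str) -> tuple[str, list[int]]:
    # MTI (4 chars) + primary bitmap (16 hex chars = 64 bits); bit i
    # set means data element i is present. Everything after that is
    # concatenated fields, almost all plaintext, with no MAC over the
    # message as a whole.
    mti, bitmap_hex = msg[:4], msg[4:20]
    bits = int(bitmap_hex, 16)
    fields = [i for i in range(1, 65) if bits & (1 << (64 - i))]
    return mti, fields

# 0200 = financial transaction request; this (made-up) bitmap flags
# DE 2 (PAN), DE 3 (processing code), DE 4 (amount), DE 7, and DE 11.
mti, fields = parse_primary_bitmap("02007220000000000000164111111111111111")
assert (mti, fields) == ("0200", [2, 3, 4, 7, 11])
```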
I don't work at a bank, but I do work in fintech, and this strikes me as excessively cynical. The reason banks are slow about this stuff is not necessarily because "it's cheaper" (though maybe it is), but because the complexity of any change is simply off the charts: money-related logic must work correctly, to a far higher standard than almost any tech company. It makes you conservative, in the same way that demanding 99.999% uptime is exponentially harder than demanding 99%, and makes moving quickly essentially impossible.
(Also, of course, they're probably working on COBOL stacks that were written in 1978.)
For a bank, pile on top of that mountains of (often conflicting) regulatory review, such that just about any change sounds the alarm for armies of nearby lawyers to swarm upon you and bury you in paper. All it takes is 0.1% of annoyed users filing complaints that they can't access their accounts, and you might well be looking at a steep fine, a class-action lawsuit, or worse.
I've noticed my browser has started recognizing URLs that look similar to legit URLs of bigger companies and then warns me that the site is likely a phishing site. Sometimes it gets false positives for URL shorteners (like goo.gl instead of google.com)
My bank app asks for different tokens for different operations. A code for login, a code for transfers (the code needs to be generated with the payee account number as input). So it’s not a problem of tokens vs SMS.
I called this a "replay attack" because it sounds more like this:
"A replay attack in a network communications setting involves intercepting a successful authentication process—often using a valid session token that gives a particular user access to the network—and replaying that authentication to the network to gain access"
Even though this wasn't a session token, it was an authentication process and token, gathered from a fraudulent source and replayed to a valid one.
MITM is:
"A man in the middle (MITM) attack is a general term for when a perpetrator positions himself in a conversation between a user and an application—either to eavesdrop or to impersonate one of the parties, making it appear as if a normal exchange of information is underway."
So to me a MITM would be more like using a wifi access point to access the correct banking URL, but the service carrying the data was acting maliciously.
I'm not familiar with the nuances of terminology, but I would expect MITM to only apply when you (and your computer) actually attempt to connect to service A, and a malicious actor X intercepts that communication. Phishing is different in the sense that you connect to the phishing page directly, and it may or may not replay some of your inputs to the actual service it is phishing.
I guess theoretically phishing could be considered MiTM, but the latter term generally implies the attack is fully transparent to the user, whereas phishing convinces the user to insert the malicious party themselves.
Oh nah, I just check the lock icon in Firefox, and it's a pretty unusual (and not publicly accessible) cert authority, so I'd notice if it's a different one.
I read those messages. The ones from one of my banks that uses SMS and differentiates them, says "your code to do BLAH is BLAH". I was actually saved from phishing once because my credit card company included the vendor and the amount in the transaction SMS and it was for a different site and a much larger amount than what I thought I was spending.
It's really incomprehensible: whatever revenue Apple is getting from running this protection racket, I mean ad network, must be minuscule compared to the potential damage they cause to their users and brand. But I guess that's next quarter's problem.
Their ad business is generally booming, but the reputational hit can't be worth letting these in. OTOH, maybe it's either all-in or don't bother; there's no way to staff reviews at the scale you need for an ads business to work.
I've long suspected that companies which force SMS 2FA don't really care about security, they just want your phone number, and 2FA is a convenient bit of security theatre to make you give it to them.
That’s definitely part of it. Phone numbers are the new SSNs - unique identifiers that never change and connect you across services - except you also hand them out to everyone you meet. One might say it seems like a bad system!
> the new SSNs - unique identifiers that never change and connect you across services
And just like SSNs both the "unique" and the "never change" are only true of the spherical cow version of the system. Phone numbers are actually substantially worse at being unique and unchanging, what with people in families sharing a phone or trading phone numbers, people forgetting to transfer the number when switching carriers, people intentionally switching numbers in an attempt to end spam calls... The number of ways to break the assumed invariants is actually quite high.
See Falsehoods Programmers Believe About Phone Numbers [0].
Since COVID, I've had 3 new numbers. I'm sure that's an edge case, but it happens. My second number came when I brought my own device to a pre-pay plan on a new carrier that said my number was not able to be ported. Then, when I upgraded phones, the pre-pay number was not eligible for carrying over to the new device.
I know I'm not the first person to be unable to port a number, so calling a phone number something that never changes is a bit skewed
Yeah, I didn’t mean that they never change in reality, but that they’re treated as if they never change. (Same with SSNs.) I can only imagine how many hundreds of services I’d lose access to if I lost my number. Hours and hours talking to customer service.
They often want your phone number for anti-fraud or anti-sybil reasons. If they have free accounts, requiring a phone number helps prevent you from creating a new account to evade a ban and makes it easier to link bad behavior across accounts.
Only by those who never worked on these kind of services. Running something like a webmail service is being flypaper for dickheads. As soon as you gain any sort of popularity you will have some very hard and sharp lessons about the lengths spammers will go through to make abuse your service.
First rule of designing anything: "if some cunt can make a buck by completely fucking over your system then that cunt will completely fuck over your system because that cunt is a cunt."
You don't even have to be running a webmail service, the instant you use any service to send an email with even one user-controlled field (even something as innocuous as their name) you already have a problem.
Yeah, I meant "webmail" in the broadest possible sense. And it's even broader than that: anything that allows making anything public really: from forum comments to Instagram to WordPress sites to Wikipedia.
Remember that for about ten years there was a person who consistently and frequently inserted images of ceiling fans into random articles.
They force SMS 2FA because it is a lot more frictionless to assume that your users have a phone number than to assume that they have a 2FA app installed on their phone and know how to manage those tools. It's also easier to support.
Ugghhh, "frictionless" as if we're talking about logging into Candy Crush here. Are the "Growth Hackers" infiltrating banking apps now? I don't want my bank software to be frictionless. I want it to be secure.
The person I was replying to said "they just want your phone number", which to me implies that we're not talking about 2FA at the level of banking apps, as the bank already has your phone number, among plenty of other details. Most banking apps I have used also do not use SMS 2FA.
Or they were using 2FA by email until an auditor told them "that's not 2FA" at which point they realized that their middleware to send notifications supports SMS as well as email.
I don't quite understand that. It's not like sending an SMS to my phone is any more secure or harder to access than sending an email to my phone. Additionally, many seem to want a "real phone number", not a VoIP number like Google Voice.
Meanwhile treasurydirect.gov still just uses a verification code via email. If it's good enough for the Treasury, it's probably good enough for a bank.
> It's not like sending an SMS to my phone is any more secure or harder to access than sending an email to my phone.
It's not, but much like 'fax' hanging on in the medical environment because it has been labeled "secure" by the regulations, there is a line in some regulation rule somewhere that labels "SMS" as "secure" but does not label "email" as "secure", and because they do the minimum to meet the regulation, they go with "SMS" and go on about their day.
'Fax' is 'exempt', not secure. So of course many places will take the relief valve, even though the service is a poor fit, poor quality, and horridly insecure. When it works right, the records aren't even in a sealed envelope, just sitting in the output tray of some printer somewhere for anyone to see!
Sending a one-time code by email means that anyone with access to the inbox can both reset the password (first factor) and intercept one-time codes ("second" factor).
Using SMS creates an opportunity to dissociate those two situations.
You can access e-mail from outside of your phone, but usually not SMS, unless it's synced to the cloud. If your e-mail gets hacked, then all of your e-mail-based 2FA everywhere would be useless.
The perspectives and interests of NIST and the things that a service provider has to worry about with respect to their customer/user experience are not necessarily aligned.
Customer: "What do you mean two factor app? I thought the code was supposed to come to my phone?"
Support: "It did, but we no longer support SMS two factor authentication."
Customer: "But I had no problems when the code came to my phone."
Support: "Yes, but NIST recommends that we don't use SMS 2FA"
Customer: "What's NIST? I'm finding this very frustrating, I need to get into my account."
> Customer: "But I had no problems when the code came to my phone."
"Unfortunately, many of our other customers, and customers of other financial institutions were not correctly protected by the code alone.. and were still getting scammed or confused.. and losing _all_ their money."
> Customer: "[...] I'm finding this very frustrating, I need to get into my account."
"That is understandable, but we take the security of your account and your personal information very seriously, and this requires us to make changes to maintain that security in the face of new threats and actors as they evolve."
It is very much software-world-thinking to believe that dismissive Kafkaesque responses that just shut down the conversation without addressing the fundamental issues around usability and customer satisfaction will ameliorate the situation.
For a lot of service based businesses they see their customers face to face and it is imperative that the customers have a seamless experience. Imagine having a business where customers who can't sign into some online system you have are bringing in old Android phones and wanting help from your staff members on how to get 2FA set up on those devices and it is easy to understand why many such businesses settle on SMS based 2FA.
Most large banks already largely offer non-SMS 2FA through their companion mobile apps. This is about pretty much every other service you have that does not have a dedicated mobile app and doesn't want to teach their users how to manage your 2FA codes.
The problem with the above statement is that merely “offering” a better option doesn’t solve the issue. The mere presence of SMS as one option gives the same risk as if it were SMS-only. An attacker can choose the SMS option (after slipping $100 or even just a fake ID to the teen at the phone store to sim-swap you) even if you never would use it. It needs to be, at minimum, able to be permanently disabled on demand.
I suppose, but in practice nearly all systems I’ve seen allow the attacker to opt for SMS, on demand, unless you’ve been allowed to not put in a phone number on file. Which is not always the case.
it would be most convenient to have no 2FA. hell, skip the password too, then nobody will forget theirs. security is tradeoffs, but NIST says "if you take security seriously, you should not use SMS 2FA".
It’s all a gradual improvement over time though, as both companies are able to adopt better practices and customers become accustomed to it. Many, many more people are using TOTP than a decade ago.
> Bank apps not running on phones where security has been compromised seems entirely reasonable.
I have root access on my laptop and I log in to my bank's website just fine. Making apps not run on rooted phones is just perpetuating the cycle of forcing users to comply with the restrictions placed upon them by Apple and Google. Root access != less secure. It means control over the device you paid for and own.
I don't think the root permission ban is for the website. In most cases it's about how your phone + the bank's app has become the new hardware token / key generator. Before smartphones I could log on to the bank's website but any transaction will have to be authenticated using a hardware token (presumed secure). That's moved into an app now.
...and you're probably less safe as a result. In the 90s and early 2000s, running as root (admin) was the Windows default for home computers, and that's why we had such a malware and spyware problem then. It wasn't until UAC limited user and app permissions on purpose and Windows Defender became standard that it began to get better.
Root access for you means you have control, sure. But it often does mean you're less safe too, depending on your OS's security model and what other apps can run as you. That's why limited sudo and other "root ish, but only in small doses" models were made. And that's assuming you know what you're doing.
For Jane Grandma, root of any sort means power she'll never need and a footgun to lose her life savings with. It's a good thing mobile phones protect ordinary users from themselves. Most people don't need root access any more than they need the ability to reprogram the ECU on their car.
Besides, on a rooted phone, I thought there were already ways to fool an app into thinking it's not rooted...? Or did they change that?
Only if I grant them root, which I'd only do to a very small number of open source apps
I instead have to use my desktop web browser, and desktop operating systems have a far worse security model than Android. No special permissions are generally needed to capture the screen, capture/inject keystrokes, or open .mozilla/whatever/cookies.sqlite
So my phone is still the significantly more secure environment. The fact that I have the ability to grant root does not make it "compromised"
But that's exactly the point. The bank doesn't know what you've granted root. It doesn't know if you're a security researcher, or somebody installing pirated apps with spyware.
The bank can't enforce that on desktop web browsers, but at least it can on mobile.
Hot take: rooted phones are inherently less secure. That does not include GrapheneOS btw, since you don't have root privileges on an official build of GrapheneOS.
I'm much less worried a hypothetical attack where I accidentally give sudo access to a malicious app than I am about the well-established ongoing attacks where Google violates the entire population's privacy, or the regular stream of malware that makes it into the official app store.
Not that long ago it was considered a problem to have a rootkit on your machine [1]. Nowadays it's getting hard to acquire a device that hasn't been rootkitted at the factory.
There's always a root account, the only issue is who has access to it.
So... phones where a corporation has root are more secure that phones where the owner has root, you say? Secure for whom? For the user? Seems obviously wrong. It's more secure for someone else to have power over you?
Again, you're just a few words from "Freedom is slavery".
> So... phones where a corporation has root are more secure that phones where the owner has root, you say?
You're putting words in my mouth that I explicitly rejected when I said "that does not include GrapheneOS".
Just to prevent the follow up "well actually GrapheneOS is an organization": they don't have any kind of root access to GrapheneOS phones. The only thing they can do is push system updates, which you can (1) reject and (2) verify if they are the same updates being pushed to all users, to avoid targeted attacks.
> Secure for whom? For the user? Seems obviously wrong. It's more secure for someone else to have power over you?
Yes, secure for the user. Sure, power users that very carefully review any system mods they install with root powers would have the same level of security as with a non-rooted phone.
But most people won't read the source code of root apps/extensions they install.
It's easier to tempt mobile phone users into installing "cosmetic improvement/customization whatevers" that happen to require elevated privileges than it is desktop Linux users.
It's well known that many Android apps bundle near-malware that slurps all data possible, and will ask for root privileges if that is detected.
The fact is that mobile phones tend to contain more sensitive data than desktop computers (and are thus significantly more secure by default than Linux/Windows computers). Contacts, private messages, photos, etc. It's a more valuable target, so more effort is put in developing malware for phones.
> Hot take: rooted phones are inherently less secure.
My computer is rooted, making it inherently less secure than my phone, yet I have no trouble accessing my bank website. What threat is a bank protecting against by disallowing app usage on a rooted phone?
When I access my bank from my computer, I need to authenticate using a secure token, where my options are an RSA-style dedicated device or a secure (non-rooted) smartphone.
* computers have always been "rootable", so the banks can't do anything about that
* phones work with "apps", which are viewed as more dangerous than websites. So they came up with the concept of app curation (monitoring large appstores for lookalikes and viruses), and by rooting/sideloading you are violating that model.
* Repackaging a legit app into a malicious lookalike is relatively easy on Android, but harder to distribute if you combat rooting/sideloading.
* if your phone is rooted the bank may be concerned that you could be more susceptible to installing dangerous things, including apps that intercept your 2fa.
You can argue whether these points held up over time (or whether they make things more secure), but that seems to be why they do it. It costs them relatively little to try to combat rooting, but they're potentially liable for losses if people get phished/hacked, so...
The article conflates two issues that have different security implications.
The "1-click login" links are a concern and just having access to the SMS would be enough to take over things like WhatsApp.
But 2FA codes seem notably less worrying.
They are the second factor and require an attacker to have the password too.
For these cases I'm much more relaxed about the use of SMS and the risks of interception.
I didn't think of 2FA as being protection against password reuse. People should still avoid reusing passwords and change them if they know of a breach.
Are there really attackers who are picking up breach databases and then sim-swapping to get the 2FA as well?
I think 999 out of 1,000 of those databases are the same data set. I lost a password ten years ago in a blog breach and I get a notification almost monthly about it showing up again and again.
In the UK it seems that almost all online banking transactions are now verified by SMS. As far as I can tell this is required by law, and replaced the previous, bank card + card reader + pin verification system, which was not only more secure but also did not depend on having a working mobile phone with signal.
I hope that this will in due course be recognised as a terrible mistake and rectified. Unfortunately my hope is only faint.
It is amazing what a little cooperation between public and private institutions can achieve. It is the only way to log in and do 2FA for government services and most banks (some legacy systems are still supported by banks), and it works great.
It is incredible there is no system like this for every country, heck it is incredible that there isn't a system like this for the whole EU.
The EU is introducing the Digital Wallet for this. I hope it will be nicer to use than the Finnish version of BankID. It would also be nice to be less dependent on banks and other private rent-seeking institutions.
It is pretty complicated for the average person to install custom certificates on their OS. I didn't even know that BankID supported it; pretty much everyone uses the app.
The CCC's description of this as only 2FA-SMS is incorrect though. It was not only Twilio Verify (the 2FA API) that was affected; it was all SMS sent through this vendor.
It is not, but CCC is implying that this provider was only used for 2FA. Sorry, I was getting a bit ahead of myself here: this was earlier exposed as a breach of Twilio's vendor (IdentifyMobile). In Twilio's case, they offer an API for 2FA, Twilio Verify. I wanted to clarify that this breach covered not only 2FA (the Verify API in Twilio's case) but all SMS sent through IdentifyMobile.
It is an SMS issue in the sense that OTPs and hardware tokens don't require their rotating secrets to be written to some potentially publicly-readable datastore. This specific attack vector simply does not exist for those technologies.
I don't see why SMS would need to write to a store, public or not. One can implement SMS 2FA using TOTP, for example; it's just that the TOTP secret is not shared with the recipient.
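A minimal sketch of that idea in Python, using only the standard library; everything here (the demo secret, the printed send step) is illustrative, and the point is just that the server derives the code at send time, so neither the rotating secret nor the codes ever need to be written anywhere:

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
        # RFC 6238: HMAC-SHA1 over the current time step, dynamically truncated.
        key = base64.b32decode(secret_b32, casefold=True)
        counter = struct.pack(">Q", int(time.time()) // interval)
        mac = hmac.new(key, counter, hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
        return str(code).zfill(digits)

    if __name__ == "__main__":
        # Throwaway demo secret; in this scheme it stays server-side only.
        print("SMS body:", "Your code is " + totp("JBSWY3DPEHPK3PXP"))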
Yes, it is not a technical necessity to store these messages. But there is the option to do it (and some people are evidently doing it). The point is that for one-time passwords, it's not even an option, no matter how hard you try. You simply cannot make this class of mistake. Unless you try really, really hard to fuck up and, say, for some very weird reason, exfiltrate the one-time passwords generated on the user's device every few seconds.
What if my OTP base data is exported to a publically-readable datastore? I could be tricked into exporting the QR codes from Google Authenticator, for example. Though I see that there are significantly better 2FA methods, it does seem like the biggest flaws with SMS 2FA are in the insecure implementations, not the actual concept.
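For concreteness on that export risk: those QR codes are just otpauth:// URIs with the long-term secret sitting in the query string, so anyone who sees an exported QR owns the factor permanently. A quick illustration (the URI is made up):

    from urllib.parse import parse_qs, urlparse

    # A made-up enrollment URI of the kind an authenticator QR code encodes.
    uri = "otpauth://totp/ExampleBank:alice?secret=JBSWY3DPEHPK3PXP&issuer=ExampleBank"

    params = parse_qs(urlparse(uri).query)
    print(params["secret"][0])  # the long-term secret, in the clear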
This causes far more harm than good; even this article admits SMS 2FA is better than nothing. For 99.99999% of use cases it is fine; SIM swapping is an extremely targeted attack. If you are the type of person who can be targeted by an attack like that, don't use SMS for anything important. Simple.
This would still be a targeted attack if exploited, and arguably much more difficult than sim swapping. And yes, I did RTFA, and my point still stands.
Random thought I’ve been having as we keep bringing this topic up these past few weeks…
How interesting or uninteresting would bi-modal 2FA be?
That is: you receive a code by text and you enter the code by email…
I haven’t spent any time to work out whether this significantly changes the attack surface but… At first glance it does seem like you would need to own two different account types…
… So I guess a first question would be: does this exist anywhere? Has anyone ever seen this or done this?
Bi-modal 2FA is already here: you receive a code by text and you enter the code in your web browser (or a proprietary app like a banking app).
Moving from web browser to email for entering the 2FA code means that you (the user) have to make sure to send email to the correct address, not one provided by the attacker.
I like that IdentifyMobile's website[0] isn't even protected with a valid HTTPS cert. Falls back to HTTP. Oh and it's WordPress. And last updated 2015. Guess that's all telling. Nice that so many important companies used this crappy provider for such things.
Out of curiosity, I just tried with ChatGPT 4o... Screenshot of a legit banking website and asking it to describe it to me, to give me the exact URL in the screenshot and to tell me if it's legit or not.
It described the whole page to me, explaining that it was a login page for bank X in country Y. It compared the URL with the bank's name, etc.
I know everybody's doing it because they don't know better, but it's a terrible idea to make the inductive leap from one successful sample to some abstract sense of what a ML model is suited for. Especially for anything important.
As a sibling comment noted, performance will almost certainly be sensitive to temperature (randomness), exact prompt phrasing, exact sequence of messages in a dialog, and the training-data frequency of both the site being analyzed and the phishing approach used.
One could conceivably train a specialized ML model, perhaps with an LLM component, to detect sophisticated phishing attempts, and I would assume this has even been done.
But relying on a generic "helpful chatbot" to do that reliably is a really bad idea. That's not what it's for, not what it's good at, and not something its vendor promises it will remain good at, even if it happens to work today.
That's called a hallucination. AI models are simply guessing what to say, with differing sizes of word banks.
At its best, it may "recognize" the top 90% of sites. But it's not a bulletproof solution, and it shouldn't be trusted given that it can produce both false positives and false negatives.
My best operational security advice is not to click shit in your inbox and to navigate directly to the hostname you trust to do sensitive actions.
In Singapore, the banks have moved away from SMS entirely, even for notifications. Now they have to come through the app.
But for login you basically register a single phone, download a certificate to it and that becomes your second factor. If you login via web or another phone, you need to approve the login from that phone.
Of course if you lose the phone (or it's damaged) you need to go to the bank to fix it, but that seems like a reasonable approach.
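A sketch of what that kind of device binding could look like underneath, assuming an Ed25519 keypair enrolled on the phone and Python's cryptography package; the flow and names are illustrative, not the actual bank's protocol:

    import os
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    device_key = Ed25519PrivateKey.generate()  # created at enrollment, never leaves the phone
    bank_copy = device_key.public_key()        # registered with the bank once

    challenge = os.urandom(32)                 # bank issues a fresh challenge per login attempt
    signature = device_key.sign(challenge)     # phone signs after the user taps "Approve"

    bank_copy.verify(signature, challenge)     # raises InvalidSignature if forged
    print("login approved")

Losing the phone means losing the private key, which is why re-enrollment has to happen in person at a branch.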
The rule of thumb is that you should always avoid any services that still rely on SMS or phone numbers as an ID or 2FA. They simply don’t care about your privacy or security, even if they advertise it. A prime example is Signal.
Unfortunately, for some other services, like banks or government agencies, you don’t have any option. You can only minimize the impact by using a unique password and username and keeping them updated.
For a sophisticated user who can confidently use distinct and strong passwords for each service and protect those passwords, SMS-based 2FA offers minimal safety improvement.
For a business, they know that a significant number of their users don't do this. These users are exposed to credential stuffing attacks. SMS-based 2FA means you need to phish somebody (or otherwise obtain the code). That's an improvement for these users.
The only case where there is an active reduction in security is when SMS can be used as a single factor. This is frustratingly common in password reset flows, where it allows a SIM-swap attack to fully compromise an account.
We've seen companies do a lot of silly things with SMS. Facebook used 2FA SMS for ads [1]. Companies sometimes use your phone number from SMS 2FA as a single factor for password reset. I think this is debatable.
I would argue that a 1FA unguessable password used once is just as good. Certainly better than the case where the provider offers account resets using just SMS thus having effectively 1FA SMS.
That really depends what else the company uses your number for now that you have given it to them for 2FA. Often enough it ends up being usable as a one factor for account "recovery".
The linked article says that at the very end, in the very last sentence, just so they can evade this kind of discussion. The takeaway any regular user (and the typical too-pedantic-for-their-own-good HN commenter) will come away with is clearly "Don't use SMS 2FA", and they will therefore make the wrong decision.
Use 2FA. Use 2FA. Use 2FA. Worry about the design decisions in your spare time.
Exactly this. The concerns about SIM swapping are real but simply do not apply in 99.999999% of cases. It's an extremely targeted attack. Adoption rates of SMS are higher than other more secure methods like authenticator apps, and given the choice of no 2FA and 2FA SMS, you obviously should pick the latter and understand it isn't bulletproof. I find it difficult to come up with any argument otherwise.
I think there is this false idea that if SMS was not an option, people would gravitate to authenticators and other such solutions. I've provided technical support trying to get supposedly technical people to use these tools, and trust me, there are huge hurdles of adoption here. The amount of people that are unable to enter 6 digits into a prompt within 15 seconds is astounding.
Passwordless solutions are cool, and I have implemented them, but are extremely prone to footguns.
I think the conversion rate and support cost associated with 2FA-OTP are bad enough for SMS to still be worth it, especially as a phone number also gives you good marketing reach and a reasonably unique identifier for a user.
That is what everyone dances around in these discussions. It doesn't matter if it is a good second factor, because it is an excellent user-tracking identifier, and that is what they were really after. Twitter and Facebook both lied about only using these numbers for security and then almost immediately put them to use for advertising purposes. We only know about it because they were big enough to sue; I'm sure every crappy site that gets the number sells it. As a bonus, it also allows them to dump a lot of the infrastructure and support problems onto someone other than themselves.
The biggest problem with SMS 2FA, in my opinion, is that a lot of places are set up so it isn't even a second factor. I can often reset my password just through email, so it just seems like throwing a threadbare blanket marked "security" over the top of a user-tracking scam.
Certain financial institutions in some regions mandate telephone-network-based 2FA for their customers' accounts, and in the event of an account compromise they attempt to pin the onus of liability on the customer. Maddening that they won't give customers better options if they want to secure themselves.
It always feels useless when you get the second factor on the very device you are logging in from. I know it's not, because you still have to physically have the device, but instinctively I always think true 2FA should involve different devices.
"CCC researchers had live access to 2nd factor SMS of more than 200 affected companies - served conveniently by IdentifyMobile who logged this sensitive data online without access control."
> most of them should be able to build their own service.
Isn't the hard part the connectivity bit, i.e. negotiating with the various telcos? I once saw a telco use a third-party SMS vendor for messaging their own customers from an app, because setting it up internally was too much of a hassle.
No, the hard part is having to secure all these little random services that I've now built. Why would I not just pay for someone whose job it was to worry about this instead?
Not in the US at least for those companies, but the world is a big place and this other comment https://news.ycombinator.com/item?id=40935323 mentioned places like Gambia and Burkina Faso... It just makes sense to outsource local delivery to companies that are better connected locally.
Yes, and there are multiple levels of aggregators. For example, in a past life, I built SMS APIs and back-ends, including ones used by smaller telecoms to enable their subscribers to send/receive SMS. (We were pretty small, and only accounted for something like 0.5% of US SMS traffic.)
We connected to multiple aggregators. It's been a few years, but the big players in the US (Verizon, AT&T, Sprint, T-Mobile) were split between different aggregators. It was a similar situation in Europe.
A big part of working with a new aggregator was a full review of security and privacy, and that became even more important as we began the process of being acquired by an F100 company.
I'm still trying to figure out why messages were stored in S3 buckets to begin with. That's an architecture choice that makes little sense to me, especially since the limited size of SMS makes them pretty space efficient.
We at MakePlans were affected by this breach as we use Twilio. We are not using Twilio Verify (their 2FA API) but rather handle 2FA SMS ourselves in our app, using Twilio as one of our providers. So the CCC's description of this as only 2FA-SMS is incorrect; it was all SMS sent through this third-party gateway used by Twilio that was exposed, limited to a set of countries (France, Italy, Burkina Faso, Ivory Coast, and Gambia).
GDPR is not necessarily applicable here. An SMS gateway is most likely classified as a telecom carrier, and thus local telco laws would apply rather than GDPR. That applies only to the transfer of the SMS though; a customer GUI showing sent SMS, for example, would be outside that scope.
(And before someone tells us that SMS 2FA is insecure: I would like to point out that we use this for verification purposes in our booking system when a customer makes a booking. So it is for end-customers, not for users. It is a chosen strategy to make verification easy, as the alternatives are too complex for many consumers. All users, however, authenticate with email and password, and have the option of adding TOTP 2FA.)
I think 2FA via texts is better than no 2FA. But only if you do not make the texts world readable.
Apart from that, to me it seems justifiable to follow a risk-based approach. Booking systems up to a certain value/amount: fine. Online banking and health-related services: thank you, no.
It's not really 2FA even; more like a magic link (which is what we use for verification via email). The customer has no password and just verifies using a code sent via SMS/email.
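A minimal sketch of that style of code-based verification (in-memory store, illustrative names; a real system would persist state and rate-limit attempts):

    import secrets, time

    PENDING = {}  # booking_id -> (code, expiry); in-memory for the sketch only

    def issue_code(booking_id: str, ttl: int = 300) -> str:
        code = f"{secrets.randbelow(10**6):06d}"
        PENDING[booking_id] = (code, time.time() + ttl)
        return code  # delivered out of band, by SMS or email

    def verify_code(booking_id: str, submitted: str) -> bool:
        code, expiry = PENDING.pop(booking_id, (None, 0.0))  # single use: pop, don't get
        return (code is not None and time.time() < expiry
                and secrets.compare_digest(code, submitted))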
It's for the booking site, so most visitors come to make a booking; conversion would generally be high. We never had passwords there, so we can't compare conversion rates.
For signups to our app (to get an account with a booking site) we require a password.
How about the login service sending the code encrypted in the SMS, such that it can only be decrypted on the actual user's phone? Still vulnerable to phishing attempts, but better than relying on the deficiencies of SMS technology.
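A sketch of that idea, assuming PyNaCl and a keypair the phone enrolls at setup; all names are illustrative:

    import base64
    from nacl.public import PrivateKey, SealedBox

    phone_key = PrivateKey.generate()   # generated and kept on the device at enrollment
    server_copy = phone_key.public_key  # the only thing uploaded to the login service

    # Server side: seal the one-time code, so a logging SMS gateway sees only ciphertext.
    sms_body = base64.b64encode(SealedBox(server_copy).encrypt(b"482913")).decode()

    # Device side: only the holder of the private key can recover the code.
    assert SealedBox(phone_key).decrypt(base64.b64decode(sms_body)) == b"482913"

The catch is that decryption then has to happen in an app rather than in the plain SMS inbox, at which point you've mostly reinvented an authenticator app with extra steps.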
Modern auth, invented just to push the mobile + cloud model, is DISGUSTING. We have had smart cards for decades, for everything from payments to IDs. Why the hell not keep building readers into keyboards and laptop bodies, sell cheap desktop USB readers, and teach people to use them? Simply because with them there is no way to force mobile computing that lets some third party snoop a bit on end users' lives.
I hope that one day people will understand and IMPOSE an end to such crappy, unsafe practices.
Many people today don't even own a computer and do everything on their phones. Teaching the masses safe habits rather than convenient ones is a difficult problem, most don't care.
You can use Yubikeys, which are basically the modern and better version of "smart cards", on phones and tablets just fine. I have a Yubico Security Key on my keychain and I can use it on my iPhone with NFC or with my iPad using USB-C.
But you need to buy one, while your bank (typically) already gives you a card you could use as-is for auth with them. Your country probably has some e-documents already, so no extras are needed to authenticate to public-sector services and so on.
The point is to offer something already usable and build a habit around it. After that we might add Yubikeys for generic services like Gmail and so on.
A bank card for paying, which is an NFC-capable smart card, can be used (as is common in various EU countries) to authenticate yourself to your internet banking.
Similarly, various countries offer eIDs (Estonia, Belgium, Italy, Germany, and France, among the ones I know), which are NFC ISO 14443 A/B cards used to authenticate the citizen to various public services.
Many universities, and some high schools as well, offer an NFC badge which is a smart card and could be used to authenticate to the institution's website and so on.
All those examples have been in use for years, but for limited activities and mostly not advertised. It's just a matter of spreading them. In Italy, for instance, the national eID card (CIE) has been used for some years to access fiscal services, e.g. to submit your filled-in tax forms or pay certain taxes, while the national health service card has been used for much longer to buy tobacco from automatic vending machines (to prove you are over 18). France started the same last year with France Connect+, which, like the Italian and German systems, is part of the pan-European eIDAS framework for offering digital documents and services to all. Yet countries have invented absurd systems to AVOID using eIDAS with smart cards in most cases, even though we all have the cards. Only to push the "app" cloud + mobile model.
My Visa card definitely doesn't work for any online bank authentication in Finland. It's strictly for payments. For authentication, it's user ID + PIN with a paper two-factor, or user ID + phone authenticator. Some banks also have physical two-factor hardware.
Well, in Germany, the Netherlands, and Belgium, Visa and Mastercard work that way, so I imagine it is just a matter of choice on the bank's side. In Italy, RSA tokens (a small keychain with an LCD display) were fairly common as another option, and some banks have addressed PSD2/DSP2 article 5 with a captcha after the OTP for transactions (e.g. Unicredit); a few have chosen more complex OTPs with a camera reading a QR code, but those are simply too expensive to become widespread. In France, curiously, most banks still do not use a second factor, allowing login with just ridiculous randomly-shuffled virtual keyboards meant to defeat keylogging. I guess the world varies, but I'm also fairly sure Finland has some eIDAS eID document which can be used like bank cards.
Pretty sure that neither my Visa credit/debit card nor my passport works for any kind of digital authentication. I think you can specifically get an ID that works as a smart card, but since you need not just that specific ID card but also a reader plus faffing about, adoption is super low.
Parent's point is that the hardware is perfectly able to identify you, but we choose not to.
In 2024 having a card reader is indeed not that great, but I still have the one my bank gave me ~20 years ago, as it's a strong factor which I can use to set up weaker second factors (typically push notifications to the mobile app, nowadays).
We could imagine several ways people link their real, physical government ID to a trusted device. Every smart phone has had a built-in security key for the past 5 years or so. Banks have to check your ID at some point due to KYC. We could kill multiple birds with one stone.
How many have a smartphone with a cover able to hold cards? How many have a wallet in their pockets? Where is the trade-off in usability? Having a single PIN and a card to access various services, instead of passwords and copy-pasting OTPs, or something similar via crappy, dysfunctional apps.
> How many have a smartphone with a cover able to hold cards?
I use a wallet that holds cards, but that's not common or popular. And are you seriously suggesting that we insert this thing into our phones? That would probably mean dislodging the phone from its case, wallet or not, and aligning the card into the slot, not to mention how much space a slot would consume in a smartphone. You and maybe a very tiny cohort want this; the general public doesn't, especially for the marginal security benefit. Anyway, as others say, the modern equivalent is NFC, but again, getting everyone to buy and carry an accessory is asking too much. Modern smartphones already have modern security, and in recent years they have been exposing their security coprocessor chip to the OS.
No need to "insert": most smart cards nowadays are NFC, and most smartphones have a reader built in near the battery, so all you need is to flip open the "book cover" to allow reading, even without extracting the card. On a desktop, a small flat USB reader, or one built into the keyboard (common two decades ago in various setups, for contact-based smart cards back then), or one beside the touchpad area on a laptop, could cover the desktop side.
I use it routinely to declare my taxes, for instance, with a small desktop card reader (ReinerSCT CyberJack) configured as a "security device" in Firefox: just put the card on the reader, open Firefox, go to the relevant website, click the eIDAS login, enter the national ID card PIN, and you're in. One PIN for all public-sector services, no apps needed, no regular password changes, and so on.
We have that with FIDO2, unfortunately there is too much $$$ to be made perpetuating the problem, propping up adjacent ecosystems like cloud and leaky auth apps.
That's because SMS verification isn't 2FA. It's faux 2FA. You don't possess your phone app or your phone number; they can be cloned and intercepted. A key you hold on your person is 2FA.
Any push-based service would be vulnerable to this, wouldn't it? The medium doesn't matter if somewhere in the chain someone stores the message (in public).
Twilio said the data was accessible between May 10 and May 15, 2024[0].
I mean, even if we disregard the auth codes thing, which according to CCC were being generated on a static timer, if someone did get access to this bucket - they would have gotten away with a juicy list of phone numbers and names from some of the top companies, at the very least.
I'm not sure how hard it would be for an S3 scanner to guess "idmdatastore", so it is difficult to say if anyone else got in. Even if not, a live database storing live data without encryption or anything is crazy. I feel like IdentifyMobile will feel the wrath of this no matter what.
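Guessing is cheap to automate: S3 tells you whether a bucket exists even when it's private (404 for no such bucket, 403 for private, 200 for listable), so a scanner needs one HEAD request per candidate name. A sketch, assuming the requests package; the extra candidate names are made up:

    import requests

    def bucket_status(name: str) -> str:
        # 404 = no such bucket; 403 = exists but private; 200 = publicly listable.
        r = requests.head(f"https://{name}.s3.amazonaws.com", timeout=5)
        return {404: "absent", 403: "exists (private)", 200: "exists (public!)"}.get(
            r.status_code, f"HTTP {r.status_code}")

    for candidate in ["idmdatastore", "idm-datastore", "identify-mobile-data"]:
        print(candidate, "->", bucket_status(candidate))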
Wow, the forced SMS 2FA bullshit that suddenly got astroturfed right on the day of the Snowden revelations is indeed actually bullshit. When will they offer an opt-out of this, or is this just the end of the web? 20 years ago I did not need or want anything more than a password (obviously cryptographic key auth would be better, but not if it's brought to you by X.509). And of course all the HNers who eat this shit up and defend it like little dogs are suddenly on the other side. Email verification is fucking dumb too, and of course now every email provider forces phone SMS shit.
Can someone explain to me how SIM swapping actually works?
All the articles and videos I found are like:
1. Attacker calls phone companies support hotline or alternatively his confidante there
2. ** MAGIC **
3. Attacker has access to SMS messages sent to the victim's number
I understand that some might be deliberately vague but I don't want a step by step instructions, just a high level technical overview.
And to give another hint why this is so hard for me to understand: To the best of my knowledge, if I call my phone company with whatever scenario that I can imagine that involves my SIM, all they will do is send me a new SIM to my physical address.
If you have a never-registered, not-expired SIM for a carrier, the carrier can register it to an account given the IMSI. You can also do this with an eSIM, without needing a physical SIM.
So, step 1, convince the carrier representative. Step 2, give them the IMSI. Step 3, put the SIM in your phone and receive SMS.
If you do step 1 in a physical store, the representative will probably even give you a new SIM from their stock.
> And to give another hint why this is so hard for me to understand: To the best of my knowledge, if I call my phone company with whatever scenario that I can imagine that involves my SIM, all they will do is send me a new SIM to my physical address.
That's basically SIM-swapping. The only step you haven't described is getting the new SIM sent somewhere else, which probably isn't too hard a thing to achieve given sufficient corruption.
Ultimately, the phone company uses its information to work out where to send an SMS, and that information is an entry in a database - SMS to number X is routed to SIM card ID Y. If an inside job can change that database entry for a while, that's enough to attack SMS-2FA.
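In other words, the whole attack reduces to one row update in the carrier's number-to-SIM mapping. A toy model (all identifiers made up):

    # Toy model of carrier routing state: phone number (MSISDN) -> SIM identity (IMSI).
    routing = {"+15551234567": "310150123456789"}  # the victim's legitimate SIM

    # A SIM swap is just this one write, by a tricked or corrupt insider:
    routing["+15551234567"] = "310150999999999"    # the attacker's SIM

    # From this moment, every SMS "to the number" lands on the attacker's handset.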
Except for state-level attacks (in which case you're screwed anyway), in some countries the process tends to be lax (on-the-spot issuance of a replacement SIM without robust identity verification, or allowing SIM replacement to any arbitrary address without verification). This also does not consider insider attacks, where people in the company... can just re-issue any SIM for any number they please (and therefore there are people willing to issue illicit SIMs in exchange for money).
Google lets you choose which authenticators to use (SMS, push to mobile, TOTP, etc). It sounds like you should disable push to mobile for your accounts.
I can't think of any reason why we should not make password managers mandatory for all web authentication today, with the password manager being the 2nd factor.
Your desktop, laptop, tablet, and phone can all share a password manager. They work offline and online. Passwords generated are unique, breaking password reuse attacks. Password managers support auto-filled TOTP codes per-login. They support passkeys. There's password managers built into browsers in addition to the 3rd party ones. There are personal, family, and enterprise options. They could be installed as a system service to isolate them from userland attacks. They support advanced functionality like SSH keys, git signing and biometrics.
If you're a stickler about having a completely independent factor from your desktop/phone/etc, password managers could be used with different profiles on different devices, and allow several easy ways to pass an auth token between devices (via sound, picture, bluetooth, network, etc), ensuring an independent device authenticates the login to avoid malware attacking the password manager.
We already have the tools to do something way more secure than SMS, and it's already on most of our devices/browsers. We just have to make it the preferred factor.
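For the unique-passwords piece specifically, the generation step is trivial; a stdlib sketch of what a manager does per site (names illustrative):

    import secrets, string

    def new_site_password(length: int = 24) -> str:
        # A fresh, independent secret per site, so reuse attacks can't cross accounts.
        alphabet = string.ascii_letters + string.digits + string.punctuation
        return "".join(secrets.choice(alphabet) for _ in range(length))

    vault = {site: new_site_password() for site in ["bank.example", "mail.example"]}
    print(vault)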
The tools aren't the hard part. The hard parts are adoption and recovery.
SMS has an extraordinary advantage in that the vast majority of people transparently have access to it. No need to download another app. No need to install anything. No need to buy a special usb device. It also has a recovery mechanism built in, as the carriers will all let you move your phone number to a new device. This, of course, comes with the high cost of sim-swapping attacks. But few companies will be happy with "customers just lose their accounts when they drop their phones in the toilet."
We'll see if the google/apple security key system takes off. That's probably the best bet we've got given the ubiquity of these ecosystems.
How I would loathe to rely on Google or Apple to be able to make payments or confirm other actions. Sure as hell they would phone home about what actions I am performing, and associate that data with some Google account or Apple ID they will force me to have.
That's fine. I don't think any individual is foolish for preferring to keep these companies out of the process.
But it is just undeniable at this point that any authentication system other than raw passwords must come from an already ubiquitous ecosystem that doesn't require people to download, install, or buy anything new. Hoping that Yubikeys take off is fantasy.
> I can't think of any reason why we should not make password managers mandatory for all web authentication today, with the password manager being the 2nd factor.
A password manager is, in essentially every respect except interoperability, inferior to WebAuthn. Let’s not make an inferior solution mandatory when we already have a superior solution.
> I can't think of any reason why we should not make password managers mandatory for all web authentication today, with the password manager being the 2nd factor.
Basic usability? The security theatre is making computing jankier every year, with questionable benefits and no regard for the drop in efficiency.
For most accounts I don't care much if they are compromised. And they have never been compromised, even with a lot of "worst practices".
Would you agree also that MFA should be mandated for everybody's doors? Or to my bike?
> Would you agree also that MFA should be mandated for everybody's doors? Or to my bike?
Attacks in the digital world are simply more scalable than in real world. I can try to log into 1000 Gmail accounts in seconds, but it'll take me hours to try to open 1000 doors.