There's an important omission in the article, and the top comments here don't mention it either: accidentally tapping "Allow" does not let the attacker change the password from their web browser. When you tap Allow on your device, you are shown a 6-digit PIN on your device, and you can use it to change your password on your device. The final part of the attack is that the attacker calls you from a spoofed Apple phone number and asks you to read out the 6-digit PIN to them. Only if you choose to give the 6-digit PIN to the attacker over an incoming phone call can they use it in their browser to reset your password.
It's surprising that Krebs chose to omit this little detail in the security blog and instead seemed to confirm that someone could completely give away access to their account while sleeping.
He describes this in the very first paragraph of the article:
>Assuming the user manages not to fat-finger the wrong button on the umpteenth password reset request, the scammers will then call the victim while spoofing Apple support in the caller ID, saying the user’s account is under attack and that Apple support needs to “verify” a one-time code.
That seems to be an entirely different point. Krebs suggests repeatedly that all you need to do to get hacked is click "Allow" in the push notification. This is demonstrably false.
"Assuming the user manages not to fat-finger the wrong button" means "assuming the user clicks Don't Allow". They call on the phone to try and convince the user to say Allow next time.
Of course that's kinda BS too, because the only time "Allow" gives you a six-digit code is if you successfully authenticate your Apple ID on a new device. If you get the password reset dialog, the result of Allow is not a six-digit code; it just allows you to reset the password. Yourself. On your device.
Are you reading the second half of the sentence I posted? Sorry, but I'm not understanding where you are coming from - Krebs lays out clearly in the first paragraph how the attack works, and you seem to be deliberately ignoring that.
> Ken didn’t know it when all this was happening (and it’s not at all obvious from the Apple prompts), but clicking “Allow” would not have allowed the attackers to change Ken’s password. Rather, clicking “Allow” displays a six digit PIN that must be entered on Ken’s device — allowing Ken to change his password. It appears that these rapid password reset prompts are being used to make a subsequent inbound phone call spoofing Apple more believable.
Anyone who edits news articles, blog posts or such without clearly disclosing the edit immediately loses my trust. It's a huge problem these days where everything is online instead of in print, but most people do not want to take responsibility for sloppy research or misleading reporting. And that's part of the reason why there is so much misinformation, it sometimes comes from trusted sources too, not just anonymous social media users.
However, in this case, the edit is disclosed at the bottom of the article. Do you think this isn't sufficient? Does the edit disclosure need to contain a link to a diff of the changes or does it need to be at the top?
If you look at the edits in the Wayback Machine, you can see that the passage about Ken's experience previously read:
"Unnerved by the idea that he could have rolled over on his watch while sleeping and allowed criminals to take over his Apple account, Ken said ..."
Once the article was updated, the original sentence implying that criminals could take over your account while you are sleeping was completely rewritten to say the exact opposite, reversing what the initial sensational version claimed. In reality it is not possible to hand over your account to attackers by accidentally tapping Allow on your watch in your sleep.
The update disclosure only says: "Added perspective on Ken’s experience."
Fair, and good to know, but I could still easily see reasonable people (not just 80 yr olds with their Obamaphone) falling for this.
And even if not, there's a severe annoyance factor here that could be simply removed by Apple rate limiting these requests. Why can they send you hundreds of these in a short time?
This happened to me and my wife (each starting a few days apart) in 2021, or maybe 2022 but no later. It started with a couple requests a day, then ramped up to every hour or something. IIRC we also both got a couple SMS claiming to be from Apple.
As soon as it ramped up I set up both accounts to use recovery keys, which is a move I had planned anyway on the grounds that it should not be in Apple's power (or that of someone coercing or subverting Apple, be it law enforcement or a hacker) to get access to our accounts. This obviously stopped the attackers dead in their tracks.
For similar reasons I set up advanced data protection as soon as it was available and disabled web access. Only trusted devices get to see our data, and only trusted devices get to enroll a new device.
If this is the threat vector you’re worried about, you shouldn’t have had anything in iCloud (or any cloud for that matter) to begin with, rendering this debate completely moot.
I want to be upset that you've made a comment so obvious, yet sadly, there will be people in the wild who don't understand the silos platforms build. I doubt any of them are here reading this, but you never know.
As for the "lost recovery key" situation, it's no different from hardware-token 2FA + recovery codes. Print multiple copies and spread them to trusted third parties.
> A recovery key is a randomly generated 28-character code
That's easy to backup. You can even print it and bury it in a sealed box in the garden or put it in a book or whatever. It depends who you are protecting against.
As much as it can "weaken" security, an electronic backup is still recommended for most people.
Maybe I'm being dense (probably), but where would you save it?
iCloud? No, that doesn't work - you need the key to access iCloud.
Some other cloud storage service? No, that doesn't work - you need your phone to generate a token for access and your phone was destroyed in the same fire as the paper backup.
Seems like the safe choice is a lock box at a bank or similar. Or a fireproof safe at home.
Personally, I encrypt my backup/recovery/setup keys in a CSV file using a password that I have memorized, and send them to family members to store in their accounts/cloud storage.
But safety deposit boxes are a good choice too; just be careful to balance your own convenience. If you can't easily update your backups, you're really unlikely to include new accounts in them.
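For anyone curious, a minimal sketch of that scheme in Python with the `cryptography` package (the KDF choice, iteration count, and file contents here are my own assumptions, not necessarily what I described above):

    import base64, hashlib, os
    from cryptography.fernet import Fernet

    def key_from_password(password, salt):
        # Derive a 32-byte Fernet key from the memorized password.
        # The salt is stored next to the ciphertext and need not be secret.
        raw = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return base64.urlsafe_b64encode(raw)

    salt = os.urandom(16)
    f = Fernet(key_from_password("a passphrase I have memorized", salt))
    token = f.encrypt(b"service,account,recovery_key\napple,me@example.com,...")
    # Hand salt + token to family members to stash in their cloud storage;
    # only the memorized passphrase gets the CSV back.
    assert f.decrypt(token).startswith(b"service")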
> Some other cloud storage service? No, that doesn't work - you need your phone to generate a token for access
You definitely don't need your phone for access. I use Yubico security keys for everything like this. I have several of them that are on all my accounts and I don't keep them in the same place.
Engraved onto something like titanium would be better than a fireproof safe - they're only safe for X amount of time (I want to take a stab in the dark and say about 90 minutes?). This is how I have backed up some (since retired) crypto seed phrases in the past.
Where do you keep the titanium plate? I'd be more worried about losing it due to a natural disaster than merely having it destroyed beyond readability in a natural disaster.
What happens if there's a typo in the engraving? Who's doing the engraving? How much do you trust the people you are providing the key to do it? When does the paranoia kick in vs being diligent?
This was at least an innovation in the bitcoin community. There are several assemble-at-home systems where you can build a physical manifestation of a secret: metal cards you punch with a hammer and nail, or a tube where you string along metal letters of the password.
Sure, sounds perfect. Let me send some crypto person who has invested in a home stamping kit the secret to my crypto wallet. At least they won't know what it's for, so they couldn't possibly hijack my wallet. Phew; had me nervous that committing the cardinal sin of sharing my secret with someone I don't know was going to come back to haunt me.
Keep one copy in your fire-resistant safe at home. Then encrypt a copy, give the encrypted copy to your best friend and the decryption key to a family member, or keep one of these things in your desk at work. Neither of them have access unless they both figure out what it is and collude with each other, but you have a recovery system in case you lose your own copy.
One possibility is to encrypt a copy with a key that you are pretty sure you can remember, and store that encrypted copy someplace public on the web. Periodically check that you do still remember the key.
The conventional way to do this would be encrypt it with a symmetric cipher keyed from a password or passphrase. I've been using an unconventional approach where the secret you have to memorize is an algorithm rather than a password/phrase. Programmers might find an algorithm easier to memorize than a passphrase.
Here's an example of this general idea. The algorithm is going to be a hash. This one will take a count and a string, and output a hex string. In English the algorithm is:
hash the input string using sha512 giving a hex string
while count > 0
  prepend the count and a "." to the current hash and apply sha512
  decrement the count
The recovery code I want to backup is 3FAEAB4D-BA00-4735-8010-ADF45B33B736.
I'd pick a count (say 1969) and a string (say "one giant leap for mankind"), actually implement that algorithm, and run it on that count and string. That would give me a 512-bit number. I'd take "3FAEAB4D-BA00-4735-8010-ADF45B33B736" and turn it into a number too (by treating it as 36 base-256 digits). I'd xor those two numbers, print the result in hex, and split it into 2 smaller strings so it wouldn't be annoyingly wide.
Then I'd save the input count, input string, and the output:
1969 one giant leap for mankind
ed428dffa23f4f14ae2a7b7e842019fc11b5726d726b96c11ec266758be67cb0
f2a78a320a85df809afe83c6c7840e2d175cceadb455260735405cd047459cc9
I'd then delete my code.
I could then do a variety of things with the "1969 one giant leap for mankind" and the two hex strings. Put them in my HN description. Include them in a Reddit comment. Put them on Pastebin. Take a screenshot of them and put it on Imgur.
To recover the code from one of those backups, the procedure is to implement the algorithm from above, run it with the count and string from the backup to get the 512 bit hash, take the 512 bits of hex from the backup, xor them, and then treat the bytes of the result as ASCII.
Then delete the implementation of the algorithm. With this approach the algorithm is the secret, so should never exist outside your head except when you are actually making or restoring from backup.
When picking the algorithm, take into account the circumstances you might be in when you need to use it for recovery. Since you'd probably only need this if something so bad happened that you lost most of your devices and things like your fireproof safe, you might want to pick an algorithm that does not require a fancy computer setup or software that wouldn't be in a basic operating system installation.
The algorithm from this example just needs a basic Unix-like system that you have shell access to:
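Something like this Python sketch (python3 being present on most basic Unix-like installs; this is my own reading of the pseudocode above, assuming the count is decremented each round, so it may not reproduce the exact hex strings without tweaking):

    import hashlib

    def chain_hash(count, s):
        # sha512 the string, then repeatedly prepend "count." and rehash,
        # decrementing the count down to zero.
        h = hashlib.sha512(s.encode()).hexdigest()
        while count > 0:
            h = hashlib.sha512(("%d.%s" % (count, h)).encode()).hexdigest()
            count -= 1
        return int(h, 16)

    def make_backup(count, s, secret):
        # Treat the secret's ASCII bytes as one big base-256 number,
        # xor with the 512-bit hash, and emit 128 hex chars in two halves.
        masked = chain_hash(count, s) ^ int.from_bytes(secret.encode(), "big")
        hexstr = "%0128x" % masked
        return hexstr[:64], hexstr[64:]

    def recover(count, s, line1, line2):
        # Xor the hash back out and read the low bytes as ASCII.
        n = chain_hash(count, s) ^ int(line1 + line2, 16)
        return n.to_bytes(64, "big").lstrip(b"\x00").decode("ascii")

    # make_backup(1969, "one giant leap for mankind",
    #             "3FAEAB4D-BA00-4735-8010-ADF45B33B736")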
Okay, and when your friend moves, and you buried it years ago, so they forgot to dig it up what with everything else going on in their life at moving time?
Never underestimate the security and safety of a hidden piece of paper! If it's good enough for wills for the last 500 years, it's good enough for a password.
I keep one-time keys between the pages of some books on my shelf, and a copy in a safe deposit box. I suppose if I were publicly known to have tons of money in "crypto" or were a target of a nation-state, this wouldn't be safe enough. But I think it's OK for my gmail and OneDrive, etc.
You can set up a recovery contact in case you do lose the key. I just set that up with my partner, and the chance of losing the key and both of us losing all of our Apple devices is, I think, fairly slim.
I also stuck that key in 1Password (sure it's less safe, but if my 1Password was breached I have far bigger problems than this key being retrieved).
Then keep a hard copy in a safe. I've been contemplating sending my parents (who live several states away) a safe with the keys on a sheet of paper without context, where only I have the combination. But not sure yet.
Hard copy? Etch the string into a hard surface. My favorite is a rock in my garden. The characters face the ground to shield them from erosion. The visible surface of the rocks (all of them) is painted white for aesthetics.
It survives a fire or earthquake. No tornadoes or tsunamis here. Nobody has stolen any such rocks from here.
>Then keep a hard copy in a safe. I've been contemplating sending my parents (who live several states away) a safe with the keys on a sheet of paper without context, where only I have the combination. But not sure yet.
A friend of mine who was (maybe is? he knows I'm not a fan so we don't talk about it much) big into crypto stores his secrets in similar safes with trusted friends and family around the country. I think it's a good idea for things like this tbh.
I think it is a good idea in theory also; there's just that voice that says "well, now that key is out of my possession," and it scares me a bit.
I think I might need to look up whether there is a known pattern to these keys, such that it could easily be figured out what one is even if it is just on a sheet with no context. Particularly 1Password, which I think uses a pattern, if I remember correctly.
Or, just apply some simple, easy-to-remember permutation to the key that no one would be likely to guess - e.g. rot13 the key, add 1 to every character, move the first 14 characters of the key to the end, etc.
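A quick sketch of those in Python (the key below is made up):

    import codecs

    key = "XK42-Q7PM-9DLC-3VRT-81FZ-NW6J"  # made-up example key

    rot13 = codecs.encode(key, "rot13")            # letters rotate; digits/dashes unchanged
    plus1 = "".join(chr(ord(c) + 1) for c in key)  # add 1 to every character
    moved = key[14:] + key[:14]                    # first 14 characters moved to the end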
Probably that the key has features that allow 1Password (and potentially anyone) to recognize that it's a 1Password key. E.g. fixed size, patterns of spaces or dashes, specific digits, embedded error correction, etc.
Similar to how a lot of package companies have a certain pattern, length, whatever for their tracking numbers. If there is a somewhat reliable way to say "this is a 1Password key" or "this is an iCloud key," it could be an issue even without context.
You personally don’t know anyone who obviously discloses that they have a safe. If you have a safe you are keeping something valuable secure. The fewer people know that you have something valuable that needs to be secured the better. If people don’t even know your safe exists then that reduces the chances of it being compromised.
> When you set up a recovery key, you turn off Apple's standard account recovery process.
> However, if you lose your recovery key and can’t access one of your trusted devices, you'll be locked out of your account permanently.
I considered it before but I think it's just too much risk as I rely heavily on iCloud. On the other hand, I don't see the risk with the current method if you're smart enough not to fall for things like MFA bombing tactics.
The prompt UX should step into a special "bombed" mode when a frequency threshold is crossed, at which point accepting a prompt has fat-finger protection such as double confirmation steps, and declining all (or perhaps all that share a commonality, like same initiating IP address) becomes possible.
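Roughly (the window and threshold below are arbitrary numbers for illustration):

    import time
    from collections import deque

    WINDOW = 3600     # seconds
    THRESHOLD = 5     # prompts per window before "bombed" mode

    recent = deque()  # timestamps of recent reset prompts

    def register_prompt(now=None):
        # Record a prompt; return True when the UI should switch to
        # "bombed" mode (double-confirm on Allow, offer "decline all").
        now = time.time() if now is None else now
        recent.append(now)
        while recent and recent[0] < now - WINDOW:
            recent.popleft()
        return len(recent) >= THRESHOLD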
> If you use Advanced Data Protection and set up both a recovery key and a recovery contact, you can use either your recovery key or recovery contact to regain access to your account.
Such a high risk of being locked out permanently is more than most people can stomach. Why can't they offer a last-resort option like showing up in person at an Apple Store with government-issued photo ID?
Because they aren’t required to by law. I have filed comments with the FTC that this recovery path should be legally mandated for digital accounts, I encourage others to do the same. It doesn’t have to be an Apple Store (insider risk, see SIM swapping analogy); could be USPS or another government identity proofer they partner with. Login.gov uses USPS for in person identity proofing, for example.
Your data and account ownership interest doesn’t disappear because of failure to possess the right sequence of bytes or a string. Can you imagine if your real estate or securities ownership evaporated because you didn't have the right password? Silliness.
This should not be required by law because many people specifically don't want it. I'm content to keep my own redundant copies of a recovery key and suffer the consequences of my own actions, rather than allowing someone to steal my account just because they made a convincing fake ID or hacked some government system. In general centralized identity systems are a single point of failure and hooking more things into them is a bad thing.
> Your data and account ownership interest doesn’t disappear because of failure to possess the right sequence of bytes or a string.
Somehow you have to establish that you are the owner of the account, in a way that nobody else can do it. This is very much not a trivial problem, and government IDs don't provide any kind of solution to it.
If you need a driver's license, how do you get a driver's license? With a birth certificate? Okay, how do you get a copy of your birth certificate when you don't have a driver's license?
If there is a path to go from your house burning down and you having zero documents to you having a valid ID again without proving you've memorized or otherwise backed up any kind of secrets, an attacker can do the same thing and get an ID in your name. This is why identity theft is a thing in every system that relies on government ID. Requiring all systems to accept government ID is requiring all systems to be subject to identity theft.
I argue for and advocate that this capability should exist, but not be mandatory. If you do not want to tie your personal identity to your digital identity, certainly, you should be able to not do so and rely solely on a cryptographic primitive, recovery key, or other digital mechanism to govern access of last resort. If your account access is lost forever, it's on you and that was a choice that was made.
> Somehow you have to establish that you are the owner of the account, in a way that nobody else can do it. This is very much not a trivial problem, and government IDs don't provide any kind of solution to it.
This is actually very easy. You can identity-proof someone through Stripe Identity [1] for ~$2/transaction. There are of course other private companies who will do this. You bind this identity to the digital identity once, when you have a high identity assurance level (IAL). Account recovery is then trivial.
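For illustration, with Stripe's Python library it's roughly this (from memory, so treat the option names as assumptions and check their docs):

    import stripe

    stripe.api_key = "sk_test_..."  # placeholder

    # Create a hosted identity-verification session and send the
    # user to session.url to photograph their ID (and a selfie).
    session = stripe.identity.VerificationSession.create(
        type="document",
        options={"document": {"require_matching_selfie": True}},
    )
    print(session.url)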
> If you need a driver's license, how do you get a driver's license? With a birth certificate? Okay, how do you get a copy of your birth certificate when you don't have a driver's license?
This is the government's problem, luckily, not that of private companies who would need to offer account identity bootstrapping. Does the liquor store or bar care where you got your government ID? The notary? The bank? They do not, because they trust the government to issue these credentials. They simply require the state or federal government credential. Based on the amount of crypto fraud that has occurred (~$72B and counting [2]), the government identity web of trust is much more robust than "not your keys, not your crypto" and similar digital-only primitives.
NIST 800-63 should answer any questions you might have I have not already answered: https://pages.nist.gov/800-63-3/ (NIST Digital Identity Guidelines)
> This is actually very easy. You can identity proof someone through Stripe Identity [1] for ~$2/transaction.
"Pay someone else to do it" is easy in the sense that doing the hard thing is now somebody else's problem, not in the sense that doing it is not hard. That also seems like a compliance service -- you are required to KYC, service provides box-checking for the regulatory requirement -- not something that can actually determine if someone is using a fraudulent ID, e.g. because they breached some DMV or some other company's servers and now have access to their customers' IDs.
> This is government's problem luckily, not that of private companies who would need to offer account identity bootstrapping.
But it's actually the user's problem if it means the government's system has poor security and allows someone else to gain access to their account.
> Based on the amount of crypto fraud that has occurred (~$72B and counting [2]), government identity web of trust is much more robust than "not your keys, not your crypto" and similar digital only primitives.
The vast majority of these are from custodial services, i.e. the things that don't keep the important keys in the hands of the users. Notably this number (which is global) is less than the losses from identity theft in the US alone.
The general problem also stems from "crypto transactions are irreversible" rather than "crypto transactions are secured by secrets". Systems with irreversible transactions are suitable for storing and transferring moderate amounts of value, as for example the amount of ordinary cash a person might keep in their wallet. People storing a hundred million dollars in a crypto wallet and not physically securing the keys like they're a hundred million dollars in gold bars are the fools from the saying about fools and their money.
> If you need a driver's license, how do you get a driver's license? With a birth certificate? Okay, how do you get a copy of your birth certificate when you don't have a driver's license?
Using vitalchek, you can order a BC with a notarized document, using two people who have valid IDs to vouch for your identity. I've done it for multiple clients.
The person who needs the BC also has to see the notary. But, for the most part, yes, it's that easy to obtain a BC using vitalchek.
Note: The notary will record the ID #s and other info of the two ID holders. So if something goes wrong, the two ID holders will be on the hook as well.
Once the notarized document is submitted to vitalchek, they'll process the request.
Of course, one would still have to know a few details from the BC (parents, location, etc) to get vitalchek to submit the request to the county/city registrar.
> Note: The notary will record the ID #s and other info of the two ID holders. So if something goes wrong, the two ID holders will be on the hook as well.
Though of course this is a method of fraudulently obtaining an official ID, so you do need to be concerned that the people engaged in that sort of enterprise might already have a couple of them.
> Of course, one would still have to know a few details from the BC (parents, location, etc) to get vitalchek to submit the request to the county/city registrar.
Which is the sort of thing that gets collected in big databases which then get breached and published on the internet.
Well previously when stock trades involved exchanging physical certificates, I could imagine that ownership could evaporate if you lost that piece of paper. Or just think about cash: you do lose that ownership when you lose that magical piece of paper. It's a simpler world when what you have physically determines what you own.
People want a just world (imho, n=1, based on all available evidence, etc), recourse, and protections, not a simple world. Interestingly, cash will likely be the last to go in the near future from a “possession of value” as the world goes cashless (although whether this is "good" or "bad" can be argued in another thread).
There's a wide set of possible approaches between "let any employee validate any ID" and "never let someone into an account that they have lost the credential to."
E.g. you could make it costly to attempt, require a notarized proof of identity -and- showing up at the Apple store, and enforce a n-day waiting period. A different employee does the unlock (from a customer service queue) than accepts the paperwork.
We don't lock people out of financial accounts forever when they forget a credential. It could definitely be solved for other types of accounts.
Have you seen how easy it is to get a fake government ID? It's damn near a rite of passage for teenagers so they can buy alcohol. $20-$50 if you know the right person or know your way around the dark web.
I’m not sure you want that to be the absolute best digital security you can get.
Yes it is vulnerable to an attacker who is willing to present himself in person with a fake ID to target a specific account. However it's not scalable or remotely exploitable.
Since it requires a human looking at an ID and then pressing a button, the system triggered by the button press is likely quite exploitable no? Or even worse, scanning and storing an ID, which allows spoofing if those get compromised.
Recovery key isn’t susceptible to that - and isn’t susceptible to fake-id-spotting-ability or bribeability of staff either.
Interesting that using the recovery key stopped the issue for you, but does not seem to do its job now.
From the article: "Ken said he enabled a recovery key for his account as instructed, but that it hasn’t stopped the unbidden system alerts from appearing on all of his devices every few days.
KrebsOnSecurity tested Ken’s experience, and can confirm that enabling a recovery key does nothing to stop a password reset prompt from being sent to associated Apple devices."
A password reset prompt is sent to the devices, but unfortunately the article leaves out that the prompt only enables you to reset the password on the device that receives the prompt. So it is not a security issue, just an annoyance.
It's not a recent approach, but this is a recent campaign using it against many people. Someone likely got a list of hacked passwords from some recent dump and is going through the apple accounts from it.
I ventured as much. Given the number of messages and the personal details gathered, I also guess attacker tools have been significantly improved or streamlined.
But is it the case that the Yubikey is essentially treated the same as a trusted device? What if I want to untrust my devices and only trust Yubikeys (without removing the devices from my iCloud account)?
Yes but my understanding is that you can remove the Yubikey without possessing it, just with a “trusted device”. I want to mark all of my devices untrusted (wrt icloud account changes) and rely only on Yubikeys
Wow! You'd think they'd rate limit these! Once you've done it twice, go to once every 15 minutes, then an hour, then 4 hours, then a day, etc. Like bad logins.
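i.e. something like:

    # Escalating lockout between reset prompts, like bad-login backoff.
    BACKOFF = [0, 0, 15 * 60, 60 * 60, 4 * 60 * 60, 24 * 60 * 60]  # seconds

    def reset_allowed(attempts_so_far, seconds_since_last):
        # After two free attempts, each further prompt must wait longer.
        i = min(attempts_so_far, len(BACKOFF) - 1)
        return seconds_since_last >= BACKOFF[i]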
Krebs notes that the recovery form does have some form of CAPTCHA on it, which mostly just goes to show that CAPTCHA systems are a poor and increasingly deficient rate limiter.
ETA: Also, from a user-experience standpoint, even once a week between attempts is still enough to deeply annoy a user getting popups on their devices. This is one of those cases where rate limits probably still can't solve the user irritation.
That message is horribly designed if it allows a password reset to happen on any other device after you click allow. It specifically says "Use this iPhone to reset". I'd have assumed it asks the person who clicked allow to set a new password, on the same device they clicked allow.
Then again if it shows on the watch too (and isn't just mirroring a phone notification, since it ignores quiet mode), I can't imagine the idea is you click allow on your watch and then type a password on its keyboard?
I don't think there's any danger in clicking "allow." There's still a 2FA step after that, and then you have to choose a new password. All of the danger comes from the phone call, where they presumably try to wheedle the 2FA code from you.
> That message is horribly designed if it allows a password reset to happen on any other device after you click allow
This was a lifesaver when my 90-year-old mother forgot her iMac password (and I forgot that I had created a second admin account on her machine). After getting locked out of the iMac, we were able to reset it because we could get into her iPad (which she forgot the PIN to, but fortunately we found it written down).
At some point the ability to trigger these prompts (or ones like them, like the Bluetooth-based setup new device prompts that were in the news last year) on Apple devices is itself the problem right?
Obviously it must be possible to reset one's password, but from the article it's apparently possible to make 30 requests to reset someone's password in a short amount of time.
What possible non-malicious reason could there be for that to happen?
None; it's just that they haven't bothered adding a check for them. This isn't necessarily an indictment of them. It makes sense in hindsight, but between sprints, OKRs/KPIs, and promotion packets, it's easy to let non-sexy functionality like this slip through the cracks.
It's distressing and sad that we've come to expect so little from the trillion-dollar market cap companies to which we are beholden to participate in modernity.
It's not as alarming if we just reframe it. Apple's software is written by developers, like many HN readers, and they follow similar internal processes. There is nothing inherent about having a large market cap that makes everyone involved superhuman. Some issues always slip through the cracks.
I'm surprised to see this comment on HN where many readers see how the sausage is made. There's no secret sauce, no matter how far up in FAANG/MANGA you get.
There is already a variant where they try to get someone to say "yes" and then use a recording of it as "proof" that you agreed to some contract.
I don't answer unknown callers with "hello" or any words at all. I simply say "mmmhhmm" or make a dumb sound; if it's automated, that will trigger the automatic message. Someone asked why, and I said voice cloning software; they said "wtf, you have nothing to steal." It just feels risky, idk why.
You're probably not going to get a voice clone from someone saying "hello?" 100 times. However, you don't really need to "MFA bomb" people to clone their voice; just call them with a plausible-sounding reason that will cause them to engage in an extended conversation (e.g. "hey, this is your uber/doordash driver/doctor/school/daycare").
Another reason not to use phone calls (or phone numbers) to verify users, even with so-called 'voice identification' or 'voice ID', which can easily be broken with advanced voice cloning.
Recently I was baffled by how far we've come with this. It doesn't work perfectly, but it could be enough to fool someone. Just one pip install and a short voice sample away: https://github.com/coqui-ai/TTS
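Along the lines of the following (going from memory of the project README, so treat the model name and arguments as assumptions and check the repo):

    # pip install TTS
    from TTS.api import TTS

    tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")
    tts.tts_to_file(
        text="Hi, it's me. Can you read me that six-digit code?",
        speaker_wav="short_voice_sample.wav",  # a few seconds of the target's voice
        language="en",
        file_path="cloned.wav",
    )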
I am confused. What does happen after clicking allow? Does Apple just provide a password reset form to the person on the iForgot website or does it show up only on the device?
> he received a call on his iPhone that said it was from Apple Support (the number displayed was 1-800-275-2273, Apple’s real customer support line)
This happened to me exactly once, and it was two days after I ordered a new MacBook from the online Apple Store. Since I was expecting a shipment, I almost picked it up. But instead I called Apple Support myself, and asked if they had called me, and they said they had not.
It was in December 2020, so maybe? I can't remember. I had just incorporated an Inc. (and received seed funds, but those weren't publicly available or announced), so maybe some of that info was a trigger. But it was surprisingly well-timed, for sure. It was within three days of placing the order.
I suppose it could have also been as simple as "it's Christmas shopping time." I remember what was most surprising was seeing the caller ID, which I think was actually "Apple Inc," and which was saved as a contact in my phone.
The problem with adding rate limits, at least a global per user rate limit, is that you then create a new denial of service issue, preventing people from being able to recover their account.
If you’re getting DOSed by identical prompts you already can’t recover your account since you’ll likely hit the wrong prompt. There’s no protection here against an MFA fatigue attack attempt.
Why? You can rate limit the business logic but still show the user the default flow.
For example: if a user is requesting a password reset link 10 times a minute, you can send the link just once but display every time that a reset link was sent by email.
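A sketch of that debounce (the window is arbitrary, and the send/render helpers are hypothetical):

    import time

    DEBOUNCE = 600  # seconds between real emails
    last_sent = {}  # user id -> timestamp of last real email

    def request_reset(user_id):
        now = time.time()
        if now - last_sent.get(user_id, 0) > DEBOUNCE:
            last_sent[user_id] = now
            send_reset_email(user_id)  # hypothetical mail helper
        # Always show the same "a reset link was sent" page, so a
        # caller can't tell whether an email actually went out.
        return render_confirmation_page()  # hypothetical view helper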
This flow is a bit different from a password reset email, it's a notification with a direct call to action, allow or deny.
You can't debounce them like you can with a reset password email flow.
With a typical password reset email, the actual password resetting is done by the user after they click the link in the email, only someone with access to the email can proceed, and they can only proceed on the same device that they clicked the email link.
In this flow, there is no further on-device interaction.
You're telling me Facebook, with its billions of dollars and leetcode interviews, can't figure this out? That is outside the realm of computable functions?
Fidelity are clowns. They've spent an impressive effort breaking every god damn third party integration AND using Akamai to block scraping. I can scrape Ameriprise fine, but no matter how creative I get Fidelity gives back a weird error on login.
(This is on top of them not sending any actionable email when changing my contributions to 0 in between pay periods)
I'm rolling my 401k out as often and fast as possible. I hate American banks so much.
Would https://github.com/lwthiker/curl-impersonate help? Haven’t tried with Akamai, but did help with another widely used CDN that shall remain unnamed (but has successfully infused me with burning hate for their products after a couple of years’ worth of using an always-on VPN to bypass Internet censorship and/or a slightly unusual browser).
Afaict, it drives a stock Chromium instance. I'm not sure how Fidelity is detecting it, but they detect it even in normal headful mode. Idk if there's some JS that notices there are no mouse-move events.
It's just not worth the headache. I despise bending over backwards for companies like this. But obviously I have no choice since they're my 401k plan facilitator.
> Fidelity are clowns. They've spent an impressive effort breaking every god damn third party integration AND using Akamai to block scraping.
What’s funny/sad is they probably pat themselves on their back thinking their security is so advanced and awesome. Financial services web integrations are all total clown shows.
But can't you buy API access? I would assume that's more of a business decision to promote paid-for API access, rather than "security" against scraping.
It wouldn't be hard to add to the app though. Obviously if you get a flood it's bullshit, and more than a couple can be ignored. It's not rocket surgery.
I've been getting these on my LinkedIn account for the past couple of days. Every few hours I get an email with a magic login link. They seem legitimate, originating from various locations around the globe.
Happened to me yesterday. I was baffled, but then I found that you can request the one-time password using just the email associated with the LinkedIn account, so the password wasn't compromised.
I have changed the password and the main email, and in LinkedIn's privacy settings removed the visibility of the email.
Do you have a source for that? Or any more info? It's not that I doubt it; I ask because details like my work email, job title, and place of employment have been leaking into the hands of marketing companies and I am trying to figure out how.
Your own company could've sold it to data brokers. Look into Equifax's Work Number score, it includes fun things like where you worked and how much you made. But no, let's not unionize or anything.
They mention "FIDO® Certified* security keys"; this presumably means physical keys only, and not soft keys like the ones keepassxc/bitwarden provide? If so, that might be too much of a hassle for me. I care about my security, but I don't care enough to drop $100 on 3 separate security keys and find 3 separate places to keep them secure.
But yes I wish you could use one hardware key as backup and one software key for day-to-day usage, or at least the security key in a trusted device (up to you to have a circular dependency to your main device or not).
At least for iCloud sign-ins (not sure about password resets, too lazy to check), clicking "allow" doesn't allow the sign-in; it only displays a 6-digit code that you have to enter to log in.
> he received a call on his iPhone that said it was from Apple support. "I said I would call them back and hung up," Chris said, demonstrating the proper response to such unbidden solicitations.
We're long-conditioned to assume that calling a large company and reaching a human will be difficult to impossible - and if we succeed, it will be an unpleasant experience. Much more so for a major tech company.
Insofar as this scam succeeds, it's partially due to intentional business design.
A few weeks ago, we had a major problem with our Apple developer account (which is registered in my name). For days, I tried everything to avoid calling customer support (for the above reasons) and only agreed when our release team started panicking. I was more than surprised by how incredibly good Apple's support team was. Recovering from the problem was quite difficult (and the circumstances that led to it made me question Apple's SW dev capabilities), but the support experience was simply perfect.
I think it is just a transition period until they can get rid of models with the mute switch in their lineup. Since the action button is now configurable, focus modes could soon go back to being the configurable way to silence your phone.
This seems like it is entirely a human problem, not any kind of technical failure. The fix is the same as it always was -- people need to be trained to say no by default, do not trust inbound calls ever, and never ever share your credentials.
If you follow that advice, this attack poses no risk other than annoyance. If you do not give your password to the creep who calls you claiming to be apple support, you will be okay.
A system that lets an attacker send hundreds of push notifications, effectively making a phone unusable until you click "allow" is a technical failure. So is one that lets an attacker spoof Apple's caller ID. Sure, that one is a failure with caller ID in general, but it's not beyond Apple's ability to special-case its own numbers.
> people need to be trained to say no by default, do not trust inbound calls ever
This really sucks though. It basically means that our current phone system is inherently broken and something that was potentially useful before is no longer useful due to malicious actors.
This happened to me about 2 yrs ago. It catches you off guard when you receive a spoofed call from Apple Care as you are being bombarded with PW reset requests from your iCloud. Of course, the hacker is really good and answers all the Apple-related questions fluidly. I believe my account data came from the big Ledger hack, so they were targeting crypto holders. iCloud security was so weak back then!
I've been too immersed in university happenings recently. It took me clicking on the link and reading until "password reset feature" to realize that this wasn't some bizarre phishing attack involving Masters of Fine Arts degrees.
I'm still disappointed by Apple's implementation of security keys. I want to be able to prevent all 2FA methods other than security keys, but it still seems possible in certain flows to authorise a new login with another iOS device, making it vulnerable to this attack.
Interesting. I was contemplating moving to security keys (which according to the setup flow "replace verification codes"), but IIUC you're saying one can still fall back to verification codes in some flows?
I would if I were doing something that needed heavy security, but I'm just a boring average joe. My critical accounts are protected by TOTP on one (backed-up) device only; other things are kind of "good enough" with passkeys and passwords. If I ever become a criminal mastermind or double agent I'll probably dive into such methods though.
Yet another reason why phone number verification is the most insecure way to verify users, and it doesn't matter if it's a company like Apple using it or your bank with its so-called 'military grade encryption'. The point still stands [4], with countless examples [0] [1] [2] [3].
Unless you want your users to be SIM swapped, there is no reason to use phone numbers for logins, verification and 2FA.
The gp proposes a different "private identification string" that's not public. Public IDs such as "email address" or "phone number" are susceptible to what this article is talking about.
> On the official Apple reset form, the "phone number" is one of the id options the hackers can use to MFA bomb the target
Funny thing is, you cannot set a passphrase or equivalent recovery code unless you have an Apple device. So users who have an Apple account for development purposes (I hate Apple device UX and won't ever use anything Apple again other than to approve releases and manage certificates) and have no Apple products are cursed to use their phone number.
I used to be hardcore about stuff like this, but as I grew older I guess I gave up some of my morality, bought things like a $150 iPhone, and moved on with life if it was making me $$$.
Given that the gp was talking about victims being "SIM swapped", I strongly suspect he's referring to the classic sim swap attack where you sim swap, then use the newly registered sim to receive a password reset code. If it just involves discovering your phone number, you wouldn't need to sim swap at all.
>The gp proposes a different "private identification string" that's not public. Public IDs such as "email address" or "phone number" are susceptible to what this article is talking about.
This is a non-starter for the general public. If they can barely remember their password what are the chances they'll remember a "private identification string" or whatever?
That is not true. Please read the article: he even bought a new phone, and this did not stop the attack, because of the same phone number.
I would not even call this an MFA attack, as they did not need his password. It is more like a password recovery attack.
TFA talks specifically about a victim buying a brand new phone, registering a new appleid, and getting MFA bombed immediately when putting in his old SIM...
> and getting MFA bombed immediately when putting in his old SIM...
I think it's technically unrelated to the SIM; rather, to create the new Apple ID he used his existing (compromised, lol) phone number for "verification" or something. Which is weird in a way, because then Apple must allow multiple accounts per phone number?
I think we should start bringing product liability lawsuits against any organization whose user accounts can affect financial data and that uses SMS one-time codes as the default (or enabled by default), with the heaviest legal remedies for financial organizations where that's the only option.
We should also update PCI DSS compliance, or whatever the relevant security standard is, to declare SMS one-time codes totally insecure.
We can also reach out to the insurers these companies use and tell them to force removal of SMS one-time codes.
Do a multi-pronged assault on SMS one-time passcodes.
I think the more urgent thing is to not use the social security number both as the ultimate secret, and also as a number you must give to hundreds of people.
That. I'm in favor of stopping this societal wave of making phone numbers the equivalent of digital SSNs (they're critical for digital life, everyone wants them, nothing good happens when you hand them out that freely).
Well, if you fine companies for using SMS for security… you should put the CEO in jail for authenticating with a social security number… if we go by just the number of people affected by skimmed SMS versus stolen SSNs.
Never going to happen on the consumer side. Consumers lose their devices way too often to make TOTP or passcodes viable.
Financial institutions can detect if your phone number has been ported or forwarded.
The bigger threat is phishing and password sharing between accounts. I ran tech at an investment firm/neobank and never saw an attack on SMS 2FA, and we had over a million customers. We had email 2FA for a while; a significant number of people shared passwords between their email and their bank.
It still seems wrong to me that we, as a society, have basically accepted this level of crime as just a constant sort of background noise in daily life.
The lack of rate limiting is surprising, either on the server side or the OS side (or both).
I mean they already lock my iPhone after too many failed attempts with my passcode and it gets longer each time, I feel like the lock here should be the same.
I think the way the attackers probe whether a victim is using an iPhone is by spamming iMessage, using Beeper-style access to the Messages servers and interpreting the error codes.
The fatigue part: if you clicked allow, and the hackers called you for the second step, but you responded "I understand you're a hacker and are wanting to steal from me in some way, but I am only going to give you incorrect pin numbers, so please stop with the reset dialogs and update your database not to try it again with me" .. would they stop? /s
Quite shocking how oblivious a lot of ostensibly tech savvy people are to the existence of hardware security tokens. Yubikeys have been around for over 15 years now, although Apple only added support for hardware tokens recently.
B-but iPhones are secure and are the best and Apple spends so much money on security to keep us safe and don't need any government/EU oversight at all. Proof that Apple's "it's for your own good" has always just been marketing.
(Don't get me wrong, let's go after Google, MS, Sony, et al too!!!)