I work as a sales rep in-store for a telco. From a security perspective, it's ridiculous.
We use computer monitors which customers face from the same angle as us. I'm sure someone thought it would make the retail scenario more inclusive, but security-wise it's a mess. I can't verify account details without pulling up those same details for the customer to see. So I ask people for their details, click the button, and cross my fingers that they're right. If they're wrong, what then? They might legitimately not have known whose name it was under. It might be under their dad's, mom's, partner's or business' name. Doesn't matter: the system has absolutely no design affordances for giving multiple people different levels of security privilege on accounts that are used by more than one person.
Furthermore, we have no organisational clarity about access privileges. Everyone makes up their own standards. Some people in the company are very strict, and won't do a SIM swap without photo ID, or full ID verification over the phone. Some people will do one if the customer quotes the same last name and could theoretically be the account-holder's child. But does it matter, when any customer can easily find out name, DOB and address by coming in store, then call up and get the SIM changed over the phone? We do have account PINs, but very few people set them. And you could find one out in store if you were sharp-eyed.
There's a constant tension between providing a good customer experience and protecting security and privacy. But our commission is based partly on customer experience feedback scores - and if you're the one asshole who tries to follow all the rules (or follow what you decide the rules should be, because there aren't any haha) then you're going to a) get bad feedback and b) alienate and make life difficult for the people behind the majority of ambiguous security events, who I'm sure are 95-99% trustworthy.
Anyone relying on two-factor auth with a phone number who uses my company is vulnerable. Simple as that. It would take a determined attacker a day to get control of your number. All you'd notice would be that your SIM stopped working. By the time you'd gotten a new one re-activated it would all be too late - and you'd still be vulnerable.
I'm not sure what telcos are like in other countries, but I doubt they're much better.
I feel bad for the telcos (and other agencies that try to keep our private info). I called up my ISP a few months back and was presented with a variety of security questions that I couldn't answer. I certainly didn't know the 4-digit passcode I created 2+ years ago and hadn't used since. My first couple of guesses at my favorite movie were wrong. It was only after my second guess at my best friend during elementary school (probably worth a blog post on the changing winds of our memory) that I was able to access my account. The problem was that I threw all sorts of answers at the customer service rep on the other end of the phone, and they were willing to ignore all of my incorrect guesses in the hope I would eventually hit on something they could verify. But that is exactly the problem: if I weren't me, I wouldn't want someone getting as many opportunities as I got to eventually hit the right answer and "prove" they were me. So where do you draw the line between customer support and customer security, without either enraging real customers or allowing people to illegally access customer accounts?
TL;DR Someone create a startup to better identify people remotely.
A mobile carrier's identity verification could be augmented by asking questions about who you called recently.
Remote identity verification over the internet is not a solved problem, but FIDO's U2F is pretty good. One problem is that hardware tokens cost money, and most people won't buy them. To avoid getting locked out you have to buy (and the service has to support) multiple hardware tokens, which protects against loss or breakage. To prevent targeted token theft, the token needs some kind of biometric verification (iris scans would be good), but that gets very expensive in a device that needs to be reliable and yet live on a keychain.
That or something similar is the only way to provide verification while preventing the creation of a centralized identity database [I think a cryptographically assured identity verification system that dramatically limits identity repudiation would turn into a privacy nightmare dwarfing current identity database efforts being made by many companies]. With something U2F-like, each company or service stores their own verification seed value that is used in the future to verify you. It could be on a mobile device, like standard TOTP auth, or on a separate specialized hardware token. It could use pre-shared seed values and hashing, or nonces and asymmetric crypto.
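To make the "nonces and asymmetric crypto" variant concrete, here's a minimal sketch of a U2F-style enrollment and login. This is my own illustration, not the actual U2F wire protocol; it assumes Python's 'cryptography' package, with an Ed25519 keypair standing in for the hardware token:

```python
# U2F-style challenge-response sketch: the service stores only a public key,
# and the token proves possession of the private key by signing a fresh nonce.
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Enrollment: the "token" generates a keypair; the service keeps the public half.
token_key = Ed25519PrivateKey.generate()
service_stored_key = token_key.public_key()

# Login: the service issues a random nonce (the challenge)...
challenge = os.urandom(32)

# ...the token signs it with a key that never leaves the device...
signature = token_key.sign(challenge)

# ...and the service verifies, learning nothing an eavesdropper could reuse.
try:
    service_stored_key.verify(signature, challenge)
    print("token verified")
except InvalidSignature:
    print("verification failed")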
"A mobile carrier's identity verification could be augmented by asking questions about who you called recently."
Except that could be socially engineered pretty easily.
Plus, if your phone dies, you're in for a major inconvenience, because no one remembers actual phone numbers any more. I remember the number I had as a kid, but nowadays, even though my mom lives in the same place, I reach her only through VoIP or cell, and both those numbers are stored on my phone, not in my head.
All your excellent examples will dwarf mine, but I'll still share it, since it's a very cheap medium:
When I was at Fortis Luxembourg, the bank gave me a passive token: a card with a few dozen digits printed on it. At each login it would request 3 of those digits, along with the password.
The key point is that the full set of digits was never transmitted over the wire, so someone who intercepted the communication could never rebuild the full card.
Cost for the bank? A few cents. Security? The best I ever had from banks.
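The mechanics are cheap to sketch too. This is my guess at how such a card works, not Fortis's actual implementation: the bank keeps a copy of the card and challenges 3 random positions per login, so a wiretap on any one session reveals only those 3 digits.

```python
# Grid-card sketch: the bank prints N digits on a card and asks for 3 random
# positions at each login; intercepting one session never reveals the full card.
import secrets

CARD_LEN = 40  # hypothetical card size

def issue_card():
    """Bank side: generate the digits; a printed copy is mailed to the user."""
    return [secrets.randbelow(10) for _ in range(CARD_LEN)]

def make_challenge():
    """Pick 3 distinct positions to ask for at this login."""
    return secrets.SystemRandom().sample(range(CARD_LEN), 3)

def verify(card, challenge, answers):
    """Check the user's 3 digits against the bank's copy."""
    return all(card[pos] == digit for pos, digit in zip(challenge, answers))

card = issue_card()
challenge = make_challenge()            # e.g. positions [7, 19, 33]
answers = [card[p] for p in challenge]  # the user reads these off the card
assert verify(card, challenge, answers)
```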
I'm baffled. In Germany, the chipTAN method [1] [2], which uses the bank card as a cryptographic element, is pretty standard. And usually German IT seems to be years behind the industry standard (e.g. I don't know of a popular German e-mail provider that offers 2FA).
[Edit] This is the best thing about chipTAN: Even if the computer is subverted by a trojan, or if a man-in-the-middle attack occurs, the TAN generated is only valid for the transaction confirmed by the user on the screen of the TAN generator, therefore modifying a transaction retroactively would cause the TAN to be invalid. [/Edit]
So the chipTAN generator reads the details of the transaction optically, and then you just confirm them?
Pretty clever. On my chipTAN (Belgium, ING) I have to enter the number by hand (part of the account# of recipient, amount).
On the positive side, mine does ask for a PIN before generating the TAN, so it is probably a bit more secure (balanced against wear of the keys on the TAN generator, of course - so it's arguable which one is better).
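The transaction-binding property described in the [Edit] above falls out naturally if the code is derived from the transaction details themselves. A rough sketch of the idea (not the real chipTAN protocol, which runs on the card's chip; the key and truncation scheme here are illustrative):

```python
# Transaction-bound TAN sketch: the generator mixes recipient and amount into
# the code, so a man-in-the-middle who alters either invalidates the TAN.
import hashlib
import hmac

def make_tan(card_secret: bytes, recipient: str, amount: str, counter: int) -> str:
    msg = f"{recipient}|{amount}|{counter}".encode()
    digest = hmac.new(card_secret, msg, hashlib.sha256).digest()
    # Truncate to a 6-digit code the user can read off and type in.
    return f"{int.from_bytes(digest[:4], 'big') % 1_000_000:06d}"

secret = b"key held on the bank card's chip"  # never leaves the card
tan = make_tan(secret, "BE71 0961 2345 6769", "250.00", counter=42)

# The bank recomputes the TAN from the transfer it actually received;
# a tampered amount yields a different code, so the transfer is rejected.
assert make_tan(secret, "BE71 0961 2345 6769", "250.00", 42) == tan
assert make_tan(secret, "BE71 0961 2345 6769", "9250.00", 42) != tan
```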
Unfortunately, that sort of static information frequently is targeted for phishing. The bank can keep telling people that they will never ask for all the codes at once, but some subset of customers will happily comply with such a request in a badly written email.
Mind you, dynamic 2FA frequently only narrows the time window in which phishing is effective. Even with transaction-based 2FA, you'd need people to actually read the text message the bank sends them with the transaction authorisation code.
After enabling 2FA, disabling SMS for 2-step and SMS for password resets, and ensuring that you don't have any phone number set as a way to get into your account, what is your plan for continuing to use your account if your phone is stolen?
It's also possible to install the seed for the TOTP generator on multiple devices - all the ones I've bumped into have a mechanism for typing in a long-ish string as well as scanning a QR code - record that string (secured like a password, in something like 1Password) and you can always re-seed another device to come up with the same codes. I've got all mine on two phones and an iPad - one of the phones is usually in my pocket, the other is almost always at home.
As always, it's a security/convenience tradeoff - I've gone from needing "something I know and something I have" to "something I know and any one of several things I have".
Your tradeoffs there may vary. If I were a political-dissident/whistleblower/drug-czar I'd probably consider the risk of losing access altogether preferable to opening up additional avenues for vulnerabilities - an NSA-level adversary would probably have a significantly easier time if they knew they only needed to stealthily subvert one of several devices (at least one of which I don't usually have on my person) to get access to all my tfa-secured assets. But the additional risk, if I'm protecting myself from 4chan-grade griefers or non-network-pervasive internet criminals, is - for me - low enough to accept for the additional reliability and convenience of multiple authorised tfa token generating devices.
For all the sites that use TOTP, I have a screenshot of the QR code that was presented to me, encrypted with GPG (using symmetric encryption and a random password), and I put that encrypted file in my 1Password collection.
I feel reasonably secure about this (as secure as I feel about all the passwords already in 1Password), and I have the huge advantage that changing my phone won't require remembering to disassociate all accounts first if I don't want to lose access to them.
As TOTP works without a back-channel, that QR code stays usable until I manually revoke the key on the respective web site.
In my experience, when setting up a new device, you have to scan the QR code or type in a code, then verify a generated key or two to "confirm" the new device. I'm not sure if that's an optional step, but it seems like you'd need to log in first, creating a chicken-and-egg situation for yourself. I'm sure you could enroll another device (e.g. a tablet that always stays in the house, SO's phone, whatever), but it doesn't seem like it'd work as you spelled it out.
Backup codes may be a good option if kept somewhere very safe.
The "enter a generated code to confirm" step is to confirm at the server end that you've got an identical seed - they (presumably) use that before committing that seed to your user account (to ensure you aren't about to lock yourself out). It's mot needed at the client end.
I've got at least Gmail, AWS (/Amazon), GitHub, Dropbox, Zoho, and several TOTP-TFA-protected WordPress sites on 3 different devices using this method. It definitely works. I see additional devices start to generate the same codes when I add the same seed (so long as their clocks are reasonably synced...)
This is using the Google Authenticator app on iOS and Android; I _think_ any RFC 6238 compliant TOTP app that lets you type in a string to key it should "just work".
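For reference, RFC 6238 codes are just an HMAC over the current 30-second time step, which is exactly why any device seeded with the same base32 string produces identical codes. A minimal sketch (the seed below is a made-up example, not a real account):

```python
# Minimal RFC 6238 TOTP: every device holding the same base32 seed (the string
# behind the QR code) derives identical 6-digit codes from the current time.
import base64
import hashlib
import hmac
import struct
import time

def totp(base32_seed, t=None, step=30):
    key = base64.b32decode(base32_seed.upper())
    counter = int((time.time() if t is None else t) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                  # RFC 4226 dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{code % 1_000_000:06d}"

seed = "JBSWY3DPEHPK3PXP"  # example seed only
# Two "devices" with the same seed and synced clocks always agree:
assert totp(seed, t=1_000_000) == totp(seed, t=1_000_000)
print(totp(seed))
```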
I have a similar method. When I set up 2FA on an account, I print out the QR code and scan it with the phone to verify it works. I then store the paper QR code in a safe place.
So you also need to make sure that your phone's browser doesn't have your Google password stored, and/or your phone's storage is encrypted with a strong-enough key.
Every time I go to https://www.google.com/settings/security and click on 2-step verification, I'm required to enter my password if I haven't done so in the last 5 minutes or so.
With this scheme someone can't access your account by stealing your phone. You also can't access your account by getting your phone number to point to your new phone though.
I didn't downvote. My reply was to "other trusted devices can bypass 2factor": yes, they can access the account, but they can't change the password without knowing the current one.
(Accidentally deleted a comment of mine, this attempts to copy it)
'"SMS is not designed to be a secure communications channel and should not be used by banks for electronic funds transfer authentication," Stanton told iTnews this week.'
Make sure you disable it in BOTH spots, or you are still vulnerable! Disabling mobile for account recovery still leaves it for 2FA. You need to do both.
What strikes me most in these stories is how you always have to find some higher-ranking company employee through personal connections in order to get even a tiny chance of taking your account back.
These companies build on their users but, when their users need them, they betray them.
People need to be much more aware of the fact that you don't own your gmail address, or your Twitter/Facebook/LinkedIn/Instagram/whatever account. Those companies encourage people to build their reputations and networks and "personal brands" inside their walled gardens, while repeatedly demonstrating that they won't lift a finger to help protect the user's custodianship of "their" usernames.
Unfortunately - when you explain this to people there's no really good answer to their immediate "so what should I do?" question.
I no more "own" the bigiain.com domain than I own "bigiain" on HN, or "bigiain@gmail.com". While I can ensure I keep paying for it's registration, I have no doubt that if Monsanto or Goldman Sachs or Apple launched an new thing and trademarked it "Bigiain", my registrar would fold instantly to a legal demand from their lawyers, and I'd be just as out-in-the-cold as all those people without friends-of-friends in high enough places at Instafacetwigoo to "fix things", or with publicity platforms like @mat behind them.
I suspect in the future, there'll be a well known way to tie your online activity/reputation/network to a strong public key (with some distributed blockchain-like revocation/renewal audit trail). If anyone's working on something like that - I'd love to hear about it...
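The core primitive for that is easy to sketch - sign a claim tying your identities together with a long-lived keypair that anyone can verify against your published public key. Everything else (the revocation/renewal audit trail) is the genuinely hard part. A toy version, assuming Python's 'cryptography' package:

```python
# Toy identity-binding sketch: publish the public key once, then sign claims
# linking your accounts; anyone can verify continuity of identity from them.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

identity_key = Ed25519PrivateKey.generate()
published_key = identity_key.public_key()   # posted on all your profiles

claim = b"'bigiain' on HN, bigiain.com and bigiain@gmail.com are one person"
proof = identity_key.sign(claim)

# A third party checks the claim against the published key;
# verify() raises InvalidSignature if the claim or proof was forged.
published_key.verify(proof, claim)
```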
I have been thinking about this quite a bit recently as well, and I do think the future looks like something you described. However, I have doubts. Beyond being worried about the big brands removing your access to your hard-earned reputation (which seems unlikely on a mass scale), what would be the other common use cases for a crypto identity key? As we know, for a majority of people to adopt a new technology, there has to be a very compelling use case. I am not sure a consolidated identity key solves any real problem, rather than just being a cool tech thing that us hackers would like to see - kind of similar to the adoption problem bitcoin in general is having.
I am curious if you guys have any really good thoughts on products that could implement a crypto identity key that solves a real life problem. Would love to discuss.
I think it's a good idea to own your own domain name, at least as a tech savvy user. You can still use Google Apps with it (Google for work now?).
That being said, I think it's a bit unfair to say companies won't lift a finger to help protect their users' usernames. On the technical security level, many companies put a lot of effort into things like 2FA and general internet security - in particular Google, but also Dropbox, GitHub, and others. On the service level (i.e. what happens when you have to talk to someone), everybody could probably improve quite a bit. OTOH that's costly and would ultimately need to be paid for by the customers somehow.
On the legal level, there isn't really anything these companies could do for you. If you do not own a trademark for your chosen domain name (account name, page name, ...), you'll lose it to someone who does [0]. That also won't change if you have all kinds of friends in all kinds of places - your problem then is basic trademark law, not the goodwill of some company (that has to adhere to the law, after all).
Disclaimer: I work for Google.
[0] possibly with the exception of the account or domain name being your legal name, but I don't think there's a general norm for that.
Technical measures to prevent account theft are always welcome, but they stop there: at prevention.
As most of us know through experience though, poop happens.
In our era, for many people an account at an online social network is part of their identity. Losing it can be devastating. An account at Google is even more; it is one's documents, emails, contacts, calendar, photos, various data and digital purchases.
So it is very important that there is support when you need it. Is it really so costly? I don't know. How many cases of account theft are there every day if the technical (prevention) measures are good? Maybe affected users would be willing to cover some of the cost?
"I have no doubt that if Monsanto or Goldman Sachs or Apple launched a new thing and trademarked it 'Bigiain', my registrar would fold instantly to a legal demand from their lawyers"
That particular problem can be solved by getting a domain that nobody else would want. In my case, I've registered my first name+last name.com, which will certainly never be considered for a trademark.
This just happened to me. The same timeframe, the same vector of attack, but a different target. They wanted my Twitter handle. Fortunately it was an old handle that Twitter had locked down and was not transferable. The hacker succeeded in making me lose my handle for a few days, but some friends came to my aid and I was able to get resolution through Twitter support.
My telecom company was helpful at first, but then we began to see circle-the-wagons behavior from them. We were at least able to get the call forwarding off of the account, but they would not tell us any details about what had happened on the account.
Until your story (and even now) I wasn't exactly sure whether my hacker had been able to forward the text messages, or had simply routed phone calls to his phone and used Google's password reset process to get a robocall to accomplish the same thing.
All of this is seriously making me consider creating my own 2FA service, only slightly better.
One quick recommendation I would add would be to put a passcode on your account with your mobile provider. Just call them and say "I'd like to add a passcode to my account", so you can at least add one extra layer of security there.
Every cell phone carrier out there is PROHIBITED from allowing you to make changes to an account, or from getting any personal information out of it, without properly identifying yourself: an authorized name on the account, plus the last four of the account holder's Social Security number OR an account PIN. If a customer cannot provide these over the phone, they need to visit a store with a photo ID matching the account holder to get access to the account.
Note, this isn't specific to any carrier; these are FCC regulations that poorly trained CSRs ignore.
Unfortunately it is not that hard to get someone's first name, last name, and their entire SSN (not just the last 4 digits), especially if you have a bit of money to spend and know the right places to look.
Personal aside: does anyone know if I could call Verizon support and ask them to require a specific passcode be used before accepting any call relating to my account? Before I call and ask, I'd like to know the odds of them actually agreeing and actually abiding by it.
- account recovery (using only a phone): THIS IS DUMB.
I only use an alternate email for recovery (my wife and I use each other's). That way, each recovery account is still 2FA-secured.
There's already been a story floating around about a young kid charging his dad's credit card via the phone recovery option (it was an Android phone in that case). Phone-based recovery is NOT the same as 2FA.
There's a balance between keeping others out and preventing yourself being locked out. Every time you add another factor, you also have to add another recovery option in case you lose that factor:
- Hacker must break: (A and (B or C)) or (D and (B or C))
- Losing (A and D) or (B and C) locks you out
Only the 6th option is unambiguously better than a single password. I guess using a friend's phone for password recovery and your own for 2FA would achieve that.
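Enumerating all 16 true/false combinations of those two expressions makes the tradeoff concrete (treating A-D simply as the four factors in the quoted formulas):

```python
# Count how many of the 16 factor combinations satisfy each quoted condition:
# factors an attacker holds (break-in) vs. factors you've lost (lockout).
from itertools import product

broken = locked = 0
for a, b, c, d in product([False, True], repeat=4):
    if (a and (b or c)) or (d and (b or c)):   # attacker breaks in
        broken += 1
    if (a and d) or (b and c):                 # you are locked out
        locked += 1

print(f"{broken}/16 combinations break in, {locked}/16 lock you out")
# -> 9/16 break in, 7/16 lock you out
```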
You could also have 2 SIM cards in your phone, one number known, one for additional business. A lot of phones have sockets for 2 SIM cards, and the cost is almost nothing.
"and every so often, I would get authorization code texts for the Gmail account that was tied to my Instagram handle"
As far as I know these authorization texts are only sent when your Gmail username and password have been entered correctly. This would indicate that the attacker knew your long random password. Keylogger? From there they'd only need your 2FA code to access your account.
If I'm not mistaken, the attacker set up call/message forwarding on the victim's number via the telco, and then chose the "forgot my password" option, where an SMS text from Google (now going to the attacker's phone) can be used to reset the password.
Well, I heard from a friend of mine that in Argentina the cellphone provider can access your info.
The case was this: he was cheating on his girlfriend, and a friend of hers accessed my friend's text message log, saw the evidence, and told the girlfriend about it. Apparently (though I never confirmed this) the friend who read the messages worked at my friend's cellphone provider.
Since then I've known I can't trust my cellphone ever again, though I'd always suspected this could be possible.
This is why I always recommend against using SMS-based 2-factor. Without even doing any serious research, it seemed pretty obvious to me from day one that at the very least someone like NSA/FBI could forge your number somehow with or without the carrier's help, but there's also the potential for other attackers to do it, too.
Call forwarding didn't even cross my mind, but it just goes to show how ridiculously broken SMS-based two-factor authentication really is then, and even worse than I thought.
Ideally what I'd want is an NFC ring or a smart band/watch that can use FIDO's U2F or a similar protocol that works through NFC, to do 2-step verification for me.
This is a good reminder that your phone may not be as secure as you think. In many countries, governments are able to get access or ask for this type of change to be made by the national telcos.
The precautions you can take at the moment are to use a mobile app (or preferably a security key!) rather than SMS backup, and, if you're feeling especially uncharitable toward your phone company, to change the backup number Google makes you enter to a Google Voice number rather than that of your actual phone - creating a circular situation where it can't really be used as a method for account recovery/hijacking.
This article brings up a question about protecting email addresses that I'm hoping a HN reader can answer.
I have a unique email address for PayPal--different from my normal email address--that I want to keep secret. The problem is that every time I make a purchase, the merchant gets this email address (in addition to the normal email address I gave to the merchant). I know that merchants get it because I get junk mail at my secret PayPal address from merchants I did business with.
Is there no way to make a PayPal payment without PayPal handing my email address over to the merchant?
As a related question, why do I have to trust the merchant to redirect me to PayPal's website to make the payment? There are many ways I can get fooled into entering my PayPal password directly into merchant's website (for example, the merchant opens the PayPal site in a frame or pop-up, so you can't verify that it's really PayPal). Isn't there a way I can open my own browser window, login to PayPal, and give some sort of invoice number to PayPal to direct payment to the merchant?
>(for example, the merchant opens the PayPal site in a frame or pop-up, so you can't verify that it's really PayPal) //
You can right-click the page in Firefox and choose "view page info", then on the security tab you can see if it's PayPal, see the certificate, etc. Someone could hijack right-click, though that would take a bit of effort. I think in FF shift+rightMouseClick overrides normal right-click to give you the browser menu, but probably that's capturable by the site too.
Ctrl+I is the shortcut, but I don't think it handles frames.
This is why "2FA" is supposed to actually be two factors. If you're using a phone number for 2FA, then authentication still boils down to the same thing: Something you know.
It's still two factors. If someone has only your phone but not your password, they still can't log in. The problem here is that the phone number was also used as a password recovery option, which effectively means you only need the phone to log in. I suspect most gmail users with 2FA are doing this, which defeats the purpose of 2FA. It just becomes "different factor".
It's the password recovery by phone that's the weakness. But I think people getting locked out of their own accounts is probably a bigger problem for Google than people getting hacked, so they err on the side of saving you from getting locked out.
Interesting, so adding 2FA actually decreased security... Well shit. Interesting case that shows just how unpredictable such things can be.
As far as I understand, though, 2FA increased the attack surface in this case. The web interface itself still remains impenetrable, doesn't it (know your hard-to-guess password and you should be fine)? The mobile provider was the weakest link, and any system is only as secure as its weakest link.
2. Email randomized password stored in PasswordDatabase
3. PasswordDatabase is stored in CloudDrive
4. CloudDrive randomized password stored in PasswordDatabase
5. CloudDrive with 2FA
6. PasswordDatabase secured by weak password
7. 2FA codes from 2FApp
8. PasswordDatabase, CloudDrive, Email only available together on devices with a human-friendly password. Those 3 and the 2FApp are all on the phone, secured by human-friendly password, on me always.
3 combined with 6 sounds like a recipe for disaster if someone manages to compromise your CloudDrive account (probably not by breaking the password, but by social engineering or a method similar to the one in this article). If they get that, they have your encrypted password database, and if that has a weak password... you're totally SOL. The password database's password is one you want to be /very/ strong.
Personally, I have a password that my password manager generated that I use for it. I had it written down in my wallet for a while, but after typing it multiple times a day for a while I memorized it and since destroyed the paper. It's a shorter password than what I use for my stored passwords, but I think it strikes a good balance. (And it's not a GUID, but if you think you could memorize that then it probably couldn't hurt. That's risky, though -- if you forget, there go all of your passwords for everything!)
I don't know of any off the top of my head, but there was that time a few years ago when Dropbox accidentally let anyone in without a password. This isn't to pick on Dropbox, but security lapses happen and it's wise to have multiple layers of strong defense to reduce your risk. (Also, if someone compromises the email associated with your CloudDrive, they can use that to get your CloudDrive by invoking a password reset.)
EDIT: Wolfram|Alpha estimates the entropy of a password generated using the constraints I used for mine as roughly 85 bits (the relevant space would take 14 trillion years to enumerate). It actually has a pretty information-heavy password strength estimator (though I can't attest to its reliability as I'm not familiar with the internals).
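As a sanity check on that figure (my own back-of-the-envelope arithmetic, not Wolfram|Alpha's model): a random 13-character password over the ~95 printable ASCII characters lands right around 85 bits, and the quoted enumeration time implies a fairly slow assumed guess rate.

```python
# Entropy of a random password is length * log2(alphabet size).
import math

bits = 13 * math.log2(95)
print(f"{bits:.1f} bits")                     # ~85.4 bits

# "Years to enumerate" depends entirely on the assumed guess rate; the quoted
# 14 trillion years corresponds to roughly 1e5 guesses per second:
seconds = 14e12 * 365 * 24 * 3600
print(f"~{2 ** bits / seconds:,.0f} guesses/second assumed")
```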
This is precisely why I thought Digits was such a terrible idea (check my comment history, it's there.) SMS is so incredibly insecure that anyone relying on it should not consider themselves security savvy. SMS TFA is lipstick on a pig. Cellphones are so cheap these days, they should all come with a TFA app pre-installed. I'm also not too keen on websites making it so easy to change your username. The story of @N on Twitter comes to mind. Is anyone working on Digits without the SMS part?
It's not just SMS that's the problem though. In many cases any random person off the street can call up your telco, pretend to be you, and get all of your calls and voicemail forwarded to them.
This leaves so many open questions. Foremost: how did they guess his Gmail password? Is there a way to access Gmail without knowing the password, i.e. by getting a password reset sent via SMS?
He said his Gmail account was basically firstname lastname - and given a name, someone determined can generally make a fairly good guess as to the phone number. Phone books, etc, etc.
This sounds like an argument for adding hardware multi-factor auth in google. It's not a panacea, but a good starting point that can't be easily spoofed or hijacked.
So is the takeaway that we should all disable SMS-based options for receiving 2FA codes, because it weakens your 2FA to the level of your (non-2FA) cell phone account?
I think when I enabled iCloud 2FA it included 2 channels for communication with my phone: one as a named iOS device (where the OS handles receiving and displaying codes), and another as just its phone number. Is that for SMS? Why would they even do that?
You use your backup 2FA codes (which you've stored in a few different locations -- all offline -- including probably your wallet) to get back into your account. From there, you re-seed the 2FA.
Wild... you know, after using Google Voice for a number of years, I switched to MVNO operators for my cell phone a few years back. Now I'm glad they don't allow call forwarding on those accounts.
Though it seems like a lot of work, it's hard to imagine going through this... with a similar mindset.
Yet one more reason we shouldn't be letting telcos provide our phone numbers. They are painfully inadequate when it comes to security. And our mobile numbers are now probably the most important identifiers we have, due in no small part to the proliferation of SMS 2FA.
The lack of barriers in place for adding a forwarding number to a cellphone account is incredible. Maybe the attackers got the last 4 of his CC from a hacked data set? Or maybe the same for his social. And from there they were able to authenticate with the telco rep.
Voicemail, then call forwarding; I wonder what's next. People often ignore many of their service settings and leave them as-is (me as well), which potentially creates openings for intruders.
His recovery email address attached to the account must've also been hacked if he had two-factor on, as Google always starts the recovery process from that email.
That's my guess based on a pretty easy to figure out WHOIS. But security by obscurity here only goes so far: if someone really wants your phone number, there's lots of ways to get it.