If misleading messages ("phishing") are leading their users to enter credentials onto forms which are then used to send out spam, then the solution is not to block access to one of the sites that supports forms. There are an unlimited number of sites that support forms. There are LOTS of better ways to solve this problem. Here are a few:
* Train your users where it is and isn't safe to enter credentials.
* Don't give your users credentials. Have some alternate way to authenticate them like a login token.
* Put rate limiting on the ability of a single account to send out emails.
Blocking the site for just a few hours as an emergency response to a short-term attack is a much more reasonable approach. Sometimes, to react quickly, you need to take measures that are not the best possible choice. But there were better approaches, and the security team should take measures to ensure that they can react more effectively next time. For instance, in this case, a single mass email or email "virus" had gone out and was tempting a large number of users to give out their credentials. Instead of blocking the site that was collecting the credentials, a better solution would have been to remove the email from the mailboxes of all the students. After all, the email system is provided by the university, and this cuts off the problem at the root. They should institute the necessary technology to support doing this next time they have a phishing problem... perhaps they can even do this proactively: set up some honeypot accounts that receive no legitimate email, and automatically destroy any emails matching the signature of emails received by these honeypot accounts (with manual review afterward to correct for false positives).
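To make the honeypot idea concrete, here is a minimal sketch, assuming messages and mailboxes are plain dicts and that matches are only flagged for manual review rather than deleted. The names and data shapes are illustrative, not any particular mail system's API:

```python
import hashlib

def signature(msg):
    """Crude content signature: normalised subject plus body, hashed."""
    normalised = (msg["subject"].strip().lower() + "\n"
                  + " ".join(msg["body"].split()).lower())
    return hashlib.sha256(normalised.encode("utf-8")).hexdigest()

def quarantine_matches(honeypot_messages, all_mailboxes):
    """Flag (not delete) every message matching a honeypot signature."""
    bad = {signature(m) for m in honeypot_messages}
    flagged = []
    for mailbox in all_mailboxes:
        for msg in mailbox["messages"]:
            if signature(msg) in bad:
                flagged.append((mailbox["owner"], msg))  # goes to manual review
    return flagged
```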
I'm sorry, but that is the typical tech reply that blows normal people's minds. Blame the user. Well, the user says, sod that, let's just block the problem and get on with what we wanted to do in the first place.
People, normal non-tech people, want to use computers as a tool, not become experts in thwarting criminals, etc. If a user can't just go to a computer and simply use it, like say a library or book, then the computer and its champions are failing. It's not the user's job to provide security. And no, it's not like locking a door. The sheer amount of rubbish poor users have to go through to be safe on a computer is frankly a joke, and the reason so many non-geeks love Apple. Yes, geeks know Apple are as insecure as anyone else, but users believe they are simple and safe.
(At this point, by all means picture a toddler going mental in a shop)
I've been in this business for 30 years, and "train the users" is for me a 30-year mantra that no one outside of geekdom wants to hear. It was my job to enable them to do their job more efficiently, not expect them to become some sort of security expert.
This Uni is doing the simple, easy thing to let its users function safely. If the IT world doesn't like it, then 1: tough, 2: damn well fix it, and 3: stop blaming users.
Then, you tell them to limit emails. "Oh right," says the user, "I thought one point of email was easy mass mailing, and now you want to block it?"
Really think about the user. It's they who make computers and the internet worth bothering with.
I'd like to use my car like a tool. Why do manufacturers make them so difficult to operate safely? I shouldn't require any additional training to operate it; I should be able to just hop in at location A and hop out at location B.
Regardless of what some folks in the "User Friendly" movement would like to think, most tools require basic instruction in order to be safely used. We can't code away all individual responsibility.
Spotting a phishing form only seems like "basic instruction" to you because you're highly computer-literate. It's not; it involves understanding at least some of DNS, the difference between hosts, domains and TLDs, URLs, HTTPS, not to mention certificates and their validity.
In your analogy, it's like saying "people shouldn't be allowed to use cars unless they can verify the hydraulic pressure in the master brake cylinder".
Which is wrong: manufacturers should (and did) install brakes warning lights. And we need to come up with better warnings for users. Blaming them for these sorts of problems is unacceptable.
1) Did you click a link from an email?
2) Does the page it redirects you to ask for your login info?
If either is true, you may have received a phishing email. Now check:
1) You expected this email because you were notified about it from another source, e.g. a website or support staff.
2) If you log in to the website not via the suspicious link, the linked web page does not ask for your login.
If both of these hold, you probably don't have a phishing email.
"Login to the website not via the suspicious link" requires understanding what URLs are, how to isolate which part is "the website", how to edit them and how to enter them. The amount of people Googling for "log into Facebook" proves none of this is a given.
"You expected this email" is also not a hard test to pass in either academia or corporate settings, where users are generally besieged by unsolicted instructions to "Go here, do this, hurry up about it".
Not huge blame, but browser makers are making it harder to understand what's going on and how to use the web - obfuscating the URL, taking off parts of it, sometimes hiding the entire URL bar altogether.
Similarly, 'cookies' are 'scary' - there's no visual indication in a browser of what's going on with cookies, what they are, what they hold - you have to dig deep in 'preferences' then 'advanced' or 'security'. Instead of easier to use tools, we get legislation around cookies. WTF?
Users don't think like that. They generally don't know what redirect means, let alone recognise when it happens. I'll add that more and more attacks seem to come from trusted sources recently. This only goes to further the issue.
#2 - Many people don't know what a redirect is. Many of them don't really know the difference between email and www. Some of them won't know there is a difference; it's all just clicky things.
Here are some regular people's experiences of scams.
I love that you put "user friendly" in quotes. As if these products were for somebody other than the users.
Computers have taken over the world precisely because we have worked very hard to make them approachable by mere mortals. The only reason Google matters is that they figured out how to make the search engine much more user friendly. Apple is on top of the world because they made a more user-friendly music player, phone, and portable computer. And we HNers are all getting paid stupid amounts of money because the wide adoption of computing has created high demand.
Nobody is talking about coding away individual responsibility. They're talking about removing another bit of pointless friction from the system, so that the tools are more effective for the tool-users.
And I'll add that cars are heading in exactly the direction that you lampoon. If the car industry had thought like you, they'd still be back on hand-cranking to start the car, needing to maintain the battery's water level, and having to wear goggles. And soon Google will have solved the driving problem, mainly thanks to the way consumer adoption has driven down the costs of computing.
Great example of why Google's driver-less cars will be so successful, overall safer and why we should engineer our systems to remove as much complexity as possible for the users.
I agree with your POV and the decisions we have to make as a result. That said, users should be expected to learn computers if they want to use them. If not, they shouldn't be allowed to use them. Same policy I'd have with a buzz-saw in a shop.
So if you fall for email phishing attacks despite training, then you shouldn't be trusted with mass email rights. Likewise, the admins have an obligation to control those resources, and to train users. (If that's too hard to expect, then we need to find out why.)
Point is, users should get the blame for their fuckups-- when they fuck up. It's not an all-or-nothing thing.
I disagree. Unless the user is intentionally TRYING to break the system, it is probably not the user's fault. It is IT's fault for failing to make it easy for the user to understand.
For instance, how about if the login page says in big bold letters: "This is the ONLY page you should ever enter your password on." With this tiny change, moderately competent users are much better protected from phishing attempts that use something like Google Docs forms... although that still wouldn't protect them from something like a hand-crafted phishing site. Other techniques can help with this: for example, you could offer a bounty, paying real dollars to the first person who reports any phishing site resembling your login page.
Some steps are up to the user, but instead of BLAMING the user, make it EASY for the user.
> Really think about the user. It's they who make computers and the internet worth bothering with.
Were the IT dept. folks thinking about the user, they would never have blocked Google Docs in the first place. People want to use tool X, so the job of university IT is to ensure they are able to use tool X. They did exactly the opposite.
Also, solutions proposed by GP are reasonable ways to reduce / mitigate the risk of phishing without inconveniencing users too much.
Most bureaucratic IT departments (e.g. big corps, governments, schools) tend to be more about the reduction of work for the IT department and less about the best solutions for the users.
Since IT departments are generally regarded as cost centres, and are therefore usually either understaffed or expected by upper management to keep costs to the absolute minimum, this is hardly surprising.
In this particular case, the department is in a double bind: the success of phishing emails threatens the ability of the university to send email to many other major hosts on the net. If you sat the users down and asked them which they need more, a reliable email to people outside the university or access to Google Docs, then the decision isn't so clear cut all of a sudden is it?
> If you sat the users down and asked them which they need more, a reliable email to people outside the university or access to Google Docs, then the decision isn't so clear cut all of a sudden is it?
But it's a false dichotomy: there are solutions that preserve both.
How is dumbing down the world better than teaching best practices to people who will shape the future?
20 years earlier, wouldn't you have expected an office secretary to know not to send postal mail to everyone in the city?
Or give the cabinet keys with the petty cash to anyone who dressed like his/her boss?
> "train the users" is for me a 30 year mantra that no one out side of geekdom wants to hear
Perhaps a better approach to "training the users" might be for the University to actively attempt to phish its own users on a regular basis.
Those who fall for the phishing could be contacted directly, or have email access limited for some period of time (for example, a reduced sending rate limit).
Making self-phishing a regular occurrence (say, weekly) would train users to recognise and ignore it.
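A sketch of what such a self-phishing run could look like, assuming a hypothetical internal training site and SMTP relay (the hostnames and addresses below are made up): each user gets a unique token in the link, so IT can see afterwards who hit the fake login form and follow up with them directly.

```python
import secrets
import smtplib
from email.message import EmailMessage

TRAINING_URL = "https://training.example.ac.uk/fake-login"  # hypothetical
SMTP_HOST = "localhost"                                      # hypothetical relay

def run_campaign(addresses):
    """Send one uniquely-tokenised test phish per address; return the token map."""
    tokens = {}
    with smtplib.SMTP(SMTP_HOST) as smtp:
        for addr in addresses:
            token = secrets.token_urlsafe(16)
            tokens[token] = addr
            msg = EmailMessage()
            msg["From"] = "it-security@example.ac.uk"        # hypothetical sender
            msg["To"] = addr
            msg["Subject"] = "Action required: mailbox quota"
            msg.set_content(
                "Your mailbox is over quota. Verify your account here:\n"
                + TRAINING_URL + "?t=" + token + "\n")
            smtp.send_message(msg)
    return tokens  # the fake login form logs which tokens show up
```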
I worked at a financial place that had people randomly roam around looking for 'open' desktops - people who'd left their computer unlocked. The random people would open MS Word and leave a big message on their screen that they'd been 'caught'. It was lighthearted but made the point, and people who routinely left their systems open were eventually dealt with more harshly.
My 'idea of the month' was to turn the auto-lock time down from 10 minutes to 1 or 2. It wouldn't eliminate the problem, but generally, people at their desk were using their computer anyway, so it wasn't a big deal, and if you were called away and forgot, the auto-lock would kick in pretty quickly.
> that is the typical tech reply that blows normal people's minds. Blame the user.
My bad... I never intended to suggest "blame the user".
> If a user cant just go to a computer and simply use it, like say a library or book, then the computer and its champions are failing. Its not the users job to provide security.
If I ran a library and I found that my visitors were just passing the same library card around to everyone in line, even strangers, instead of having each person get their own card, then I would say we needed some user education. We wouldn't need to issue special biometric IDs with a 22-step process to check out a book... but we would need to tell people "Hey, get your own card!"
Similarly, if I find that my IT system users are entering their login passwords in ANYTHING other than the login box (particularly online forms), then I have failed them -- I have failed to educate them about basic use of the systems. I should correct that, by coming to them and letting them know that I will NEVER ask for their password in ANY place other than the login form, and that they shouldn't enter it anywhere else.
> Then, you tell them to limit emails. "Oh right," says the user, "I thought one point of email was easy mass mailing, and now you want to block it?"
Actually, I wouldn't do it that way. I would set reasonable quotas (say, 100 outgoing emails before our rate limiting kicks in). After that, I would have it slow the rate of email sending, not block it. And if any user had sent enough that their mails were getting delayed, I'd also trigger a message to them inviting them to contact IT if they had special needs for mass emails. (We could change their quota, either temporarily or permanently, depending on what they were trying to accomplish.)
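As a rough sketch of that "slow, don't block" policy (the quota numbers, the notification, and the blocking sleep are all simplifications; a real mail system would queue the message rather than sleep):

```python
import time
from collections import defaultdict

HOURLY_QUOTA = 100        # free sends per hour before throttling (assumed)
DELAY_STEP = 30           # extra seconds of delay per message over quota (assumed)

sent_this_hour = defaultdict(int)   # reset by a separate hourly job (not shown)
notified = set()

def notify(user):
    """Placeholder: tell the sender their mail is being slowed, not blocked."""
    print(f"{user}: sending is being slowed; contact IT if you need bulk mail")

def schedule_send(user, deliver):
    """deliver is a zero-argument callable that hands the message to the MTA."""
    sent_this_hour[user] += 1
    over = sent_this_hour[user] - HOURLY_QUOTA
    if over <= 0:
        deliver()                               # within quota: send right away
        return
    if user not in notified:
        notified.add(user)
        notify(user)
    time.sleep(min(over * DELAY_STEP, 3600))    # growing delay, capped at an hour
    deliver()
```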
We can't eliminate all possible danger and coat the world in foam rubber because some people are accident-prone.
That's not user-driven logic, that's bureaucrat logic, and pushing for education as a way to mitigate the dangers isn't some kind of 'geek logic' that mainstream people couldn't be bothered with, it's very simple logic.
There are infinite threats. We all have our own whitelists and blacklists.
I don't want to live in a world without electricity, cars, swimming pools, stairs, and knives because some people may hurt themselves, because I may even hurt myself.
Potentially getting fucked over should be in the TOS for human life. Click here to agree or sit in the corner and make collages with non-toxic glue, magazines full of approved harmless imagery, and safety scissors.
Yes, normal people don't care about the tech details and don't want/need to be experts to handle technology. But it is the responsibility of the tech experts to make that possible.
That's the hardest nut to crack, and that's where the income comes from. Blocking some sources is simply the easy way out. "We found Google Docs brings more and more phishing; OK, let's just block it and apologize to our users." "We found Zoho brings more and more phishing too; OK, let's block it as well." We will keep finding more situations like this.
Oh, it's scary! Let's just block the internet and turn on our TV...
I guess you can - I see what you mean. But the catch is that you're not training children here. You're preaching to grown-ups who are not that^H^H^H^H trainable.
15 years ago, when computers were less ubiquitous than today, I worked with near-retirement-age users (many 55 or 60 yrs old) who had NEVER used a computer (I had to start by teaching how to move a mouse). And they were perfectly trainable as long as you didn't start with an attitude that they were dumb for not already knowing this stuff.
My father's pleasure is to spend half an hour a day on Windows Solitaire. He moves the cards faster than I can see what they are... It took him probably a week to get used to the mouse, and now he is faster than anyone I know. My guess is that if he knew a little English he would have entered "that Internet of yours" with no difficulties. He is technically literate, mind you - he built a Spectrum clone in the early 90s.
On the other hand I see plenty of 25+ people that don't care too much to change their status quo. I might as well be one of them and pretend I'm not.
> Train your users where it is and isn't safe to enter credentials.
This demonstrably doesn't work. It reduces phishing, but cannot eliminate it.
> Don't give your users credentials. Have some alternate way to authenticate them like a login token.
Better, but scrounging up a few million pounds for dongles, plus the non-stop cost and effort of replacing lost and stolen dongles, is not easy for a university, no matter how famous.
> Put rate limiting on the ability of a single account to send out emails.
Many users have legitimate reasons to send out mass emails.
> Instead of blocking the site that was collecting the credentials, a better solution would have been to remove the email from the mailboxes of all the students.
Phishing emails are often varied into multiple templates to avoid being scrubbed this way.
They also tend to trickle in at random, rather than turning up all at once.
Here's my suggestion: Rate limit the emails at a very low number, and require higher privileges for sending mass emails which must be granted on a per-mailout basis. Users that know they're going to send out a high volume would get an access token from IT (the process for doing so would have to strike a balance of convenience and security).
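Roughly, that per-mailout elevation could look like this; the default cap, the grant fields, and the approval workflow are assumptions for illustration:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

DEFAULT_CAP = 25          # recipients allowed without a grant (assumed)

@dataclass
class BulkGrant:
    remaining: int        # how many recipients the grant still covers
    expires: datetime

grants = {}               # username -> BulkGrant, created by IT per mailout

def may_send(user, recipient_count):
    """Allow small sends freely; larger ones only against a live grant."""
    if recipient_count <= DEFAULT_CAP:
        return True
    g = grants.get(user)
    if g and g.expires > datetime.utcnow() and g.remaining >= recipient_count:
        g.remaining -= recipient_count
        return True
    return False          # refused: ask IT for a per-mailout grant

# Example: IT approves a 5,000-recipient mailout, valid for two days.
grants["faculty_advisor"] = BulkGrant(remaining=5000,
                                      expires=datetime.utcnow() + timedelta(days=2))
```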
Does two-factor auth have to be that expensive to implement these days? I've experimented with building it against Google Authenticator (free, runs on any modern smart phone) and it's ridiculously easy to get up and running - it's a few lines of Python https://github.com/tadeck/onetimepass/blob/master/onetimepas...
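It really is only a few lines. Here is a from-scratch sketch of the TOTP scheme that Google Authenticator implements (RFC 6238 / RFC 4226), using only the standard library rather than the onetimepass module linked above: HMAC-SHA1 over a 30-second counter, dynamically truncated to 6 digits.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, when=None, digits=6, step=30):
    """Compute the current one-time code for a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if when is None else when) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def verify(token, secret_b32, window=1):
    """Accept codes from neighbouring 30-second steps to tolerate clock drift."""
    now = time.time()
    return any(hmac.compare_digest(token, totp(secret_b32, now + i * 30))
               for i in range(-window, window + 1))
```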
Doesn't solve the problem of users without smart phones though, which I imagine is still not ignorable at most universities.
Yes, google-authenticator prints out a list of unlock codes when you run it. You can add extra ones to the file it puts in your home directory whenever you want.
Place rate limiting on email accounts by default, then handle the small percentage of users who need a higher rate on a case-by-case basis. In my experience, most users who fall victim to these types of phishing attacks do not need to send high volumes of email.
Oxford already has rate limiting. 1000 messages per hour through their servers, it seems [0].
The next step would be to filter outbound traffic to block SMTP from compromised PCs. It seems they have an outbound firewall, but it's not obvious which ports are closed because the list of blocked ports is ... blocked[1].
The 1000 limit seems like a high number, why would a legit user need to send that much email out? I'd think a much smaller number like 5 per hour would be better.
You're a lecturer with 200 people in your class. That's 2 days. Does the lecturer have to leave their computer all the time? Is their email programme going to handle this sort of delay? What do you do if the lecturer wants to send an email about updated homework due in a few days? Some students will have a 2 day head start, is that fair? Do you have to give them extra time/marks?
You're the first year faculty advisor. There are 1,000 people in that year. That's 1 week. Same questions as above.
(And in case you think "Well let the lecturers send more", what makes you think the lecturers aren't the problem in the first place?)
Calls for Papers / Articles is the classic reason. The IEEE and the ACM might have proper mailing lists for that sort of business ... most academic fields do not.
Edit: I think they could lower the rate more and push mailing lists, but on the other hand a lot of users simply wouldn't notice that they're rate limited. Which could lead to entirely different brand of lulz.
1000 mails/h is definitely too much - it's 16 mails per minute(!). I think something around 60 - 100 mails per hour is more reasonable, to cater for cases like lots of one-liner exchanges.
Just use more than one outbound mail server. All "normal" mail goes through a server that's rate-limited heavily -- a few dozen an hour, at the most. Bulk email has to be sent through a separate outbound mail server, and there can be much more scrutiny on what goes through that -- because the legitimate "mass mailings" are going to be comparatively rare, and are probably worth having someone take a look at them, to make sure they're OK.
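In code, the split could be as simple as routing on recipient count; the hostnames and threshold below are placeholders, not anyone's real infrastructure:

```python
import smtplib

NORMAL_RELAY = "smtp-normal.example.ac.uk"  # hypothetical, heavily rate-limited
BULK_RELAY = "smtp-bulk.example.ac.uk"      # hypothetical, queued for human review
BULK_THRESHOLD = 20                         # recipients; above this, use the bulk relay

def submit(msg, recipients):
    """Route a message to the right outbound relay based on recipient count."""
    host = BULK_RELAY if len(recipients) > BULK_THRESHOLD else NORMAL_RELAY
    with smtplib.SMTP(host) as smtp:
        smtp.send_message(msg, to_addrs=recipients)
```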
> Better, but scrounging up a few million pounds for dongles, plus the non-stop cost and effort of replacing lost and stolen dongles, is not easy for a university, no matter how famous.
Additionally, what OSes will these dongles support? Would you rather "Oxford University bans Windows XP"? or "Oxford University bans iPhones"? etc.
Dongles is probably a misnomer here. While dongle probably means something you plug in to authenticate, many two factor auth 'dongles' don't plug in at all. They have a LCD screen that shows a one time use token (6+ digit number) that you enter at the same time as your password.
Verisign "dongles" can come on smartphones of all types, and on many operating systems. I even believe they have browser plugins, meaning even linux would be supported. That is, if you think two-factor is necessary for university systems.
As far as email goes, there are several things to consider. One: I would think it a rarity for a student, or even a teacher, to need to send a single email to more than a handful of /external/ email addresses at a time. Put an email firewall in place between your internal and external systems, and have IT security monitor that system for peaks in traffic, i.e. single users sending a lot of outbound mail. Obviously, there should be a spam filter going in AND out.
And yes, spam email does trickle in sometimes, and from different SMTP servers, but from what I've dealt with, there are definite patterns a person can pick up on when they're watching for it.
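A rough take on watching for those patterns, assuming the firewall or mail logs can produce hourly per-sender counts of external messages; the thresholds and smoothing factor are guesses for illustration:

```python
from collections import defaultdict

baseline = defaultdict(lambda: 10.0)  # smoothed hourly average per sender (assumed start)
ALPHA = 0.2                           # smoothing factor for the moving average
SPIKE_FACTOR = 5                      # alert if 5x the sender's usual volume
MIN_ALERT = 50                        # ignore tiny absolute volumes

def check_hourly_counts(counts):
    """counts: {sender: number of external messages sent in the last hour}"""
    alerts = []
    for sender, n in counts.items():
        if n > SPIKE_FACTOR * baseline[sender] and n > MIN_ALERT:
            alerts.append(sender)     # candidate compromised account
        baseline[sender] = (1 - ALPHA) * baseline[sender] + ALPHA * n
    return alerts
```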
> * Train your users where it is and isn't safe to enter credentials.
> * Don't give your users credentials. Have some alternate way to authenticate them like a login token.
I manage barely 100 users and I have talked to each of them personally. They're good people and can comprehend instructions. But they still fall for these every now and then. Training doesn't help. They are fantastic in their respective fields, but to them, all prompt boxes and all login screens have the exact same amount of legitimacy. Just like how every spark plug looks the same to me. Training can help some users, but most of them are going to fall for it eventually.
It is 2013, two factor authentication is here and it is open source software. You can use Google Authenticator[1] for free or you can use something like the YubiKey[2]. If the students have a smartphone then Google Authenticator is on almost all of the major platforms.
Right, but your article is talking about outlook.com.
Office 365 and stand alone (ie: private) exchange installs certainly support it.
Comparing outlook.com to 'Microsoft Exchange' (whether you're talking about Office 365 which is MS's 'cloud' solution, or private Exchange servers) is not exactly fair. One is designed as a free email hosting solution for personal use (essentially replacing Hotmail), the other is designed for business/organizational use and costs money.
> Train your users where it is and isn't safe to enter credentials.
How do you actually suggest this be done? Seriously? Classes? OK what's the estimated success rate of that class (i.e. how many people will go to the class, then ignore all the advice)? 80%? What do you do about the 20% who haven't been 'trained'? What next? Computer licences? How long will that take, and again, how many people will ignore it?