If misleading messages ("phishing") are leading their users to enter credentials onto forms which are then used to send out spam, then the solution is not to block access to one of the sites that supports forms. There are an unlimited number of sites that support forms. There are LOTS of better ways to solve this problem. Here are a few:
* Train your users where it is and isn't safe to enter credentials.
* Don't give your users credentials. Have some alternate way to authenticate them like a login token.
* Put rate limiting on the ability of a single account to send out emails.
Blocking the site for just a few hours as an emergency response to a short-term attack is a much more reasonable approach. Sometimes, to react quickly, you need to take measures that are not the best possible choice. But there were better approaches, and the security team should take measures to ensure that they can react more effectively next time. For instance, in this case, a single mass-email or email "virus" had gone out and was tempting a large number of users to give out their credentials. Instead of blocking the site that was collecting the credentials, a better solution would have been to remove the email from the mailboxes of all the students. After all, the email system is provided by the university, and this cuts the problem off at the root. They should put the necessary technology in place to support doing this the next time they have a phishing problem... perhaps they can even do this proactively: set up some honeypot accounts that receive no legitimate email, and automatically destroy any emails matching the signature of emails received by these honeypot accounts (with manual review afterward to correct for false positives).
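A minimal sketch of that honeypot idea, assuming a mail store that can hand you raw messages (the signature scheme and function names here are illustrative, not any particular product's API):

```python
import hashlib
import re
from email import message_from_bytes
from email.message import Message

def body_signature(msg: Message) -> str:
    """Crude content signature: lowercase the text body, drop digits,
    collapse whitespace, and hash the result. Multipart handling is minimal."""
    payload = msg.get_payload(decode=True) or b""
    text = payload.decode("utf-8", errors="replace").lower()
    text = re.sub(r"\d+", "", text)           # ignore varying numbers/tokens
    text = re.sub(r"\s+", " ", text).strip()  # collapse whitespace
    return hashlib.sha256(text.encode()).hexdigest()

# Signatures of messages that arrived at honeypot accounts (populated elsewhere).
honeypot_signatures: set[str] = set()

def should_quarantine(raw_message: bytes) -> bool:
    """True if the message matches a honeypot signature and should be pulled
    from user mailboxes, pending the manual review mentioned above."""
    return body_signature(message_from_bytes(raw_message)) in honeypot_signatures
```

Real phishing runs often vary their wording per message, so an exact-match signature like this would need fuzzier matching (or extraction of the embedded URLs) to be useful in practice.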
I'm sorry, but that is the typical tech reply that blows normal people's minds. Blame the user. Well, the user says, sod that, let's just block the problem and get on with what we wanted to do in the first place.
People, normal non-tech people, want to use computers as a tool, not become experts in thwarting criminals, etc. If a user can't just go to a computer and simply use it, like say a library or a book, then the computer and its champions are failing. It's not the user's job to provide security. And no, it's not like locking a door. The sheer amount of rubbish poor users have to go through to be safe on a computer is frankly a joke, and the reason so many non-geeks love Apple. Yes, geeks know Apple is as insecure as anyone else, but users believe they are simple and safe.
(At this point, by all means picture a toddler going mental in a shop)
I've been in this business for 30 years, and "train the users" is for me a 30-year mantra that no one outside of geekdom wants to hear. It was my job to enable them to do their job more efficiently, not to expect them to become some sort of security expert.
This Uni is doing the simple, easy thing to let its users function safely. If the IT world doesn't like it, then 1: tough, 2: damn well fix it, and 3: stop blaming users.
Then, you tell them to limit emails. "Oh right," says the user, "I thought one point of email was easy mass mailing, and now you want to block it?"
Really think about the user. It's they who make computers and the internet worth bothering with.
I'd like to use my car like a tool. Why do manufacturers make them so difficult to operate safely? I shouldn't require any additional training to operate it; I should be able to just hop in at location A and hop out at location B.
Regardless of what some folks in the "User Friendly" movement would like to think, most tools require basic instruction in order to be safely used. We can't code away all individual responsibility.
Spotting a phishing form only seems like "basic instruction" to you because you're highly computer-literate. It's not; it involves understanding at least some of DNS and the difference between hosts, domains and TLDs, URLs, HTTPS, not to mention certificates and their validity.
In your analogy, it's like saying "people shouldn't be allowed to use cars unless they can verify the hydraulic pressure in the master brake cylinder".
Which is wrong: manufacturers should (and did) install brakes warning lights. And we need to come up with better warnings for users. Blaming them for these sorts of problems is unacceptable.
1) Did you click a link from an email?
2) Does the page it redirects you to ask for your login info?
If either is true, you may have received a phishing email. Then check:
1) You expected this email because you were notified about it from another source, e.g. a website or support staff.
2) If you log in to the website not via the suspicious link, the linked web page does not ask for your login.
If you answered yes to both, you probably don't have a phishing email.
"Login to the website not via the suspicious link" requires understanding what URLs are, how to isolate which part is "the website", how to edit them and how to enter them. The amount of people Googling for "log into Facebook" proves none of this is a given.
"You expected this email" is also not a hard test to pass in either academia or corporate settings, where users are generally besieged by unsolicted instructions to "Go here, do this, hurry up about it".
Not huge blame, but browser makers are making it harder to understand what's going on and how to use the web - obfuscating the URL, taking off parts of it, sometimes hiding the entire URL bar altogether.
Similarly, 'cookies' are 'scary' - there's no visual indication in a browser of what's going on with cookies, what they are, what they hold - you have to dig deep in 'preferences' then 'advanced' or 'security'. Instead of easier to use tools, we get legislation around cookies. WTF?
Users don't think like that. They generally don't know what redirect means, let alone recognise when it happens. I'll add that more and more attacks seem to come from trusted sources recently. This only goes to further the issue.
#2 - Many people don't know what a redirect is. Many of them don't really know the difference between email and www. Some of them won't know there is a difference; it's all just clicky things.
Here are some regular people's experiences of scams.
I love that you put "user friendly" in quotes. As if these products were for somebody other than the users.
Computers have taken over the world precisely because we have worked very hard to make them approachable by mere mortals. The only reason Google matters is that they figured out how to make the search engine much more user friendly. Apple is on top of the world because they made a more user-friendly music player, phone, and portable computer. And we HNers are all getting paid stupid amounts of money because the wide adoption of computing has created high demand.
Nobody is talking about coding away individual responsibility. They're talking about removing another bit of pointless friction from the system, so that the tools are more effective for the tool-users.
And I'll add that cars are heading in exactly the direction that you lampoon. If the car industry had thought like you, they'd still be back on hand-cranking to start the car, needing to maintain the battery's water level, and having to wear goggles. And soon Google will have solved the driving problem, mainly thanks to the way consumer adoption has driven down the costs of computing.
Great example of why Google's driver-less cars will be so successful, overall safer and why we should engineer our systems to remove as much complexity as possible for the users.
I agree with your POV and the decisions we have to make as a result. That said, users should be expected to learn computers if they want to use them. If not, they shouldn't be allowed to use them. Same policy I'd have with a buzz-saw in a shop.
So if you fall for email phishing attacks despite training, then you shouldn't be trusted with mass email rights. Likewise, the admins have an obligation to control those resources, and to train users. (If that's too hard to expect, then we need to find out why.)
Point is, users should get the blame for their fuckups-- when they fuck up. It's not an all-or-nothing thing.
I disagree. Unless the user is intentionally TRYING to break the system, it is probably not the user's fault. It is IT's fault for failing to make it easy for the user to understand.
For instance, how about if the login page says in big bold letters: "This is the ONLY page you should ever enter your password on." With this tiny change, moderately competent users are much better protected from phishing attempts that use something like Google Docs forms... although that still hasn't protected them from something like a hand-crafted phishing site. Other techniques can help with this: for example, you could offer a bounty: pay real dollars for the first person to report any phishing site resembling your login page.
Some steps are up to the user, but instead of BLAMING the user, make it EASY for the user.
> Really think about the user. It's they who make computers and the internet worth bothering with.
Were the IT dept. folks thinking about the user, they would never have blocked Google Docs in the first place. People want to use tool X, so the job of university IT is to ensure they are able to use tool X. They did exactly the opposite.
Also, solutions proposed by GP are reasonable ways to reduce / mitigate the risk of phishing without inconveniencing users too much.
Most bureaucratic IT departments (i.e. big corps, govs, schools) tend to be more about the reduction of work for the IT department and less about the best solutions for the users.
Since IT departments are generally regarded as cost centres & therefore they are usually either understaffed or expected to keep costs to the absolute minimum by upper management this is hardly surprising.
In this particular case, the department is in a double bind: the success of phishing emails threatens the ability of the university to send email to many other major hosts on the net. If you sat the users down and asked them which they need more, a reliable email to people outside the university or access to Google Docs, then the decision isn't so clear cut all of a sudden is it?
> If you sat the users down and asked them which they need more, a reliable email to people outside the university or access to Google Docs, then the decision isn't so clear cut all of a sudden is it?
But it's a false dichotomy: there are solutions that preserve both.
How is dumbing down the world better than teaching best practices to people who will shape the future?
20 years ago, wouldn't you have expected an office secretary not to send postal mail to everyone in the city?
Or give the cabinet keys with the petty cash to anyone who dressed like his/her boss?
> "train the users" is for me a 30 year mantra that no one out side of geekdom wants to hear
Perhaps a better approach to "training the users" might be for the University to actively attempt to phish its own users on a regular basis.
Those who fall for the phishing could be contacted directly, or have email access limited for some period of time (for example, a reduced sending rate limit).
Making self-phishing a regular occurrence (say, weekly) would train users to recognise and ignore it.
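A rough sketch of how such a self-phishing run might be sent, assuming an internal SMTP relay and a hypothetical training landing page that records each per-user token (both are assumptions, not anything the university actually runs):

```python
import smtplib
import uuid
from email.message import EmailMessage

# Hypothetical internal landing page that logs the token and then shows a
# "you've just been phished (by us)" training page.
LANDING = "https://it-training.example.ac.uk/gotcha"

def send_test_phish(smtp_host: str, sender: str, recipients: list) -> dict:
    """Send a simulated phishing email to each recipient with a unique
    tracking token; returns {token: recipient} for later follow-up."""
    tokens = {}
    with smtplib.SMTP(smtp_host) as smtp:
        for rcpt in recipients:
            token = uuid.uuid4().hex
            tokens[token] = rcpt
            msg = EmailMessage()
            msg["From"] = sender
            msg["To"] = rcpt
            msg["Subject"] = "Action required: verify your account"
            msg.set_content(
                "Your mailbox is over quota. Verify your account here:\n"
                f"{LANDING}?t={token}\n"
            )
            smtp.send_message(msg)
    return tokens
```

Anyone whose token shows up in the landing page's logs gets the follow-up described above (direct contact, or a temporarily reduced sending limit).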
I worked at a financial place that had people randomly roam around looking for 'open' desktops - people who'd left their computer unlocked. The random people would open MS Word and leave a big message on their screen that they'd been 'caught'. It was lighthearted but made the point, and people who routinely left their systems open were eventually dealt with more harshly.
My 'idea of the month' was to turn the auto-lock time down from 10 minutes to 1 or 2. It wouldn't eliminate the problem, but generally, people at their desk were using their computer anyway, so it wasn't a big deal, and if you were called away and forgot, the auto-lock would kick in pretty quickly.
> that is the typical tech reply that blows normal people's minds. Blame the user.
My bad... I never intended to suggest "blame the user".
> If a user can't just go to a computer and simply use it, like say a library or a book, then the computer and its champions are failing. It's not the user's job to provide security.
If I ran a library and I found that my visitors were just passing the same library card around to everyone in line, even strangers, instead of having each person get their own card, then I would say we needed some user education. We wouldn't need to issue special biometric IDs with a 22-step process to check out a book... but we would need to tell people "Hey, get your own card!"
Similarly, if I find that my IT system users are entering their login passwords in ANYTHING other than the login box (particularly online forms), then I have failed them -- I have failed to educate them about basic use of the systems. I should correct that, by coming to them and letting them know that I will NEVER ask for their password in ANY place other than the login form, and that they shouldn't enter it anywhere else.
> Then, you tell them to limit emails. "Oh right," says the user, "I thought one point of email was easy mass mailing, and now you want to block it?"
Actually, I wouldn't do it that way. I would set reasonable quotas (say, 100 outgoing emails before our rate limiting kicks in). After that, I would have it slow the rate of email sending, not block it. And if any user had sent enough that their mails were getting delayed, I'd also trigger a message to them inviting them to contact IT if they had special needs for mass emails. (We could change their quota, either temporarily or permanently, depending on what they were trying to accomplish.)
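A toy sketch of that slow-rather-than-block behaviour (the quota numbers, the in-memory state and the notify helper are all illustrative; a real deployment would hook into the MTA's queue and a shared store rather than a Python dict):

```python
import time
from collections import defaultdict, deque

FREE_QUOTA = 100        # messages per hour before throttling kicks in
WINDOW = 3600           # sliding window, in seconds
DELAY_PER_EXCESS = 30   # extra seconds of delay for each message over quota

_sent = defaultdict(deque)   # sender -> timestamps of recent sends
_overrides = {}              # sender -> raised quota granted by IT

def throttle_delay(sender: str) -> float:
    """Seconds to hold this sender's next message; 0 means send immediately.
    Messages are delayed, never dropped."""
    quota = _overrides.get(sender, FREE_QUOTA)
    now = time.time()
    recent = _sent[sender]
    while recent and now - recent[0] > WINDOW:   # forget sends outside the window
        recent.popleft()
    recent.append(now)
    excess = len(recent) - quota
    if excess == 1:
        notify_it_contact(sender)   # hypothetical: invite them to request a higher quota
    return max(0, excess) * DELAY_PER_EXCESS

def notify_it_contact(sender: str) -> None:
    print(f"{sender} hit the soft quota; inviting them to contact IT")
```

The per-sender override table is where the "change their quota, temporarily or permanently" part would live.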
We can't eliminate all possible danger and coat the world in foam rubber because some people are accident-prone.
That's not user-driven logic, that's bureaucrat logic, and pushing for education as a way to mitigate the dangers isn't some kind of 'geek logic' that mainstream people couldn't be bothered with, it's very simple logic.
There are infinite threats. We all have our own whitelists and blacklists.
I don't want to live in a world without electricity, cars, swimming pools, stairs, and knives because some people may hurt themselves, because I may even hurt myself.
Potentially getting fucked over should be in the TOS for human life. Click here to agree or sit in the corner and make collages with non-toxic glue, magazines full of approved harmless imagery, and safety scissors.
Yes, normal people don't care about the tech details and don't want/need to be experts to handle technology. But it is the responsibility of the tech experts to make that possible.
That's the hardest nut to crack, and that's where the income comes from. Blocking some sources is simply easy. "We found Google Docs brings more and more phishing; OK, let's just block it and apologize to our users." "We found Zoho brings more and more phishing; OK, let's block it too." Well, we will keep finding more situations like this.
Oh, it's scary! Let's just block the internet and turn on our TV...
I guess you can - I see what you mean. But when looking for an answer I found that you're not training children here. You're preaching to grown-ups who are not that^H^H^H^H trainable.
15 years ago, when computers were less ubiquitous than today, I worked with near-retirement-age users (many 55 or 60 yrs old) who had NEVER used a computer (I had to start by teaching how to move a mouse). And they were perfectly trainable as long as you didn't start with an attitude that they were dumb for not already knowing this stuff.
My father's pleasure is to spend half an hour a day on Windows Solitaire. He moves the cards faster than I can see what they are... It took him probably a week to get used to the mouse, and now he is faster than anyone I know. My guess is that if he knew a little bit of English he would have entered "that Internet of yours" with no difficulties. He is technically literate, mind you - he built a Spectrum clone in the early 90s.
On the other hand I see plenty of 25+ people that don't care too much to change their status quo. I might as well be one of them and pretend I'm not.
> Train your users where it is and isn't safe to enter credentials.
This demonstrably doesn't work. It reduces phishing but cannot eliminate it.
> Don't give your users credentials. Have some alternate way to authenticate them like a login token.
Better, but scrounging up a few million pounds for dongles, plus the non-stop cost and effort of replacing lost and stolen dongles, is not easy for a university, no matter how famous.
> Put rate limiting on the ability of a single account to send out emails.
Many users have legitimate reasons to send out mass emails.
> Instead of blocking the site that was collecting the credentials, a better solution would have been to remove the email from the mailboxes of all the students.
Phishing emails are often varied into multiple templates to avoid being scrubbed this way.
They also tend to trickle in at random, rather than turning up all at once.
Here's my suggestion: Rate limit the emails at a very low number, and require higher privileges for sending mass emails which must be granted on a per-mailout basis. Users that know they're going to send out a high volume would get an access token from IT (the process for doing so would have to strike a balance of convenience and security).
Does two-factor auth have to be that expensive to implement these days? I've experimented with building it against Google Authenticator (free, runs on any modern smart phone) and it's ridiculously easy to get up and running - it's a few lines of Python https://github.com/tadeck/onetimepass/blob/master/onetimepas...
Doesn't solve the problem of users without smart phones though, which I imagine is still not ignorable at most universities.
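For reference, here's roughly what that "few lines of Python" looks like: a minimal RFC 6238 TOTP check using only the standard library, in the spirit of the onetimepass library linked above (30-second steps and base32 secrets, as Google Authenticator uses, are assumed; enrollment, i.e. generating and displaying the shared secret, is omitted):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, counter: int, digits: int = 6) -> str:
    """HOTP/TOTP value for one time-step counter (RFC 4226 / RFC 6238)."""
    key = base64.b32decode(secret_b32.upper())
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def verify(secret_b32: str, user_code: str, step: int = 30, window: int = 1) -> bool:
    """Accept the current code plus one step either side, to tolerate clock drift."""
    now = int(time.time()) // step
    return any(
        hmac.compare_digest(totp(secret_b32, c), user_code)
        for c in range(now - window, now + window + 1)
    )
```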
Yes, google-authenticator prints out a list of unlock codes when you run it. You can add extra ones to the file it puts in your home directory whenever you want.
Place rate limiting on email accounts by default, and then, for the small percentage of users that need a higher rate, handle it on a case-by-case basis. In my experience most users that fall victim to these types of phishing attacks do not need to send high volumes of emails.
Oxford already has rate limiting. 1000 messages per hour through their servers, it seems [0].
The next step would be to filter outbound traffic to block SMTP from compromised PCs. It seems they have an outbound firewall, but it's not obvious which ports are closed because the list of blocked ports is ... blocked[1].
The 1000 limit seems like a high number, why would a legit user need to send that much email out? I'd think a much smaller number like 5 per hour would be better.
You're a lecturer with 200 people in your class. That's 2 days. Does the lecturer have to leave their computer all the time? Is their email programme going to handle this sort of delay? What do you do if the lecturer wants to send an email about updated homework due in a few days? Some students will have a 2 day head start, is that fair? Do you have to give them extra time/marks?
You're the first year faculty advisor. There are 1,000 people in that year. That's 1 week. Same questions as above.
(And in case you think "Well let the lecturers send more", what makes you think the lecturers aren't the problem in the first place?)
Calls for Papers / Articles is the classic reason. The IEEE and the ACM might have proper mailing lists for that sort of business ... most academic fields do not.
Edit: I think they could lower the rate more and push mailing lists, but on the other hand a lot of users simply wouldn't notice that they're rate limited. Which could lead to entirely different brand of lulz.
1000 mails/h is definitely too much - it's 16 mails per minute(!). I think something around 60 - 100 mails per hour is more reasonable, to cater for cases like lots of one-liner exchanges.
Just use more than one outbound mail server. All "normal" mail goes through a server that's rate-limited heavily -- a few dozen an hour, at the most. Bulk email has to be sent through a separate outbound mail server, and there can be much more scrutiny on what goes through that -- because the legitimate "mass mailings" are going to be comparatively rare, and are probably worth having someone take a look at them, to make sure they're OK.
> Better, but scrounging up a few million pounds for dongles, plus the non-stop cost and effort of replacing lost and stolen dongles, is not easy for a university, no matter how famous.
Additionally, what OSes will these dongles support? Would you rather "Oxford University bans Windows XP"? or "Oxford University bans iPhones"? etc.
Dongles is probably a misnomer here. While dongle probably means something you plug in to authenticate, many two factor auth 'dongles' don't plug in at all. They have a LCD screen that shows a one time use token (6+ digit number) that you enter at the same time as your password.
Verisign "dongles" can come on smartphones of all types, and on many operating systems. I even believe they have browser plugins, meaning even linux would be supported. That is, if you think two-factor is necessary for university systems.
As far as email goes, there are several things to consider. One: I would think it a rarity for a student, or even a teacher, to need to send a single email to more than a handful of /external/ email addresses at a time. Put an email firewall in place between your internal and external systems, and have IT security monitor that system for peaks in traffic, i.e. single users sending a lot of outbound mail. Obviously, there should be a spam filter going in AND out.
And yes, spam email does trickle in sometimes, and from different SMTP servers, but from what I've dealt with, there are definite patterns that a person can pick up on when they're watching for it.
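A toy version of that kind of monitoring, assuming a simplified relay log with one line per accepted message containing a `from=<...>` field (the log format, path and threshold are made up for illustration):

```python
import re
from collections import Counter

# e.g. "2013-02-18T10:31:02 from=<someone@ox.example> to=<x@hotmail.com>"
LINE = re.compile(r"from=<(?P<sender>[^>]+)>")

def flag_heavy_senders(log_lines, threshold=100):
    """Count outbound messages per sender and return those at or over threshold."""
    counts = Counter()
    for line in log_lines:
        m = LINE.search(line)
        if m:
            counts[m.group("sender")] += 1
    return {sender: n for sender, n in counts.items() if n >= threshold}

# Usage: run hourly over the last hour's relay log and alert on the result.
# with open("/var/log/mail-relay.log") as f:
#     print(flag_heavy_senders(f, threshold=100))
```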
> * Train your users where it is and isn't safe to enter credentials.
> * Don't give your users credentials. Have some alternate way to authenticate them like a login token.
I manage barely 100 users and I have talked to each of them personally. They're good people and can comprehend instructions. But they still fall for these every now and then. Training doesn't help. They are fantastic in their respective fields but to them, all prompt boxes and all login screens have the same exact amount of legitimacy. Just like how every spark plug looks the same to me. Training can help some users but most of them are going to fall for it eventually.
It is 2013, two factor authentication is here and it is open source software. You can use Google Authenticator[1] for free or you can use something like the YubiKey[2]. If the students have a smartphone then Google Authenticator is on almost all of the major platforms.
Right, but your article is talking about outlook.com.
Office 365 and stand alone (ie: private) exchange installs certainly support it.
Comparing outlook.com to 'Microsoft Exchange' (whether you're talking about Office 365 which is MS's 'cloud' solution, or private Exchange servers) is not exactly fair. One is designed as a free email hosting solution for personal use (essentially replacing Hotmail), the other is designed for business/organizational use and costs money.
> Train your users where it is and isn't safe to enter credentials.
How do you actually suggest this be done? Seriously? Classes? OK what's the estimated success rate of that class (i.e. how many people will go to the class, then ignore all the advice)? 80%? What do you do about the 20% who haven't been 'trained'? What next? Computer licences? How long will that take, and again, how many people will ignore it?
It's the perfect example of why security teams are often considered to be the least friendly, least approachable part of an already unapproachable department (IT).
Their reasoning seems to be "Google Docs causes us (the security team) hassle, we don't use Google Docs, so we'll shut it down".
They might as well have shut down the whole of the Internet, for all their nonsensical reasoning, except they'd have been affected themselves then..
No, their reasoning is that the continuous phishing attacks caused unacceptable trouble with their email system (e.g., Hotmail dropping all emails coming from Oxford). Due to extensive international collaborations, keeping a university's email system running is probably one of the most important tasks of the IT team. Google Docs is nice and useful, but nowhere near as important. Given that they, practically speaking, had no alternative way of dealing with the phishing attacks effectively, they made the right choice in temporarily suspending Google Docs access.
"no alternative way of dealing with the phishing attacks effectively"
How about not using passwords? All students, staff, and faculty should have ID cards; start issuing smartcards, and start using cryptographic techniques to authenticate users. Also, digitally sign all official mail, and instruct the users to check those signatures.
These are not insurmountable problems. The real issue is that the IT team is not willing to push for a real solution, and instead went for a bandaid on a broken leg.
Your solutions do not take into account the main problem with the security department: budget. There is a huge budgetary crisis in ALL european universities at this moment, including Oxford and Cambridge.
I bet if they ask for the resources to implement all those solutions, they will be told: find something at zero cost, I repeat zero-cost. Roger that?
Not that I agree blocking google docs is reasonable, just pointing out the problems with your suggestions.
> Your solutions do not take into account the main problem with the security department: budget. There is a huge budgetary crisis in ALL european universities at this moment, including Oxford and Cambridge.
> How about not using passwords? All students, staff, and faculty should have ID cards; start issuing smartcards, and start using cryptographic techniques to authenticate users.
Costs. At my university (though of course slightly smaller than Oxford) that would never work.
> Also, digitally sign all official mail, and instruct the users to check those signatures.
Have you met users? That's as good as saying they shouldn't be idiots and never enter their credentials in a site linked in a mail. If that would work all anti virus vendors could close shop.
I also wonder why so many phishing emails are getting through the university spam filters - a slightly better solution might have been to remove links in external emails that point to docs.google.com.
But anyway, I don't want to start slagging off a particular team that I've never met - maybe they wanted to do all sorts of other, smarter, things and weren't allowed, and maybe they'll be allowed to do them now..
I can believe it, I just don't know why it's not been customised to react to links to docs.google.com if it's such a high volume issue.
It's not a trivial problem by any means, but from the network security team's blog it doesn't seem like they've taken many of the steps that I'd expect prior to cutting off a very high traffic website.
There's the nice clever intelligent solution which could be developed over a few weeks, or there's the fact that the phishers have decided -- for whatever reason -- to go apeshit today.
True, but in this case it seems like it's not a particularly new problem, just something that they've finally reacted to?
They actually mention sinkholing spreadsheets.google.com in this post from August 2011 [1], where they say "There are also some forms which are more difficult to block ( I don’t think we’d be too popular if we sink-holed spreadsheets.google.com for example)".
Their email client can do it automatically. Basically, you just need to tell them, "Official emails will always have a big, green border around them."
Also, the number of people who fall for 419 scams is fairly low, just barely above the threshold of profitability. The reason people are shocked when they hear that anyone falls for such scams is that hardly anyone does. There is a hypothesis that 419 scams are designed to be obvious, because it helps in filtering potential victims: anyone who would be naive enough to reply is an easy target.
I think a broader problem is that most people are not just unaware of cryptography, but they use an email client that has no support for checking digital signatures. Webmail is by far the most popular email client type, but many popular webmail systems have no support for digital signatures at all, not even checking them for validity. It would be a lot easier to tell people to check for a digital signature if that meant looking for a border color, or a big gold star, or if hovering over/clicking on a link in an unsigned message displayed an annoying warning but no warnings were displayed in signed messages; sufficiently annoying warnings do help in making cryptosystems more effective in practice:
They could deal with the email issue in a number of ways, all of them causing the security, systems, and network teams hassle, but not end users.
For example, blocking all outgoing SMTP traffic except via approved internal relay servers would make tracking these millions of unexpected outgoing emails much easier. Most organisations already put these kinds of restrictions in place, it seems Oxford don't.
As far as I can see, their temporary blocking of Google Docs access did nothing but annoy users, cause them to lose face amongst users, and in the long term make users less likely to cooperate with the security team.
Continuous phishing attacks via Google Docs? No, not quite. Some collaborators and I studied this a couple of years ago; it's a minuscule part of the phishing problem.
Just because you saw a small amount from Google Docs doesn't mean that Oxford isn't seeing a large amount, or large enough to concern them. If you're a researcher, you should know that you can't extrapolate your dataset to everyone.
My own employer has farmed out the admin of student email accounts to Google (wisely IMO). The system had some initial glitches I admit. On a few occasions (if I'm not mistaken) Google banned email originating from their own system. In other news, many faculty use Google Docs for communication with students, and a lengthy disruption would be a big hassle, at least for me.
(e.g., Hotmail dropping all emails coming from Oxford)
This can be a problem for universities in specific ways. Students get emailed some change to course work, all students using hotmail don't get the email, students then have a case to appeal the (possibly) worse mark they received.
Sure. Which is why after so many people complained it is already back up.
Did you miss the recent articles about how spreadsheets are essential to many people? I know many SMEs and individuals who are doing nearly everything besides email in Google Docs. And what do they use email for? To send PDFs of Google Docs to those that don't have GMail accounts.
In the recent "tools of the trade" thread for HN readers, one of the webapps that came up most often was Google Docs. It is also, quite arguably, given all its functionality and the fact that it's made by a team of, what, 600 Googlers (!), the most advanced webapp ever.
I think you really don't realize how important Google Docs has become. And it is growing by the day.
There are people who create a GMail account only to get Google Docs after they've seen a demo of it.
Misleading headline. They blocked it for a few hours until n people complained. There was more legitimate use than expected, so they unblocked it again.
The real question is: as IT professionals, why would there be more use than expected? Wouldn't you expect the premier free cloud competitor to Office to be heavily used?
It's as misguided as most of the IT departments I've had to deal with blocking browsers other than IE because they are "insecure". No, the other browsers are not insecure; those departments just haven't bothered getting up to speed on the security profile of those browsers, and confuse getting regular security bulletins about IE with being "secure".
Yes, they should have known better. Google Docs is used a lot at universities because of its collaborative abilities. If you need to work with several people putting a report together, Google Docs is a great way to get started. We often eventually take it out of Docs into a desktop program to finish it off, but Google Docs is one of the best ways to collaborate.
How the IT department didn't know what its students, faculty and staff were doing is kind of hard to believe. For students and teachers in particular, Google Docs is a big deal. It's not just because it's a cloud version of Office, but rather that it has things that Office can't do that are especially important in a university setting.
There are multiple ways to know what your users are doing, and there is more than just monitoring traffic. They can do surveys and qualitative studies, and they would have then known how widely used Google Docs was.
A good thing would be for them to adopt the policy that "even though I don't use a particular program / website, it doesn't mean it is not used, or not important". I've seen some ridiculous examples of hubris of university sysadmins causing pain for everyone else.
Funny, each security bulletin implies something was wrong with security before the bulletin. So regular bulletins imply less security, not more: the exact opposite of their thought process.
I hope that nobody confuses doing something like that with a legitimate way of running a network. Deciding that you'll make an exception for popular sites makes things worse instead of better.
I currently work for the web communications part of a small-to-medium size university. We have around 2000 employees and 8000 students. We embrace all Google products on campus. We actually use Gmail for our primary email system. We use Google Forms to collect data throughout our website (not perfect by a long shot, but it makes data collection approachable and accessible to end users). We would never shut down Google Forms. We simply couldn't.

We regulate mass email by only allowing a select few individuals to email all users. We have literally a dozen or so users on campus that can send an email to all users, and most are in the communications department or IT. All this talk of authentication systems, and teaching users not to get caught by phishing, sounds like "ideal world" solutions. Our solution is simple. If you want to send out an email to everyone, send it to a central authority that can approve the sending. It is easier to make sure a dozen people have the skill to send a mass email appropriately and avoid phishing attempts than it is ten thousand. It also has the added advantage of allowing us to consolidate less urgent emails into a single newsletter once a week, keeping faculty/staff and student email boxes free of non-urgent notifications.

I'm not pretending we have a perfect solution, but it seems like we'd never get approval to stop using Google Docs in a situation like this. I'm actually rather impressed by Oxford's ability to react and then write a long and thorough explanation of their actions.
> Our solution is simple. If you want to send out an email to everyone, send it to a central authority that can approve the sending.
It sounds like all you are doing is regulating access to some sort of all@university mailing list. How does this solve the much bigger problem of spammers using compromised accounts to spam Gmail/Hotmail addresses, which then end up getting the university blocked? And even ignoring that how does it prevent people from just looping through a list of your university's email addresses and sending them one at a time?
You are mostly correct. We are primarily regulating access to an all@university mailing list, but we also have restrictions that prevent mass emails being sent via Gmail (though I'm not the authority on this). You are correct that nothing prevents a compromised account, that I know of, from sending out emails one at a time to a list of users, though we do have control over all email accounts and can disable compromised ones. If the traffic is internal we have other ways of preventing it. I'm not saying our solution is an absolute substitute for all combinations of possibilities, just that if we were to be blocked we'd have to deal with it in some other way than by disabling Google Forms. We just couldn't get away with it, and according to some of the comments, Oxford couldn't get away with it for very long either.
Summary of the blog posting:
Google Docs forms are being used in phishing attacks against stupid users. We closed down Google Docs. It didn't work and we had to open it up again after 2.5 hours.
Unfortunately, there are no easy solutions to so-called phishing attacks other than educating users. I would recommend that the IT dept. dedicate its considerable resources and creativity to that end, and try to minimize use of the shotgun approach in the future!
The only effective solution is to educate users, but that in itself is a difficult task.
Phishing attacks rely on users being gullible / distracted / ignorant. Telling users _not_ to be any of these usually results in angry answers such as "Are you implying I am stupid !?", and the important part of the dialogue where you explain things to be wary of is completely ignored.
Another way to communicate these things it to _phish your own users_. Email them a fishy message ultimately asking them their password for instance, the same way an attacker would. Of course, some phishing emails / sites look incredibly legit but in my experience most have noticeable deficiencies. If your users can spot at least those, then they can protect against a good number of attacks.
Once the victim falls for the trap, redirect them to a page explaining how they were tricked, and showing what they need to pay attention to.
You even get their passwords, so that you can do some analysis and see how many will change it following the 'incident'.
It's bad if users are trained to only recognize _your_ phishing attempts :-)
I'm not sure I understand which users jacques_chester is talking about.
There are users that can recognize phishing, and they are entitled not to care about your teaching. And then there are those that can't recognize phishing - or perhaps don't even know about it - but I'm pretty sure any user would start caring when they find out someone else can gain access to their email/bank/facebook/whatever online service they use if they aren't careful.
To avoid training users into thinking it's another drill, perhaps it's a good idea to 'attack' them at random intervals, and wait a few months before repeating (thus giving you enough time to prepare the new attack; giving the users enough time to forget about the threat, and to account for new arrivals).
I'd rather be embarrassed by the local BOFH than be a real victim.
This is a British university, not Goldman Sachs. The action was a dramatic, low-cost effort to get users' attention, educating users, if you like (from OP):
> While this wouldn’t be effective for users on other networks, in the middle of the working day a substantial proportion of users would be on our network and actively reading email. A temporary block would get users’ attention and, we hoped, serve to moderate the “chain reaction”.
I took that to mean, we were planning a longer term outage, but when it inconvenienced Someone Important, we were forced to reinstate the service, and to cover our rears, we're now claiming it was planned as a 2-hour outage from the very start.
I feel for them. I attend an IT-focused university that has both hardcore techies (computer science and such) but also a lot of non-techies (communication, UI design, etc.)
We frequently (at least once per month) get a phishing e-mail asking us to reply or click a link and provide our credentials. For anyone who has attended the university more than 6 months, there will have been at least 3 e-mails from the IT-department telling people to not ever, in any way, give out credentials. Yet, for every phishing mail we get at least 3-4 accounts get compromised (out of ~1500), and more would get compromised if the IT department weren't quick to block traffic to the offending URLs. And again, this is in a crowd that should be somewhat unfavourable to scammers (as most of us know and can recognise such attempts).
You can try to educate your users, and you should, but just know that it only minimizes the risk, it will never, ever nullify it and if they can send 1 million e-mails from just 1 account, then it is practically a dead-end in terms of stopping the scammers. I can completely understand why they are blocking Google Docs, it's a matter of settling for the "lesser evil" solution.
I've had 4 emails in the past month providing information about the phishing emails from my department, JCR and IT services, and despite that a number of accounts still got compromised.
Couldn't agree more about education never actually fixing the problem.
I wonder how many of the keyboard warriors in this thread have any experience of running very large and incredibly diverse networks like Oxford University's.
The guys handling security for Oxford are highly experienced and capable. Oxford's network is far more complicated than a typical University.
Yet they apparently have not implemented 2-factor authentication or rate limiting for students' email accounts...
As others have pointed out, there are a few very simple ways to deal with this sort of thing. Rate limiting alone would likely take care of the problem. This is probably a simple config update on the SMTP server.
Google was picked on because it was an easy target. I'm sure there are plenty of other phishing sites out there that don't use Google, yet those weren't blocked. This is a seriously boneheaded way to go about things, unless you are just going for media attention.
They run their mail on Microsoft Exchange, I think, and I don't think there is an easy way to use 2-factor authentication with Exchange (as opposed to Gmail).
There are numerous solutions for two- and multi-factor authentication with Exchange; PIV/CAC is one example. There are also other soft solutions for 2FA.
> have any experience of running very large and incredibly diverse networks like Oxford University's
The fact that they do something doesn't mean that they do it well. As others have mentioned, email filtering in Exchange (bizarre email platform for a university, but ignoring that) seems like a rudimentary starting point here.
"We have to ask why Google, with the far greater resources available to them, cannot respond better. Indeed much, if not all, of the process could be entirely automated."
The problem lies with the people on the Internet, though. I doubt the whole thing could be automated because of the simple fact that there are people out there who, just to troll, would (and probably already do) zip through plenty of legitimate public Google Docs and click the "report abuse" link at the bottom of each page.
The result is most likely that an overwhelming number of reported "abuse" pages are legitimate, which is why actual malware docs don't get dealt with in a timely manner. It's like when people prank call 911, which could lead to actual emergencies not being responded to immediately.
I think a reasonable solution here would be to only automate abuse reports made from registered accounts, and only automate a certain number per account per unit of time. It wouldn't be insurmountable to abuse, but you push the barrier for abuse beyond the average troll's willingness.
Not a perfect solution, but it would help cover a sizeable volume of this kind of phishing attack.
So if the real problem stems from the Oxford mail accounts being hacked and then used to propagate the phishing attacks, why not concentrate on that?
You should use 2-step authentication for the email accounts, so that randoms in some other part of the world can't just hack in to an email account and use it.
I was at SBS, and we were on Microsoft Exchange servers for email, I think. Unfortunately, afaik Microsoft doesn't offer 2-step authentication. Instead of blocking Google Docs, you should be moving all email systems to Google Apps so you can use their better security. We just did it at my company for a few thousand users and several domains - I think you could do it too.
Maybe I'm wrong, but why not set a limit of only X mails that can be sent per minute via a user account?
Find out how many emails people usually send per minute/hour and just DENY relaying anything over that limit. That way it'll be less profitable for spammers to acquire user account details if they can only send X mails every minute.
If a fixed login/password pair is enough for someone from external network to send mass e-mail via your network, you have a problem.
Obviously I know little about their network so I'm probably already sounding arrogant but there are some solutions that (generally) have better inconvenience/security ratio than just plain login&pass. Especially if you account for the inconvenience of getting the whole site blacklisted. My site uses one-time, limited-time passwords to authorize external connections but the users are tech savvy so I'm not sure if it works in general settings.
Sometimes I wonder what the world would be like if it were illegal for institutions to block sites. It shouldn't be too hard to imagine. No one can block postal mail or telephone calls (except as a user). And, the FCC has banned wireless jamming. In spite of those guarantees of service we manage to survive and, on the whole, protect ourselves from fraudsters.
I think it is too late now to guarantee service through legislation, but the upsides do outweigh the downsides.
User education is not the way to solve these sorts of problems. The proper way to solve the problem is through automation -- use of a "forcing function." An example of a forcing function is not allowing an automobile driver to shift into reverse until they have their foot on the brake pedal. This is a far superior solution to educating drivers not to shift into reverse until they have their foot on the brake pedal.
Google needs to implement a forcing function with Google docs so that their software is not misused on the Internet. No amount of user education will fix the problem -- only some sort of forcing function will fix it.
I haven't thought at all about how Google would fix their problem. Still, they've introduced a component into the Internet ecosystem that has been found to be abusable, and they are responsible for installing the forcing functions to prevent that abuse. To depend upon user education is simply irresponsible.
The idea of forcing functions is well known in organizational/system theory.
Another way to think about this is the recent notices that Java (and at other times Adobe Flash) has recently introduced a security flaw where people using their computer can have it hacked into (Apple suggests removing Java unless you really need it).
Just as we would expect Java/Oracle and Adobe/Flash to fix their security flaws so should Google fix theirs.
This is a very different sort of security flaw than with Java and Flash. Google Forms was allowing users to create forms asking for information and then send links to other people asking them to fill them out. Which is exactly the intent of the program.
When you ask for a "forcing function" you're requesting a way to let people create forms asking for information in general but not letting them ask for information that people aren't allowed to give out. This may be possible, but it is at least very difficult.
This kind of black-listing of specific domains is, unfortunately, just a game of whack-a-mole that's very hard for defenders to win.
If they're seeing targeted phishing (which the article implies that they are), then the attackers will just observe the drop off in people following the links and move the phishing forms to another domain or service, making it very difficult for the admins to keep up.
Really addressing this kind of problem has to come down to a combination of awareness training and improved authentication techniques (i.e. move away from static username/password combinations)
How about putting a middle page up with a warning?
So a student on the university network clicks a link to google docs and a warning appears warning of potential attacks using google docs, be aware, and click next to continue.
I don't think it's do-able, given the nature of SSL.
TBH, many faculty and staff will be just as bad; though, I question their assertion that anyone interested in a Higgs Boson would let a Uni-admin task distract them. Some physicists may not have any common sense, but are generally more tech-literate than average.
Could they not just block google forms? I don’t see many users entering their username and password into a PowerPoint/Word Document.
Perhaps they could implement some more advanced email filters, e.g. removing all links to google docs, instead of blocking the service for all users?
I'd imagine a mass of the user-base of Oxford uses Google Docs for important things, from group work on a PowerPoint/Word doc, through storing their work in the cloud without the Office Suite.
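On the idea above of filtering docs.google.com links out of external email rather than blocking the whole service: a minimal sketch of such a rewrite step, assuming the mail gateway can hand each inbound message to a content filter (the wiring into the MTA, and the decision to defang rather than reject, are assumptions):

```python
import re
from email import message_from_bytes, policy

SUSPECT = re.compile(r"https?://docs\.google\.com/\S+", re.IGNORECASE)

def defang_google_docs_links(raw_message: bytes) -> bytes:
    """Replace docs.google.com links in the text parts of an inbound message
    with a warning marker, leaving the rest of the message intact."""
    msg = message_from_bytes(raw_message, policy=policy.default)
    for part in msg.walk():
        if part.get_content_type() in ("text/plain", "text/html"):
            text = part.get_content()
            cleaned = SUSPECT.sub("[link to docs.google.com removed by IT]", text)
            if cleaned != text:
                part.set_content(cleaned, subtype=part.get_content_subtype())
    return msg.as_bytes()
```

This only helps for mail that actually passes through the university's inbound gateway, of course, and it would defang legitimate shared documents along with the phishing forms, so it's a blunt instrument too, just a narrower one.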
"Another is that traffic is encrypted. Many educational establishments will have some capability for filtering traffic to malicious URLs as it flows through their network. That’s easy with unencrypted traffic. If the site uses SSL, then you have to do some kind of SSL interception."
Network admins need to learn that looking at what their users do and meddling with their data is not a legitimate activity. They should have learned that long ago. Fortunately, with encryption becoming more widespread, they will have to learn the lesson.
"That’s easy with unencrypted traffic. If the site uses SSL, then you have to do some kind of SSL interception. Straightforward on a corporate network full of tightly-managed systems. Much harder on a network full of student machines, visitor laptops and the like, and in our opinion, something to be avoided."
Obviously, they do not see intercepting traffic as desirable. But who has time to read the article before commenting these days...?
Does SSL prevent the network admin from seeing the URL that is being visited? I did in fact read the article in full, but I was under the impression that the encryption just encrypted the data in the request.
From a read through of the Wikipedia article on SSL it's now clear to me that all HTTP headers are encrypted, including the requested path.
> Does SSL prevent the network admin from seeing the URL that is being visited?
My understanding is that SSL establishes an end-to-end secure channel, and HTTP runs inside that channel. Consequently, GETs and POSTs (including the path and query string) are not visible to outside parties; only the destination hostname and IP address are exposed on the wire.
If it's your network, and you graciously allow me to use it, and I, through my use of your network breach the security of systems on your network, would you not do anything in the interests of not meddling with my data?
That's called operant conditioning. It probably won't work in this case because the consequences of falling for a scam are usually not felt by the victim.
Teaching users is an O(N+T) solution, with N users (that term comes from the time spent teaching) and T the total time spent on computers (that term comes from the time spent being cautious).
How about breaking down the email domains into students, faculty, departments, colleges, etc.? That way it's less disruptive across the board when domains are blocked.
Here's what's next: Oxford blocks roads because criminals are using roads. Oxford blocks food deliveries because criminals are using restaurants to eat.
Seriously now: what's the Microsoft rebate Oxford got for taking such a measure?