"If you’re outraged by accidental breaches, you’d better sit down" (benlog.com)
58 points by sweis on May 17, 2010 | 19 comments



I hope we're really close to a responsibility tipping point with regard to online security, by which I mean a general acceptance by internet users that they, not the sites they visit, must take primary responsibility for their own online security.

This doesn't remove any responsibility from my bank, PayPal, Facebook, etc. But until the average person hears about a potential breach of their privacy and thinks "What can I do differently to prevent this happening again?", breaches will remain fodder for sensational journalism and outraged users... neither of which helps solve the problem.

I'd draw a distinction between core security - like my credit card details on Amazon - and peripheral security - like my photos on Facebook. The latter will tip over first, and I hope it's soon.


But what could I do differently to prevent my bank, or employer, or sites I shop at, or anybody else, from leaking my data? Isn't it pretty much their responsibility to make their site work properly? It's my responsibility to not get taken in by phishing scams, sure, but what am I supposed to do about a bank accidentally losing a CD with accounts on it, or getting their database server compromised?

If anything, I'd suggest it should tip in the other direction, with much stricter liability and penalties for those sorts of information breaches. If somebody promised something and failed to deliver it, that's a pretty classic case of fault. If, on the other hand, they said up front that they might make my data public if they felt like it, then that's another story.

In Facebook's case, it feels a bit like the flip-side of their recent proposal to make TOS legally enforceable. Perhaps make 'em legally enforceable in both directions, then, as a real contract with obligations on both parties?


I agree. Other than trying to avoid phishing scams and using sensible passwords there's not much that the user can do to improve their digital security, especially in the era of cloud computing where much of what goes on from a software perspective is not within the user's realm of responsibility.


"I hope we're really close to a Responsibility tipping point in regards to online security, by which I mean a general acceptance by internet users that they, not the site they visit, must take primary responsibility for their online security."

What does that even mean? Should I be fuzzing any site I think about giving my e-mail address to? There's not even a feasible way of knowing if someone's administratively accessing my account.

On the Internet, like in life, you have to end up trusting somebody.


Bollocks. It's unreasonable, not to mention unrealistic, to shift the balance of responsibility for online security from the relatively small set of professional developers to the mass of individual, non-expert users.

Imagine if banks had a policy that their individual customers were responsible for ensuring that their deposits were secure - banks would collapse regularly (and in fact did collapse regularly before the introduction of deposit insurance).


It would help if users didn't give instant, unconditional, blind trust to cloud services, as the vast majority seem to now. Just taking a moment to consider security before dumping their data in a service they don't really need could make a big difference. But learned helplessness is a tough habit to break.


This study by Microsoft Research might interest you. It argues that the individual user rationally ignores security precautions because their expected loss is less than the cost of a strong security posture.

http://research.microsoft.com/en-us/um/people/cormac/papers/...
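
To make the argument concrete, here is a back-of-the-envelope version in Python (the numbers are mine and purely illustrative, not from the paper):

    # Illustrative numbers only - not taken from the MSR paper.
    p_breach = 0.001        # annual chance the precaution prevents a loss
    loss = 500.0            # dollar loss if that breach happens
    expected_loss = p_breach * loss                    # $0.50 per year

    hours_spent = 10        # yearly time cost of following the advice
    hourly_value = 20.0     # what the user's time is worth
    cost_of_precaution = hours_spent * hourly_value    # $200 per year

    # The "rational" user follows the advice only if it saves more than it costs.
    follow_advice = expected_loss > cost_of_precaution  # False here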


The thing about the Yelp/Facebook bug is that it demonstrates how some of these features have pretty fundamental problems. Facebook gave Yelp full access to their API in a way that meant that any XSS holes in Yelp would result in a breach for Facebook. Yelp had an XSS hole.

Keeping a large site free of XSS is really, really hard (especially if you don't have an escape-by-default policy baked into your template layer). Ensuring your partners are free of XSS is even harder.
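
To illustrate the escape-by-default point, a minimal sketch in Python (render_comment is a made-up helper, just to show the idea; real template engines such as Jinja2 can be configured to autoescape):

    import html

    def render_comment(template, **values):
        # Escape-by-default: every substituted value is HTML-escaped
        # before it reaches the page, so forgetting to escape is impossible.
        safe = {k: html.escape(str(v)) for k, v in values.items()}
        return template.format(**safe)

    user_input = '<script>steal(document.cookie)</script>'
    print(render_comment("<p>{comment}</p>", comment=user_input))
    # -> <p>&lt;script&gt;steal(document.cookie)&lt;/script&gt;</p>  (inert text, not script)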


It seems to me that the major problem is that there are no gradients in security.

With my online bank I am forced to go through the same tedious process whether I want to check my balance or transfer money.

I think that is part of the problem in getting users to care about these matters.
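
A rough sketch of the kind of gradient I mean, in Python (the tiers and actions are hypothetical, just to illustrate):

    # Hypothetical tiers: stronger authentication only for riskier actions.
    AUTH_REQUIRED = {
        "view_balance":    "password",
        "pay_known_payee": "password",
        "add_new_payee":   "password + one-time code",
        "wire_transfer":   "password + one-time code",
    }

    def required_auth(action):
        # Default to the strongest tier for anything not explicitly listed.
        return AUTH_REQUIRED.get(action, "password + one-time code")

    print(required_auth("view_balance"))   # password
    print(required_auth("wire_transfer"))  # password + one-time code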

My own thinking on this is a "Ghost Protocol" style of peer-to-peer negotiation, where trust is built over time rather than defined by a login. This won't be happening anytime soon, but the username/password approach isn't going to cut it moving forward, IMHO.


The world is not as black and white as this article suggests.

"We do not know how to write secure software."

100% secure? No, perhaps not. But we know a few things that could be done much better than they are today.

For one thing, you can't leak information that you don't have. Companies today tend to default to grabbing more information for their databases and keeping it around indefinitely. We need much stronger data protection laws than most countries have today to counteract this.

For another thing, "pushing the envelope and releasing features as quickly as possible to outpace your competitors" is not an excuse for not running a decent software development process. Seriously, Google was collecting that data for three years and no-one had looked at any of it and thought something was wrong? The costs of phishing and identity theft, even in purely economic terms, are already significant, and they are rising and don't take into account the more important human suffering aspect. A statutory fine of, say, $10,000 per individual per negligent breach should focus attention a little more on getting basic testing right and doing something about problems before they happen, and a little less on trying to infringe everyone's privacy faster than any competitor can.
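
To put a purely hypothetical number on that: a negligent breach exposing 100,000 customers would mean statutory exposure of 100,000 × $10,000 = $1 billion.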

As I have noted before, such a level of fines would be an unacceptable risk for many businesses and they would probably have to stop collecting and retaining personal information at all. I have no problem with this, as long as the negligence criterion is applied sensibly. Most companies don't need to store a lot of data about us, and those for whom it is an important part of their business model can learn to take reasonable steps to prevent abuse.


As much as I would like to see this, without a stable industry consensus on best practices, there is insufficient foundation for a legal definition of negligence.


An alternative possibility is that evidence of "reasonable" behaviour would be heard on a case by case basis in court, and case law would start to show the minimum standards expected. Some practices may be more hype than substance, and anyone alleging a problem because something was not done would presumably have to justify why that action should have been taken. On the other hand, I think you would find a lot of consensus and evidence-based argument in some areas of security, and there is really no excuse for not following those practices if you're in the business of managing sensitive data.


If we are going to shift responsibility from attackers to the attacked, there needs to be a clear and reasonable way for security non-experts to ensure that they are doing their due diligence, and for non-experts in the legal system to be confident that they aren't.

For now, any clown can come along and tell someone that they are secure, and another clown can verify that the first clown is an official certified non-clown. It may be perfectly obvious to you that these people are incompetent, but to the non-expert it ultimately comes down to your word vs theirs. Even the clowns may not have an objective way of knowing that they are clowns.

Without an authoritative standard that provides a root of trust to non-experts, everything falls apart. That standard requires consensus, which requires a critical mass of competent experts, which requires the state of the art to keep up with demand and change, which is not going to happen in the field of security any time soon because there is too much demand, too much change, and it's just too damn hard.


> If we are going to shift responsibility from attackers to the attacked, there needs to be a clear and reasonable way for security non-experts to ensure that they are doing their due diligence, and for non-experts in the legal system to be confident that they aren't.

I'm sorry, but I don't entirely agree with that. I think if you deliberately take on a role that is potentially damaging to others, such as collecting large quantities of personal data, then to some extent the onus has to be on you to be competent in how you protect that data.

The whole premise of using words like "reasonable" in statutes and then developing case law over time is that you can't always predict every single possibility in complete detail ahead of time, and sometimes you have to make a judgement in the circumstances of a particular case about whether someone's actions were reasonable.

This problem isn't specific to IT security, of course. After all, any time you employ a new member of staff as a company, you are to some extent trusting that they (and perhaps their references) have been honest in their representations to you during the recruitment process. And yet, we still have businesses with employees, even though it is an uncertain world, and for the most part this works fine anyway.

> Without an authoritative standard that provides a root of trust to non-experts, everything falls apart.

I respectfully disagree. It doesn't take a big, formal spec to understand that you should hold sensitive information behind a secure access mechanism, and that sensitive data should be protected to avoid leaks in transit or on disposal of hardware.

If someone doesn't understand and follow these basic principles, then perhaps handling sensitive data is not a good career choice for them or their business. I would have no problem with a legal action against an organisation that compromised, say, many people's bank credentials, by leaving them on an unencrypted USB stick on a train.
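
The kind of basic step I have in mind is something like this (a minimal sketch in Python, assuming the third-party cryptography package and its Fernet recipe; the file names are made up):

    # Encrypt the sensitive export before it ever touches removable media,
    # so a lost USB stick exposes only ciphertext.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # keep this key somewhere other than the stick
    f = Fernet(key)

    with open("accounts_export.csv", "rb") as src:
        ciphertext = f.encrypt(src.read())

    with open("/media/usb/accounts_export.enc", "wb") as dst:
        dst.write(ciphertext)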

Arguing about whether a certain state-of-the-art or controversial security process or tool was necessary and whether failing to use such a thing was negligent is for courts to consider, based on the testimony and possibly expert statements from both sides. If anything, this process and a reasonable burden of proof on the prosecution seem more sensible to me than trying to reduce a fast-moving field like IT security to a fixed checklist that might be obsolete within days of completion.


The things you describe are reasonable in general, but they break down under the extremes of present-day security, and of software development too. The problem is that most people who think they can do it can't. That is, most professionals will take on more challenging problems than they are practically qualified for. If the bulk of an industry can't judge its own competence, customers and courts have no chance of doing so. And yet this is increasingly a requirement for doing business at all. You can't simply tell people to stop using computers.

If you can't define "reasonable" for the purposes of writing a law, it's a good indication that people won't know what it means when they try to obey the law. You can't make laws that nobody knows how to follow.


The chat bug was pretty stupid; how on earth did they manage to create a bug like that? (Seriously, it amazes me.) It seems like they have an awful authentication system - these things should be checked over before going online.


This is right on the money.


[dead]


Don't just downmod it - click here and flag it:

http://news.ycombinator.com/item?id=1353889


What on earth was it?




