Information Security: "We Can Do It, We Just Choose Not To" (hezmatt.org)
88 points by bo0tzz 3 months ago | 77 comments



It's kind of bizarre to me that people nowadays don't think of their address as public information. In the US at least, until about a decade ago, we would get a physical book in the mail every year listing the address and phone number of everyone in our metro area, for free, whether we wanted it or not.

You enter your home from public space. It's really not hard for a stalker, private investigator, or paparazzo to find you if they actually try, so long as you ever go out in public. Your address is readily available to anyone who wants to seek it out, and it's not particularly private information in any practical sense.

Every time my address is leaked online, I am not worried in the least. I don't really need to do anything. It's readily available on any number of information sites online anyway.

When my credit card gets leaked, it's a major headache regardless of whether I'm liable for the charges. Now I have to cancel my card, change all my automatic subscriptions, and re-enter it on every single shopping site. It takes literally months for all of that to resolve.


That's the naive, black-and-white way to think about security: if it's possible to learn something with a bit of work, then why bother keeping it secret?

The answer is that the bit of work - "really not hard" as you put it - actually can be quite hard, and it is a real deterrent.

You don't really care about whether something bad can happen... you care about whether it is likely to happen. It's a probability, and making it hard to find your address reduces that probability.

Also, I would suggest that HN members are unlikely to be stalked. You might feel differently if you were a Twitch girl or whatever.


I mean, isn't security by obscurity generally accepted as bad practice?

If everyone treated it as easily available data, and stopped using mere possession of it as "proof" of anything, we could be much more secure. E.g. merely having someone's address should not be enough to get their house swatted.


This catch phrase can be used to prove too much. All camouflage is bad? Hiding is useless? No, of course not. These things have deep, evolutionary roots, as a way of getting an edge in nature and in war.

In its original context, relying on obscurity alone as your only defense isn't recommended when there are better alternatives like real authentication and encryption. Also, hiding isn't an option for things necessarily done in public.

But it's still defense in depth when you can do it. People can just show up at your doorstep and that's a hassle or worse.

There is also the crazy ex scenario. We should probably avoid assuming everyone has the same security needs.


It is considered bad practice when used instead of an obviously better alternative. E.g. running a service on an obscure port instead of using a password. Or having a hard-coded admin password instead of forcing the user to pick one.

But when it's in addition to good measures, then it generally improves security.


It's defense in depth.

I use both obscure ports AND strong passwords or keys.

Whether we like it or not, in practice the obscure ports stop a ton of drive-by bots.

Even if you are using keys, the obscure port stops a bot with a 0-day exploit from getting in.
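
For the curious, the whole combination is a couple of directives in OpenSSH's sshd_config. A minimal sketch (the port number here is just an example, pick your own):

    # Listen somewhere other than 22 to dodge drive-by scanners
    Port 49622
    # Keys only - nothing for a password-guessing bot to brute-force
    PasswordAuthentication no
    PubkeyAuthentication yes
    PermitRootLogin no

The obscure port costs a bot a full port scan just to find the service; the keys stop the ones that find it anyway.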


> I mean, isn't security by obscurity generally accepted as bad practice?

That's an oversimplification. Obscurity is generally a really thin layer of security - not nothing, but if people think of it as "real" security then they neglect other things and just have the inadequate layer that is obscurity. By way of analogy - if you add a 3-character password to a system, it is strictly more secure than without that password. But if you think "oh, I have a password, so I'm safe and don't need anything else" then you will get owned the first time someone takes an actual run at your security. A system that depends on obscurity is probably doomed to failure, but that doesn't make its value zero, just low.
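
To put rough numbers on the analogy: 3 characters drawn from the 95 printable ASCII characters gives 95^3 = 857,375 possibilities, which an offline attacker exhausts in a fraction of a second. Strictly more work than zero, but not meaningfully more.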


Think around the edges: you could opt out of phone books because some people really needed the ability to do that, and that was in the era before bulk operations were easy. For someone targeted, yes, you could look them up and even get phone books from other areas, but the kinds of data-mining and cross-referencing that are now trivial were cost-prohibitive.


At least in the US, property ownership has long been public record. You can look up who owns any parcel of land, and you can't opt out of your address being public. It's just somewhat easier to search this data in today's era.


Not everyone owns a house, and people with privacy concerns have for a long time used companies or trusts to hold property or to simplify family ownership (my next-door neighbors are nowhere near rich but used a family trust to simplify inheritance when the owner started getting old).


There are apparently ways to opt out of that too, with a land trust or shell corporation.


Every time one of these garbage companies sends me a letter telling me they lost my information in a breach but don’t worry they are giving me FREE CREDIT MONITORING!!1! they should have to put $50 inside.

I think that would go a long way towards solving the problem.


Credit monitoring is really an indictment of the entire system: “if our negligence combines with someone else’s negligence, you might find out sooner”


A large group of Americans instead view this as proof that capitalism is working. (I’m not one of them.) Checks and balances, never mind that we lack the leverage to extract more than “credit monitoring and lawyers get paid”, or that it requires a civil judiciary that we pay for.


I don’t think lawsuits as a regulatory mechanism is an intrinsically unworkable idea, but clearly the system as it stands isn’t working very well.


Yes, the companies funding the libertarian movement have gotten their money’s worth back many times over. One of the most obvious examples of why this isn’t sufficient is that the arrangement is unilateral and there’s no practical way to opt-out of binding arbitration (which should be illegal) or to negotiate a price based on actual damages.


No. That would just put a market price on your dignity. Fifty bucks is so cheap that they'd happily pay it.

Please forgive me... I don't mean this as a personal insult, but a better system would be one where you get fined $50 for being stupid enough to give them the data in the first place.


> where you get fined $50 for being stupid enough to give them the data in the first place.

So … you don’t use banks, utilities, phone companies, healthcare, etc. and don’t apply for or accept non-anonymous jobs? This isn’t optional in many cases, which is why it really needs to be covered by legislation which shifts the cost to the company collecting that data.


A better system would be one where the company paid a fine starting at $500 per record. Also, not all consumers who have had their data stolen gave it to the organization willingly.


Completely agree on this. A company-killing level of fine, based on the number of records exposed, is appropriate. Then insurance companies would not insure companies unless they passed stringent audits around best practice and data hygiene.


There ought to be case studies about Transunion in the same classes in business schools where they discuss Bear Stearns.


I’m no more insulted than if you told me I’m a fool because I don’t live in a cave in the Himalayas chanting mantras. Our visions of a life well lived are too different for you to be able to insult me.


> Our visions of a life well lived are too different

Well that didn't come over as humorless, tone-deaf, pompous or over-sensitive at all, thank goodness. I'll just get back to grunting and waving a jaw-bone in my cave, eating grubs and worms and smearing myself with my own faeces, then, shall I?


Matthew 7:3-5


It depends. Some industries would be able to happily pay that. Others have tight enough margins that they might feel real impact. Even 1 million users x $50 is a huge sum for most companies.


Data breach insurance is not helping.

Using weak passwords, leaving credentials where others can see them and downloading infected files can all lead to compromised data. Data breach insurance is specifically designed to protect a company in the aftermath of such an unexpected event.

Business correctly recognizes data breaches as a risk. The insurance industry allows companies to export that risk to insurers. Data breach insurance pays for the financial impact on the business as a result of a data breach. This does not protect customers and in fact creates misaligned incentives between a business and its customers.

One solution would be to legislate that insurance is not acceptable for an organization to mitigate cyber risk. States and the federal government could do this by passing a law. I don’t see something like that getting passed though. The insurance industry and every business, both large and small, will lobby hard against it. You’d really need a strong grassroots consumer advocacy group to push hard for this, something that tells people’s personal stories to the media.


We don’t need legislation, just dramatically increased penalties for a breach. Then insurance premiums drastically increase unless you’ve done x, y, z.

I’m not all “free market solves it all”, but the market does work well at balancing money through the system. We just haven’t correctly priced a data breach.


Exactly. At $500 or $1000 per record, PII starts to look highly radioactive. Companies would be avoiding its collection with a passion, and those that had to hold it would be compelled to do so less stupidly.
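
For scale: at $500 per record, a breach of 10 million records would be a $5 billion fine. Even the $50-per-person figure suggested upthread would make that same breach a $500 million event.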


My first thought would be that insurance could be workable, but it might be too cheap right now. I would expect that when a company signs up for such a thing, they get audited and charged according to risk - a well run organization might find it a tiny cost, while say a company storing passwords in plaintext might find that their monthly bill is ruinous because they're practically guaranteed to be breached and subsequently fined into oblivion. Insurance shouldn't so much remove risk as amortize it.


Yep. Seen it in action.

If we store credit card information we need to be PCI compliant. Let’s outsource that, then.

All stored personal information needs to be PCI compliant.


You have no obligation to anyone (except the government) when they ask for your information. Lying is a valid defense to this infosec ineptitude. Virtual credit card numbers are also a leap in control. Use them.


You do have a legal obligation for anything financial where KYC laws apply, and I wouldn’t try that with airline tickets, either. You might also have issues around breach of contract if you’re lying about something which might have affected their willingness to allow service – this is likely not to go beyond cancelling your account but you should think about the magnitude. Nobody’s going to care about your grocery store loyalty program, but if there’s money or copyrighted material involved you might want to weigh your willingness to be a legal test case.


Airfare can be bought with virtual CCs. The REAL ID requirement is another story altogether.


An alternate or temporary card number isn’t providing false information. It’s also less sensitive since they know not to store it and, at least for Americans, your liability is lower - you can rotate numbers easily, but not your PII.


Exactly. Just give false info, even when buying products, so when a breach occurs you won't have any headaches about what was leaked and how.


You can argue for fines and prison sentences, but on HN that won't accomplish anything. Here the only workable solution is a technical one; there's plenty of expertise available and people able to implement things.

I do security by not having things I don't need: printing documents and deleting the data. It's not perfect by itself, but it is something we could model in hardware quite well.

One-way tubes seem pretty easy.

For access, one could give each employee a query quota and, if they exceed it, have someone else increase it temporarily or permanently.
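
A toy sketch of that quota idea in Python (all names and numbers illustrative, not a real access-control system):

    import time
    from collections import defaultdict

    class QueryQuota:
        """Per-employee daily lookup budget; exceeding it requires a
        second person to grant more - the escalation described above."""

        def __init__(self, daily_limit=50):
            self.daily_limit = daily_limit
            self.extra = defaultdict(int)    # employee -> granted increase
            self.used = defaultdict(int)     # (employee, day) -> lookups so far

        def allow(self, employee):
            key = (employee, time.strftime("%Y-%m-%d"))
            if self.used[key] >= self.daily_limit + self.extra[employee]:
                return False                 # blocked: employee must escalate
            self.used[key] += 1
            return True

        def grant(self, approver, employee, amount):
            assert approver != employee      # increase must come from someone else
            self.extra[employee] += amount

    quota = QueryQuota(daily_limit=2)
    assert quota.allow("alice") and quota.allow("alice")
    assert not quota.allow("alice")          # third lookup today is blocked
    quota.grant("bob", "alice", 5)           # a colleague raises the quota
    assert quota.allow("alice")

The mechanism doesn't matter much; the point is that bulk access becomes impossible without leaving a trail of explicit approvals.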

One could also make a dumb console that displays data on a screen: db tables, pdf files, images.

Some business logic could be built in hardware too. More often than not the need for access is triggered by something. If the customer calls you, some of their information can be displayed; accessing it in the days after that isn't dubious.

It takes a lot and makes things more complicated but in the end you do get nice small data sets to work with.


The GDPR may be a pain in the ass to properly implement, and certain parts of it are a bureaucrat’s wet dream, but it sets the right incentives. If you read the general rules, it’s just common sense: only keep what you need, take as many steps to secure data as you can, tell users proactively what you’d like to do with it, ask for their consent, and delete whenever they request it.
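
A crude sketch of the "only keep what you need" rule, assuming records tagged with a category and a collection date (categories and retention periods illustrative):

    from datetime import datetime, timedelta

    # How long each kind of record is genuinely needed (numbers illustrative)
    RETENTION = {
        "invoice": timedelta(days=3650),        # tax law may require ~10 years
        "support_ticket": timedelta(days=365),
        "marketing_lead": timedelta(days=90),
    }

    def sweep(records):
        """Keep only records still inside their retention window."""
        now = datetime.now()
        return [r for r in records
                if now - r["collected_at"] <= RETENTION[r["category"]]]

Anything that falls out of the window simply stops existing, which is the cheapest security there is.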

It all sounds like lots of additional IT work, and it is (I spend a lot of time at our company trying to improve things). But it only seems like a hassle because we went for so long without doing it right.

There must be a way to let human dignity be the lowest common denominator for shareholder value…


I like this description of major problems with the GDPR as usually stemming either from actually abusing people's data, or from having run up a huge pile of technical debt around data handling: https://reddragdiva.dreamwidth.org/606812.html


No, one cannot just comply with the "general rules" of the GDPR; you have to comply with every last letter of a considerable body of legislation. The fact that the rules can be generalised to a reasonable few paragraphs is meaningless.


That’s just not true. I’ve consulted with a few privacy legal agencies and spent a lot of time evaluating the law. Some sections are even worded in a way that allows wiggle room for prosecutors, or requires good will on your part. What would even be supposed to happen if you weren’t “compliant”? In the end it’s always about specific kinds of misconduct, and that means fines. The amount of a fine depends on the severity of the misconduct. The GDPR isn’t different at all from other laws in that regard.

If you’re found to be in breach of the GDPR, the severity of the breach as well as the degree of negligence or malevolence on your part is taken into consideration when deciding on the fine. The prosecuting authority also doesn’t have to actually fine you if it’s clear you put in effort and acted in good faith.

For a concrete example, a startup usually isn’t required to provide a fully fledged data deletion policy, but if you cannot roughly outline how you intend to handle people’s requests to delete their data, that doesn’t look good. If you don’t even have some sort of privacy policy on your website, that looks worse.

Nobody can implement the GDPR 100%. But you can try to handle data responsibly, and if someone discovers you don’t and you try your best to fix the error (which is on your part, mind you), nothing draconian is going to happen.

And we’re still talking about basic respect towards your users or customers here, it’s not like someone asks something ridiculous of you.


> you have to comply with every last letter

Cite, please.

Perhaps regulators in different countries take different attitudes; in the UK, it's very soft-touch. Only the most egregious, repeated flouting of the regs attracts a penalty.

As far as I can see, the Irish regulator is even softer; you could be mistaken for thinking that the Irish regulator's job is to make sure that US tech companies don't move their server sheds away from Ireland.


> Information Security: "We Can Do It, We Just Choose Not To"

Maybe not.

It's convenient to think that misaligned incentives [0] or insufficient motives [1] explain failures of infosec. These are popular explanations amongst tech people, because we want to believe infosec can work. Our jobs depend on it.

Now there are gargantuan fines, shelves of regulation, auditing and compliance, even jail time for executives. Has it fixed anything? No. If anything, the pace of breaches is accelerating, and things like Microsoft Recall and cloud "AI" services are only going to amplify it. Even if we had a "corporate death penalty" that simply shut down companies on their first breach, it would fix nothing. We'd just get fly-by-night tech companies with an average lifespan of 18 months.

What if the people who said "Data wants to be free" are right? What if data containment is impossible in principle?

Once we put aside wishful thinking, how can a technological society survive? It requires a radical and brutal re-thinking of cybersecurity: how we define it, how we teach it, how we legislate it, how we address harms.

[0] Bob secures Alice's data while Alice pays the price for Bob's failure

[1] Many people don't care. Not everyone has a security mindset, not because they lack intrinsic self-respect but because they are unable to comprehend the harms.


> Now there are gargantuan fines, shelves of regulation, auditing and compliance, even jail time for executives.

What companies paid a big enough fine to have an unprofitable year? Which executives are sitting in prison?

These things only exist in theory, not practice.

I’ll give you endless reams of pointless box-checking exercises in the name of auditing and compliance.


They're not pointless if you're in the Guild of Box Ticking Consultancies, and guess who gets to say what the boxes are?


That’s the thing. There’s a ton of grifters and/or idiots in the compliance space. If you talk to an actual lawyer that specializes in SOX litigation, or similar, you’ll find that many of the measures your compliance or fake-infosec people are telling you that you have to do aren’t actually required by any law or regulation.


Maybe we need a government pentesting agency that fines companies without waiting for the first breach.


> a government pentesting agency that fines companies without waiting for the first breach.

I've heard serious suggestions floated for a tax- and contribution-funded pentesting agency that helps companies without waiting for the first breach. But I think the scale of it all is just a bit much.


Credit card data gets outsourced; apart from that, what else could be done differently, given that that data is better secured than the more important personal data?


Not having the data at all.

It's almost impossible for those of us who've grown up in the last 40 years of commercial computing to imagine.

But it's possible to radically decouple identity from function.


Precisely. Tell it to the marketing director; you can watch him go hairless in real time.


And there you have it. The "technology industry" hasn't had anything to do with computer scientists, engineers or technology people in about 20 years. It's run entirely by marketing people.

They just keep the eggheads around as pets.


Technology is always developed to earn more monies. Progress is just a side-effect of monetary greed.

It's been said thrice in this thread, but when you remove the monetary greed, you get Mullvad VPN.

P.S.: Yes, I say "monies" specifically, because in my experience the people who use "monies" as job parlance have the most monetary greed.


I'd imagine there is quite a lot of legal pressure against doing that. Not knowing who your customers are seems like the sort of thing that would eventually involve lawyers.

mullvad.net was interesting to me because I could pay with Monero, meaning that they may actually have no data about me whatsoever except whatever is technically required for a VPN connection. Pretty cool company but it seems like the sort of model that would struggle in most countries with the amount of financial monitoring that tends to be in place.


You're taking it too far. KYC concerns legal entities, that's a different story. In regards to individuals and their privacy, there are ways to greatly minimize personal information processing in "plain" form while keeping records (or access to records) to fulfill legal obligations.

Having had conversations with people on security and anti-fraud teams, I can say many experts clearly share this view.


Could still be done by re-incorporating companies in countries without KYC laws. And if fines for being breached get high enough, that's probably what would happen.

That would probably be bad for tax receipts, though, so it's more realistic that there's an upper bound on infosec-related fines.


Mullvad VPN does this. No email. No password. Just an account number. If you want to send them an envelope full of cash, you can.
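
Not their actual code, obviously, but the pattern can be sketched in a few lines: a random account number as the only credential, so there's nothing personal to breach.

    import hashlib
    import secrets

    def new_account():
        # 16 random digits; no email, password, or name is ever collected
        return "".join(str(secrets.randbelow(10)) for _ in range(16))

    def account_key(number):
        # the service needs to store only this hash plus a paid-until date
        return hashlib.sha256(number.encode()).hexdigest()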


The embarrassing truth is that we can't do it.

Humans are very bad at security.


Suppose that the laws were changed so that breaching personal data meant the CEO had to personally visit each affected customer and apologize face to face. Would it really be the case that “we can’t do it” because everyone is being popped by elite Chinese military operations, or would it instead magically turn out that companies could cut into the executive bonus fund by 10% to resolve understaffing for boring O&M work, and maybe reconsider collecting so much data in the first place? As the post notes, breaches of the data where they have real penalties are much rarer.


A counterpoint being: most people assume other people can be trusted.

It's a small percentage of people hacking (in a malicious way) but the reach of the internet means we're all vulnerable.


“Can’t” in this context mostly means “aren’t willing to, given current incentives.”

Remove the ability to do online or offline credit card transactions without dedicated hardware for chip-and-PIN, thus eliminating the value of stolen credit card numbers. “Are you crazy? We can’t do that, customers would use a different credit card!”

Change the incentives so credit card companies would be personally liable for any fraudulent transaction and suddenly everything changes.


> Change the incentives so credit card companies would be personally liable for any fraudulent transaction and suddenly everything changes.

They already are, which is why CC numbers are secured and all the other important info is not. This is exactly the point of the article.


The article is talking about retailer data breaches, not credit card companies.

If someone steals your CC and buys a bunch of stuff, there are four parties who could be stuck with the bill: you, the merchant, Visa, and the bank that issued the card. Right now Visa never pays, though they still get a little hassle from such transactions. If you don’t notice, you might get the bill, and under special circumstances the bank might get stuck with it, but mostly it falls to the merchant. https://www.nerdwallet.com/article/credit-cards/merchants-vi...

However, if Visa/Mastercard etc. had some actual liability, you can bet there would be some real changes.


There's definitely some movement towards making raw card numbers obsolete - https://www.spreedly.com/blog/network-tokenization-explained - however, this would take some time.
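
The gist of it, sketched (in reality the token is issued and vaulted by the card network or issuer, never held by the merchant):

    import secrets

    _vault = {}   # lives inside the payment network, not at the merchant

    def tokenize(pan):
        token = "tok_" + secrets.token_hex(8)
        _vault[token] = pan       # only the vault can map the token back
        return token              # merchant stores this instead of the card number

    def detokenize(token):
        return _vault[token]      # happens only inside the network at charge time

A breached merchant then leaks tokens that are useless anywhere else, which is the same trick that makes credit card numbers less radioactive than the PII discussed above.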


Put CEOs and shareholders in prison for data breaches and watch how magically humans become amazing at security overnight.


Are you sure about that? People commit financial crimes with possible jail time all the time. Even putting the CEO's life directly in the line of danger does not work; see OceanGate.

I still agree that the punishment for these crimes is too soft, but even ramping it up to insane levels isn't going to make everything perfect.


I am sure, yeah… Give Musk and Zuck and the rest of them a mandatory prison sentence of no less than 5 years per breach - all problems will be solved by lunch.


They really, really won't.

Like, I'm in favor of personal liability for execs who willfully sacrifice everything and everyone else for their own increased profit as much as the next guy. But there are at least two major problems with your statement:

1) The kinds of infrastructural improvements needed to genuinely increase security are likely to take significant time and money to put in place—and the money, in many cases, will also mean more time. We're talking years in some cases, even if people are moving at the fastest pace they can while still being responsible.

2) Security is a genuinely hard problem. No matter how good your procedures, your hardware, and your software, humans still have to interact with the data, and humans will always be fallible. Social engineering, blackmail, revenge, and just plain carelessness will always put data at risk, even if the company as a whole is fully and wholeheartedly committed to security.

So are you going to put the heads of your local credit union in prison if someone in their IT department is disgruntled about not getting a promotion they think they're entitled to, and decides to stick it to the man by stealing the DB of social security numbers and selling it on the dark web? (Or whatever other scenario you can think of)


> Humans are very bad at security.

Security for whom, from whom, and to what end?

I think many humans are bad at it. A smaller group, maybe 10 percent, have the "security mindset".

For the ninety percent who use technology or run businesses, it's a schlep and an imposition. They just want to forget about it, and there are many psychological and cultural devices to help them ignore security.

Around 8 of the remaining 10 percent are on what I see as the "dark side". They are guards for the castles, primarily concerned with _helping_ technological abusers take advantage of the majority's weakness.


I agree that it's mostly about mindset. Most people don't really care about physical security either. Sure, many people say they care about it, but many don't follow basic safety patterns because they don't know them, find them burdensome, etc. Just basic stuff like placing valuables out of sight and locking car doors, closing blinds or curtains at night, having a halfway decent deadbolt and using it, or having protective film on your windows (depending on the area), etc.

Same thing with tech. Most people only run backups of their system after they've lost data and felt pain at some point. I would guess most people have maybe 3 passwords and just reuse them across everything on the internet. The only people who might be more security-minded are the ones who do related work for a living or have had a security incident happen to them. Nobody else cares.


Companies can do it, but they refuse to, because no one knows what entity is really behind a specific corporation. So when a breach occurs, it's basically: we handed your information over to whomever, just as untraceably as when we originally collected it.


> The embarrassing truth is that we can't do it.

If what you mean is that we find ourselves unable to do perfect security, then that's clearly true but it's missing the point.

The point of the article is that we do better security for credit card numbers than we do for other information which is more sensitive to our customers. Why do we do better with these? The author's claim is that it is because of the incentives (although, working in the industry, I would say it is also because the credit card industry wrote policies mandating specific, detailed security practices).


We can at least be very good at it, if we accept the UX cost.


The UX cost and the other opportunity cost, yeah.


We can do it, see https://qubes-os.org


I call this the Barack Obama School of Philosophy:

Yes we can. (But we won't.)




