
If companies practiced data minimisation, and end-to-end encrypted their customers' data that they don't need to see, fewer of these breaches would happen because there would be no incentive to break in. But intelligence agencies insist on having access to innocent citizens' conversations.



> But intelligence agencies insist on having access to innocent citizens' conversations.

That's part of the problem. But companies also are unwilling to pay to do any of the things that you've described. There is no punishment or fine that is actually punitive. Protecting (short term) profit is more important than protecting users' data --- it's even more important than protecting the (long term) profit potential of a company with a good reputation.

Until the data breaches lead to serious $$$ impact for the company, the impact of these breaches will simply be waved off and pushed down to users. ("Sorry, we didn't protect your stuff at all. But, here's some credit monitoring!") Even in the profession of software development and engineering, very few people actually take data security seriously. There's lots of talk in the industry, but also lots of pisspoor practices when it comes to actually implementing the tech in a business.


Companies already pay for cyber insurance because they don't want to take on this risk themselves.

In principle the insurance company then dictates security requirements back to the company in order to keep the premiums manageable.

However, in practice the insurance company has no deep understanding of the company and so the security requirements are blunt and ineffective at preventing breaches. They are very effective at covering the asses of the decision makers though... "we tried: look we implemented all these policies and bought this security software and installed it on our machines! Nobody could possibly have prevented such an advanced attack that bypassed all these precautions!"

Another problem is that often the IT at large enterprises is functionally incompetent. Even when the individual people are smart and incentivised (which is no guarantee) the entire department is steeped in legacy ways of doing things and caught between petty power struggles of executives. You can't fix that with financial incentives because most of these companies would go bankrupt before figuring out how to change.

I don't see things improving unless someone spoon-feeds these companies solutions to these problems in a low risk (ie. nobody's going to get fired over implementing them) way.


The typical IT department in a large corporation is way too big to have reasonable visibility into what it manages. There's no way to build reasonable controls that work when you have 50K programmers on staff. It's purely a matter of size.

Often the end result is having just enough red tape to turn a 2 week project into an 8 month project, and yet not enough as to make sure it's impossible for someone to, say, build a data lake into a new cloud for some reports that just happen to have names, addresses and emails. Too big to manage.


Which gets back to the original point, that the real answer is to minimize how much data is held in the first place. Controls will always be insufficient to prevent breaches. Companies and organizations should keep less data, keep it for less time, and try harder to avoid collecting PII in the first place.


I don't disagree with you but as someone who has thought a moderate amount about data security at a "bigco", I will point out something I haven't seen people really talk about...

Audit trails (of who did/saw what in a system) and PII-reduction (so you don't know who did what) are fundamentally at odds.

Assuming you are already handling "sensitive PII" (SSNs/payroll/HIPAA/credit card numbers) appropriately, which constitutes security best practice: PII-reduction or audit-reduction?


Let's say the CEO agrees with you and is horrified by any amount of unnecessary data being stored.

How would they then enforce this in a large company with 50k programmers? This was what the previous post was discussing.

Not to mention, a lot of this data is necessary. If you're invoicing, you need to store your customers' names and many other kinds of sensitive data; you are legally required to do so.


Culture change. The CEO can push for top down culture change to get people to care about this stuff. Make it their job to care. Engage their passion to care.

It’s not easy, but it can move the needle over time.


That is easier said than done. To achieve that, effectively every employee who has any relation to data needs to be constantly vigilant about keeping PII to a minimum and properly secured.

It is often much easier to use an email address or an SSN when a randomly generated ID, or even a hash of the original data, would work fine.
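
For illustration, a minimal sketch of that idea: swapping the raw identifier for a keyed hash. The key name and how it's managed here are assumptions, not a prescription.

    # Replace a raw identifier (email, SSN) with a keyed hash so the stored
    # value is still a stable join key but no longer directly identifies anyone.
    # An unkeyed hash of an SSN is brute-forceable, so use HMAC with a secret
    # key kept out of band (PSEUDONYM_KEY below is a placeholder).
    import hmac, hashlib

    PSEUDONYM_KEY = b"load-me-from-a-secrets-manager"

    def pseudonymize(identifier: str) -> str:
        digest = hmac.new(PSEUDONYM_KEY,
                          identifier.strip().lower().encode(),
                          hashlib.sha256)
        return digest.hexdigest()

    # Same input -> same opaque token, so joins and analytics still work
    # without the raw value ever being stored.
    print(pseudonymize("jane.doe@example.com"))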

I'm not saying that we shouldn't put more effort into reducing the amount of data kept, but it isn't as simple as just saying "collect less data".

And sometimes you can't avoid keeping PII.


There's another side to it, which you allude to with the credit monitoring giveaways that follow data breaches. The whole reason the data is valuable is account takeover and identity theft, because identity verification relies on information that is largely publicly available, or at least discoverable, even without breaches. But no one wants to put in the effort to do appropriate identity verification, and consumers don't want to be bothered with stricter verification hoops and delays---they'll just go to a competitor who isn't as strict.

So we could make the PII less valuable by not using it for things that attract fraudsters.


Hell, in this instance, just replacing non-EOL equipment that had known vulnerabilities would have gone a long way. We're talking routing infrastructure with implants designed years ago, still vulnerable and shuffling data internally.


The "problem" is noone cares and certainly doesn't want to pay for the costs, especially the end users. That EOL equipment still works, there are next to no practical problems for the vast vast vast vast vast vast vast majority of people. You cannot convince them that this is a problem (for them) worth spending (their) money on.

Even during the best of times people simply do not give a fuck about privacy.

Honestly, if there is a problem at all I would say it's the uselessness of the Intelligence Community when actually posed with an espionage attack on our national security. FBI and CISA's response has been "Can't do; don't use." and I haven't heard a peep from the CIA or NSA.


Until companies are held liable for security failures they could have and should have prevented, there's no incentive for anyone to do anything. As long as the cost of replacing hardware, securing software, and hiring experienced professionals to manage everything is higher than the cost of suffering a data breach, companies aren't going to do anything.

I've seen the same thing at previous jobs; I had a lot to do and knew of a lot of security issues that could potentially cause us problems, but management wasn't willing to give me any more resources (like hiring someone else) despite increasing my workload and responsibilities for no extra pay. Surprise: one of our game's beta testers discovered a misconfigured firewall and default password and got access to one of our backend MySQL servers. Thankfully they reported it to us right away, but... geez.


>The "problem" is noone cares and certainly doesn't want to pay for the costs, especially the end users

Well, I care. I’d pay a premium to a telco that prioritized security and privacy. But they are all terrible: hoovering up data, selling it indiscriminately and not protecting it. If they all suck then the default is to use the cheapest.

It’s definitely why I use Apple devices because I can buy directly from Apple and they don’t allow carriers to install their “junkware”.


That EOL equipment probably shouldn't be EOL though. Part of the blame should go to equipment makers that didn't bother to send out updates to fix the vulnerability in still functional equipment.


Another issue is lack of education/training/awareness among developers.

A BS in CS has maybe one class on security, and then maybe employees have a yearly hour-long seminar on security to remind them to think about security. That isn't enough. And the security team and engineers that put the effort into learning more about security and privacy often aren't enough to guard against every possible problem.


But AT&T and their 42,690 partners say they value my privacy :(


They do value your privacy! They just don’t like to share how many cents it’s worth to them.


Apple seems to be willing to spend money on this kinda stuff. But the reason why they do this is because it allows them to differentiate their offering from the others, with privacy being part of the "luxury package", so to speak. That is - their incentive to do so is tied to it not being the norm.


Apple and Google care about this because they handle more customer data and require more customer trust than most companies.

People were shitting a brick over a pretty minor change in photo and location processing at Apple. That’s because they don’t screw up like this.


The point is that Apple specifically goes out of the way to avoid having customer data in the first place.

(Google, on the other hand, is the opposite.)

But, as far as I can tell, the only reason why Apple does this is because privacy these days can be sold as a premium, luxury feature.


I work in internal tools development, aka platform engineering, and this is interesting:

> That's part of the problem. But companies also are unwilling to pay to do any of the things that you've described. There is no punishment or fine that is actually punitive. Protecting (short term) profit is more important than protecting users' data --- it's even more important than protecting the (long term) profit potential of a company with a good reputation.

Frankly, any company that says they're a technology or software business should be building these kinds of systems. They can grab FOSS implementations and build on top, or hire people who build these kinds of systems from the ground up. There are plenty of people in platform engineering in the US who could use those jobs. There's zero excuse other than that they don't want to spend the money to protect their customers' data.


This is not a tools problem; it is an incentive and political problem.

Telecoms will not get fined for this breach, or not fined an amount that is meaningful, so they are not going to care.


I'm not sure why it's either/or to you. Seems to me like we're talking about the same problem but stated from two different perspectives.

Politics has historically incentivized job creation.


Because you came in acting like an Internal Developer Platform would be a fix to their problems when it won't be. In fact, I doubt the lack of an IDP is their problem.

As an SRE, I'm just over everyone running around acting like another tool is going to solve the problem. It's not; incentives need to be in place for people not to be completely terrible at their jobs.

Also, I guess I should admit, I have strong aversion to IDPs. They always become some grue that eats me.


An IDP is not a secrets management tool or vice versa. IDPs are more like connectors of your internal tools/platforms. Their key metrics have more to do with toil reduction and velocity, but they can certainly solve the kinds of problems that lead to a company thinking they need a group of people focusing solely on reliability.

> Also, I guess I should admit, I have strong aversion to IDPs. They always become some grue that eats me.

I am an SRE. I stopped using that title professionally some time ago and started focusing on what makes companies reach for SRE when the skillset is the same as a platform engineer's.

A post I wrote on the subject: https://ooo-yay.com/blog/posts/2024/you-probably-dont-need-s...


Apple argued for years that a mandatory encryption-bypassing, privacy-bypassing backdoor for the government could be used by malicious entities, and the government insisted it was all fine, don't worry. Now we're seeing those mandatory encryption-bypassing, privacy-bypassing backdoors for government being used by malicious entities, and suddenly the FBI is suggesting everyone use end-to-end encrypted apps because of the fiasco they caused.

But don't worry, as soon as this catastrophe is over we'll be back to encryption is bad, security is bad, give us an easy way to get all your data or the bad guys win.


The story is a little longer than this. A bunch of folks from academia and industry have been fighting the inclusion of wiretapping mandates within encrypted communications systems. The fight goes back to the Clipper chip. These folks made the argument that something like Salt Typhoon was inevitable if key escrow systems were mandated. It was a very difficult claim to make at the time, because there wasn’t much precedent for it - electronic espionage was barely in its infancy at the time, and the idea that our information systems might be systematically picked open by sophisticated foreign actors was just some crazy idea that ivory tower eggheads cooked up.

I have to admire those pioneers for seeing this and being right about it. I also admire them for influencing companies like Apple (in some cases by working there and designing things like iMessage, which is basically PGP for texts.) It doesn’t fix a damn thing when it comes to the traditional telecom providers, but it does mean we now have backup systems that aren’t immediately owned.


That's not exactly true. FCC 911 (E911) rules and other government laws require the telcos to have access to location data and to record calls/texts for warrants. The problem is both regulatory and commercial. It is unrealistic to expect either the general public or the government to accept real privacy for mobile phones. People want LE/firefighters to respond when they call 911. Most people want organized crime and other egregious crimes to be caught/prosecuted, etc. etc.


Nonsense. I kindly informed my teenage niece of the fact that all her communications on her phone should be considered public, the nature of Lawful Interception, and the tradeoffs she was opted into for the sake of Law Enforcement's convenience.

She was not amused, nor in the slightest empathetic to their plight. Population of at least 2, I guess.


Make that population of 3. I'm not a fan either. But I'm also realistic. I treat the phone as what it is: malicious spyware. But I realize that most people want the convenience and the safety (of sorts) of dialing 911 and getting the right dispatch.


If law enforcement actually did their jobs, this would be more understandable. I don’t know about you or others’ experiences, but when I’ve called the police to report a crime (e.g. someone casually smashing car windows at 3 in the afternoon and stealing anything that isn’t bolted down), they never show up and usually just tell me to file a police report, which of course never gets actioned. Seems pretty obvious to me that weakening encryption/opsec to “let the good guys in” is total nonsense and that there are blatant ulterior motives at play. To be clear, I’m a strong proponent of good security practices and end-to-end encryption.


There's not nearly enough public information to discern whether or not this had anything to do with stored PII or lawful interception. All we know is that they geolocated subscribers.

The SS7 protocol provides the ability to determine which RNC/MSC a phone is paired with at any given time: it's fundamental to how the network functions. A sufficiently sophisticated adversary, with sufficient access to telephony hardware, could simply issue those protocol instructions to determine the location.


> and end-to-end encrypted their customers' data

Somewhat of a tangent: does anyone have any resources on designing/implementing E2E encryption for an app where users have shared "team" data? I understand the basics of how it works when there's just one user involved, but I'm hoping to learn more about how shared-data scenarios (e.g. E2E group chats like Facebook Messenger) are implemented.


Matrix has a detailed write up intended for client implementers: https://matrix.org/docs/matrix-concepts/end-to-end-encryptio...

It should give you some ideas on how it's done.
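
For a rough feel of the basic pattern these systems build on (heavily simplified; this is not Matrix's actual Megolm scheme, and every name below is made up for illustration): encrypt the payload once with a fresh symmetric key, then wrap that key separately for each member's public key.

    # Sketch of "encrypt once, wrap the key per member" using the `cryptography`
    # package. Real group E2E adds ratcheting, device verification, and key
    # rotation when membership changes.
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes
    from cryptography.fernet import Fernet

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # Each member holds a keypair; the server only ever sees public keys.
    members = {name: rsa.generate_private_key(public_exponent=65537, key_size=2048)
               for name in ("alice", "bob", "carol")}

    # The sender encrypts the message once with a fresh symmetric key...
    message_key = Fernet.generate_key()
    ciphertext = Fernet(message_key).encrypt(b"meeting moved to 3pm")

    # ...then wraps that key for every member's public key.
    wrapped = {name: key.public_key().encrypt(message_key, oaep)
               for name, key in members.items()}

    # Any member can unwrap their copy and decrypt; the server cannot.
    bobs_key = members["bob"].decrypt(wrapped["bob"], oaep)
    print(Fernet(bobs_key).decrypt(ciphertext))

The hard parts in practice are exactly what this skips: rotating keys when membership changes, verifying devices, and forward secrecy, which is where Megolm and the double ratchet come in.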


Thanks! I've added it to my reading list.


You want to search for the "double ratchet protocol", which brings up articles like this[1].

[1] https://nfil.dev/coding/encryption/python/double-ratchet-exa...


Exactly the kind of thing I was looking for, thank you! And thanks for the tip about "double ratchet protocol," that helps a ton.


Intelligence agencies may use that data, but there are plenty of financial incentives to keep that data regardless. Mining user data is a big business.


The best solution to privacy is serious liability for losses of private customer data.

Leak or lose a customer's location tracking data? That'll be $10,000 per data point per customer please.

It would convert this stuff from an asset into a liability.


All of these calls for serious fines, yet no indication of where the fine is to be paid. Fines mean the gov't gets the money, yet the person whose data was lost still gets nothing. Why does the person who was actually harmed get nothing while the gov't, which did nothing, gets everything?


Even better - give the customer the $10k.


This exactly. Data ought to be viewed as fissile material. That is, potentially very powerful, but extremely risky to store for long periods. Imposing severe penalties is the only way to get there; the current slap on the wrist plus an offer of ID theft protection/credit monitoring is an absurd slap in the face to consumers, who are inundated with new and better scams from better-equipped scammers every day.

The current state is clearly broken and unsustainable, but good luck getting any significant penalties through legislation with a far-right government.


Yeah, take an externality, make it priceable, and then "the market" and amoral corporations will start reacting.

Same principle as fines for hard-to-localize pollution.


Corporations' motivations rarely coincide with deep, consistent systems strategy; they largely operate reactively, in a manner where individuals get favorable performance reviews for adding profitable features or saving costs.


They are appropriately motivated in this case: carriers would surely rather have no idea whatsoever about the data they are carrying. The default incentive is that they'd really rather avoid being part of any compliance regimes or law enforcement actions, because that sort of thing is expensive, fiddly and carries a high risk of public outcry.

If they had the option, the telecommunication companies would love to encrypt traffic and obscure it so much that they have no plausible way of figuring out what is going on. Then they can take customer money and throw their hands up in honest confusion when anyone wants them to moderate their customers' behaviour.

They don't because that would be super-illegal. The police and intelligence services demand that they snoop, log and avoid data-minimisation techniques. Given that regulatory demand, it is only a matter of time before these sorts of breaches happen; if the US government demands the data then sooner or later the Chinese government will get a copy too. I assume that is a trade off the US government is happy to make.


While I agree, isn't this a degree of victim blaming? They were hacked by a state actor and every thread ignores the elephant in the room.


They had a backdoor. Someone used the backdoor. You stick your hand in a running lawnmower and it gets chopped off. Nobody is surprised.

Who put the backdoor there? The US government did.


No.

A telecommunications carrier may comply with CALEA in different ways:

    The carrier may develop its own compliance solution for its unique network.
    The carrier may purchase a compliance solution from vendors, including the manufacturers of the equipment it is using to provide service.
    The carrier may purchase a compliance solution from a trusted third party (TTP).
https://www.fcc.gov/calea


CALEA is a mandate from the U.S. Government to backdoor all telecom infrastructure for U.S. LE and intelligence purposes.


> But intelligence agencies insist on having access to innocent citizens' conversations.

Intelligence agencies also stockpile software vulnerabilities that they don't report to the vendor because they want to exploit the security flaw themselves.

We'll never have a secure internet when it's being constantly and systematically undermined.


Yes, but spies are going to spy, so we should focus on getting software built with security by design and not just keep outsourcing to the cheapest programmers who don't even know what a SQL injection is.
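
For anyone wondering what that failure mode looks like, here's a minimal sketch (table and column names made up for illustration): the first query splices user input into the SQL string, the second lets the driver bind it as data.

    # String-built SQL vs a parameterized query. Uses sqlite3, but the idea
    # is the same for any database driver.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
    conn.execute("INSERT INTO users VALUES ('alice', 1), ('bob', 0)")

    user_input = "bob' OR '1'='1"

    # Vulnerable: the input is spliced into the SQL, so the injected OR clause
    # matches every row instead of just bob's.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = '%s'" % user_input).fetchall()
    print(rows)  # all users leak out

    # Safe: the driver binds the value as data, not as SQL.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
    print(rows)  # empty, since no user literally has that name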

Currently, with proprietary software, there's an incentive for companies to not even acknowledge bugs, since it costs them money to fix issues, so they often rely on security through obscurity, which is not much of a solution.



