AT&T says criminals stole phone records of 'nearly all' customers in data breach (techcrunch.com)
1103 points by impish9208 3 months ago | hide | past | favorite | 813 comments



AT&T has 110 million customers. Let's be optimistic and assume that each customer only has to spend one minute of extra time managing their account due to the break-in. That is more than 209 years of lost time.
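The arithmetic holds up; a quick back-of-envelope check:

```python
# One extra minute per customer, across AT&T's ~110M customers,
# converted into years of collective lost time.
customers = 110_000_000
minutes_lost = customers * 1  # one minute each

minutes_per_year = 60 * 24 * 365
years_lost = minutes_lost / minutes_per_year
print(round(years_lost))  # 209
```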

Laws related to data breaches need to have much sharper teeth. Companies are going to do the bare minimum when it comes to securing data as long as breaches have almost no real consequences. Maybe pierce the corporate veil and criminally prosecute those whose negligence made this possible. Maybe have fines that are so massive that company leadership and stockholders face real consequences.


It surprises me that there isn't a single comment pointing out that corporations like AT&T don't collect all that data for fun. It actually costs them a lot of money, but they're legally required to by the government. While everyone is blaming the company, did you not take a second to contemplate how weird it is that you're fine with the government (and now everyone else as well) getting a record of all your phone activity? I'm old; back in my youth we'd have called that a dystopian surveillance state.


There's no federal law requiring AT&T to hold onto this data.

There's possibly a FISA court requirement (too secret to reveal), but AT&T has long been an exceedingly willing part of the government's spying apparatus. It fed these records and Internet data to the feds without any court order, and only escaped legal trouble when Obama, contrary to his campaign promises, gave AT&T, Verizon, and others retroactive immunity.


I'm no longer under this specific NDA, so I can talk a bit about this.

It was well known in the wireless industry that ATT collected and kept the most data of all the carriers: 7 years for text metadata, "7 years" for call history (I put that in quotation marks because it was rumored that ATT kept it indefinitely, though there were technical limitations on restoring data that far back), and 7 years for the contents of the text messages themselves. Verizon was up there as well, but I don't remember the specifics.

The carrier that I worked with kept only 3 days of content for the actual messages, 28 days for the text-message metadata, and 28 days for the call records in their enforcement database. They could, however, pull calling records and SMS envelope information from billing going back 7 years. At the time, we had to implement sharding at the database layer for the warrant database because of the amount of traffic coming in from the calling systems and the volume of queries/data we were sending out, in near realtime, to law enforcement users who paid $10,000/month for access to that data.
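For anyone curious what database-layer sharding for that kind of warrant database might look like, here's a toy sketch. The shard count, naming, and date-based routing are all invented for illustration; the real system was surely far more involved.

```python
from datetime import date

# Toy date-based shard router: each day's call records land on one of
# N database shards, so heavy near-realtime query traffic (like the
# law-enforcement lookups described above) spreads across machines.
# SHARD_COUNT and the naming scheme are made up for this sketch.
SHARD_COUNT = 4

def shard_for(record_date: date) -> str:
    return f"cdr_shard_{record_date.toordinal() % SHARD_COUNT}"

# Records four days apart land on the same shard with N=4;
# consecutive days land on different shards.
assert shard_for(date(2022, 5, 1)) == shard_for(date(2022, 5, 5))
assert shard_for(date(2022, 5, 1)) != shard_for(date(2022, 5, 2))
```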

AT&T wasn't storing this data out of the kindness of their heart, it was a (probably small) revenue stream for them.


Ah, back in the day the FBI would pay our CTO $5,000/hr to talk to and work with them. On top of that, we charged them a monthly colo fee for the equipment they used to collect customer data.

Sometimes they had warrants, but mostly just bought the data.

This started a year or so after 9/11, and the relationship lasted years.


Welcome to the US: a self-proclaimed champion of freedom, but with no respect for privacy. Even the EU is better at protecting privacy than the US.


The EU is much more aggressive at banning and censoring websites, though. I can't recall the last time I ran into a website in the US that was blocked at the provider level (private moderation, e.g. YouTube, is a different story). TikTok is maybe the most famous case, but it's still around and available AFAIK. In the EU, though, I ran into "the government has decided this information is bad for you," complete with a nice notice from the internet provider, all the time. My hunch is that under various pretexts both societies will continue to drift toward more censorship and less privacy, perhaps with some temporary local differences.


It depends on the country; the EU doesn't have uniform Internet censorship laws. Still, most EU countries do better than the US: https://en.wikipedia.org/wiki/Internet_censorship_and_survei...


I've never encountered anything like that while over here.


Retention periods seem like a moot point if the government just slurps every piece of data anyway and stores it indefinitely


Not everyone in law enforcement gets to play with the NSA's toys, though. Some actually have their warrants and subpoenas glanced at by a judge before they get rubber-stamped.


While being briefly "glanced at" by a judge is certainly better than nothing (or just already having the data like NSA), practically it just means law enforcement needs to adapt some generic boilerplate justification text to each request.


Thank you for sharing this, it is helpful context when discussing data security and privacy with regulators and federal Congressional reps.


They keep personal customer details like SSNs indefinitely, even after you're no longer a customer.


They've added windows to it now, but back in the day I always wondered what that windowless skyscraper in Downtown NYC was.

https://nymag.com/intelligencer/2016/11/new-yorks-nsa-listen...


That’s the AT&T Long Lines Building. It probably did have an NSA surveillance closet, but it wasn’t built without windows for that reason. The story I was told (by older colleagues when I worked at AT&T Labs) was that it was built during a time when riots and street violence were more common, so the fortress appearance was to ensure the city could maintain long-distance connectivity during urban unrest.

I believe there was another similar nexus downtown near the World Trade Center, which was destroyed on 9/11. For at least a couple of weeks we had very limited communications and credit cards were hard to use as a result.


It’s built to withstand a nuclear blast. There are buildings like this all over the country (though not in skyscraper format).


Perhaps, but the other version would explain the "nuclear-war-proof" thing.

I am sure the employees were told SOME kind of legend, because that building begs questions.


There was a lot of nuclear war planning around those from the 50s through the 80s.

There are some good sites out there that go into detail, like http://coldwar-c4i.net/


A tall above-ground building with no windows doesn’t seem like a good candidate to survive a nuclear blast.


Long lines buildings were not going to take a direct nuclear hit, but were very robust to handle shockwaves and EMP.

I came very close to buying a long lines microwave relay site, and got to tour it a few times. It had a hardened tower, as well as copper grounding that went deep into the ground. Mining the copper would have paid for the site, but alas.

These buildings were built based on the 1950s threat of Soviet bombers attacking the United States. The New York City metro area was protected by air defense missile sites and interceptors. The air defense systems would air burst small nukes in wartime to destroy bomber formations.

Once the threat shifted to ICBMs in the 1970s hardening was moot.


Yup, an underground structure would normally be a better design. But that would quickly get flooded with water in Manhattan in the event of a nuclear blast followed by loss of power.


Americans like to complain about the GDPR, but it exists to prevent exactly this sort of thing. Data cannot be retained longer than it's actually needed or required by law, and can't be sold without explicit permission. Law enforcement can't just buy data: they need to have legal authority to get it (though in many countries the bar for that is too low). In most cases the cheapest and easiest approach is to collect as little data as possible, and to delete it as soon as it's not strictly needed. This greatly reduces the compliance burden.
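In practice, "delete it as soon as it's not strictly needed" reduces to something very simple. A minimal sketch, with the 90-day window and record shape invented for illustration:

```python
from datetime import datetime, timedelta, timezone

# Minimal retention-enforcement pass: keep only records younger than
# the retention window. The 90-day window and field names here are
# illustrative, not taken from any specific regulation.
RETENTION = timedelta(days=90)

def purge_expired(records, now):
    return [r for r in records if now - r["created"] <= RETENTION]

now = datetime(2024, 7, 1, tzinfo=timezone.utc)
records = [
    {"id": 1, "created": datetime(2024, 6, 1, tzinfo=timezone.utc)},  # 30 days old: kept
    {"id": 2, "created": datetime(2024, 1, 1, tzinfo=timezone.utc)},  # ~6 months old: dropped
]
print([r["id"] for r in purge_expired(records, now)])  # [1]
```

Data that was deleted on schedule can't show up in a breach.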


You obviously did not follow the recent drama in the EU related to Chat Control V2.

The EU wants LEOs to have access to the contents of your messages/emails/metadata and keeps extending the Chat Control V1 law in order to not have to delete the data that it already has.

You may not be able to buy that data outright but it will be out there and collected by the messaging providers on behalf of the EU.

It even had a data retention law that forced providers to keep up to 8 years of data related to their customers so that it could be handed over to LEOs.

The EU's stance on privacy is just lipstick on a pig. When you peek under the curtain of the EU's privacy laws, you'll see that it's no better here than in the US.


> You obviously did not follow the recent drama in the EU related to Chat Control V2.

It is strange to say they wanted it when we have proof it is voted down and widely unsupported. A part of the EU government apparatus wants it, but taking that and saying the EU wants it is not honest.


The regular Joe doesn't really care, to be honest.

I have talked about it around me a bit and most people who do not work in tech or who don't have a certain interest in online privacy or privacy in general don't know about it.

Of course, when you ask the citizens of the EU if they are cool with being monitored at all times by EU LEOs, they don't want it; but the commission wants it badly. All of this is due to the heavy lobbying that has been happening in Brussels.

The worst part is that this is happening while the EU says it wants data sovereignty and wants to become less dependent on software coming from the US, yet it's ready to get in bed with a US company to deploy this mass surveillance system that is supposedly very good at finding CP.

Nevermind the fact that it means that every bit of online communication will be analyzed and dissected by a corporation that is out of reach of the EU.

But the commission is not stupid: they carved themselves a nice little clause so that they can be exempted from such mass surveillance. I guess they understand that having all telecommunications monitored by a for-profit company from outside the EU could lead to some embarrassing data leaks, just like we saw with AT&T. But they don't care if it's our data that leaks, as long as it's not theirs.

That is why to me GDPR is just a facade. You can't seriously say that you are pro privacy and pro democracy if you keep trying to recreate the Stasi on a larger scale.


CP is just a pretext to keep records on everyone. Good thing everyone over 40 in Eastern Europe still remembers the Stasi and its sister secret police agencies that collected data on everyone and tortured political prisoners. I suspect that climate activists are the next likely candidates for an eventual repression apparatus, so better beware.


Portugal and Spain also aren't fond of their politicians from 50 years ago (their regimes fell in 1974 and 1975, respectively). To add to your point.


The fact that it had to be voted on in the first place, and then was re-presented within six months, is the problem.


I was talking about the GDPR, not EU regulations in general.


How does it look to say on one hand that the EU cares about its users' data, wants users to be able to choose whom it is shared with, has clear guidelines for its storage, and levies fines on companies that breach those terms, and then turn around and come out with Chat Control V2?

Something does not compute. Either you are pro privacy and you act like it or you are not.

It kills me to hear that Europe is pro privacy, because it is not true. Not if you look under the veneer and start peeling back the layers.

These sorts of data breaches should be a wake up call for any state actors who are planning on collecting massive amounts of data on their citizens.

It should make them pause and say: you know, maybe we shouldn't just give away all our data to Russia or China if they manage to break into our systems.

Maybe the best way to avoid such data breaches is to not store the data in the first place.


You're arguing with a lot of things that I didn't say. My comment was entirely about the GDPR.


The US also has laws that, in isolation, would suggest some sort of protection against universal corporate/government surveillance, but they’re no more effective here than in the EU.


At first I read this as GDR


Do Americans complain about the GDPR? I’ve only ever seen them say they wish the US had something similar.


American businesses, especially in predatory industries like adtech, complain all the time.


I would hardly roll that up to all Americans, though. Of course companies whose business model is seriously hurt by the GDPR complain.

Most Americans wouldn't even know what GDPR is, let alone have a reason to complain about it.


They're talking about Americans on this site, who very often work at the companies the GDPR was made to stop from preying on users. Many European users here also work at such companies, so you often see it from them as well, but not as often, since those companies are mostly American.


Ah, got it; I totally missed that context somehow. I hadn't noticed a habit of Americans here complaining about the GDPR, but that's interesting given another common pattern here of libertarian ideas. An American complaining about a different country's internal policies doesn't seem particularly libertarian.


Yes, mostly blaming it for cookie banners (which aren't actually because of the GDPR), but also because it makes them have to think about compliance.


"but the cookie banners look so bad and ugly!"

Well, that's kinda the point, but way too many website owners would rather torture their users with barely compliant implementations than do what the GDPR intended: get rid of third parties.


> way too many website owners rather torture their users

including official EU websites


Which usually have an

[ACCEPT] [REJECT]

without any dark patterns whatsoever.


Also cookie banners are from the e-privacy directive, not the GDPR.


I'm positive informed consent doesn't require cookie banners, but the advertisers opted to make it as annoying as possible so that everyone clicks "accept" just to be left alone. It could have been a browser mechanism that asks only once for all sites and keeps a whitelist.


Let's not pretend that the GDPR fixes this in any way. There are still EU data retention laws in place which force ISPs/carriers/... to store all kinds of data for a reasonably long time.

I don't know who Europe's biggest telco is, but if they got breached, the damage would be just as bad.


> There's no federal law requiring AT&T to hold onto this data.

This is false? https://www.law.cornell.edu/uscode/text/18/2703 https://www.usnews.com/news/articles/2015/05/22/how-long-cel...


There's required disclosure using an administrative subpoena for records over 180 days old if they have them

CALEA requires phone (and later broadband) equipment to conform to wiretapping standards, and if a carrier gets a court order to wiretap it has to provide that data from warrant receipt til warrant expiration.

Landlines have some data retention requirements.

But there's no law on broadband or wireless data retention.

There may well be, and likely is, a secret FISA court order under Section 702 that's been served to the telecoms, but an astonishingly small number of people in government and industry know whether it says they just have to hand over records in real time or whether they need to keep records for some period of time.


That’s interesting, I did not know this about the Obama govt. Do you have a good article about this? (Yes I’m lazy I could search for this)



That was Bush, not Obama.


Being required to do something doesn't justify doing it poorly. AT&T brought in over $3 billion, with a B, of profit in Q1 2024. They have more than enough money to secure their systems; they're not struggling. In March of this year they bought back 157M of their stock. They could have put that money toward security instead, but they didn't: they put it toward enriching shareholders.


Money can't buy competence, at least not at organizational scale.


Sure, but “incentivise a business to do something, and they’re more likely to do it” is still true.


Fine, but they can clearly afford to pay for a lack of it.


A fine is cheaper than solving compliance issues. Many such cases, unfortunately.


Maybe not for execs, but without money you literally couldn't hire competent security folks.


And who makes up a large percentage of those shareholders?


It was Snowflake's lack of security that did this, not AT&T's. Not saying AT&T is a paragon of security or anything, but Snowflake is where the hack took place.


A vendor's security is the client's security. Companies might choose a vendor for CYA in these instances, but if someone decides to send all of their internal business data to a third party, they had better have a pretty good idea what will happen if that third party fails.


Snowflake has the same shared-responsibility structure as any other cloud provider: they provide the enforcement mechanisms, but you are responsible for setting up and protecting your own credentials and permissions. They can't unilaterally impose "security" in the abstract.


What do you know about Snowflake's role in this? According to the article, Snowflake says that they offered 2FA and AT&T didn't use it.

Perhaps that's not the whole story, but if true then blame certainly lies with AT&T to a significant degree.


It's mostly AT&T's fault, but it's partly a side effect of Snowflake making their product easy to use and most of the industry overlooking credential-reuse risks.

Databases are not historically internet-facing, so compromising data also meant getting network access. But Snowflake provided web access to your database; they were an "easy to use" database as a service (a "cloud data warehouse"). Snowflake did not offer a way to host data within your own network or within your dedicated subnets at a cloud provider, so companies could not rely solely on those network barriers to keep out malicious counterparties.

Since this incident, Snowflake has apparently begun requiring MFA for new accounts. If shutting the gate after the horses have left implies culpability, Snowflake has some.


Part of the job is taking responsibility for whom you rely on for security. To take it to the logical extreme: if 'some rando they met in a bar' offered to store AT&T's credit card information for cheap, and it turned out said rando was stealing credit card information? Totally AT&T's fault for not properly vetting them.


The number one job of a company is to enrich shareholders.


Enriching shareholders is exactly what they are required to do.

What, nobody is allowed to make money anymore?


Sure, and then it's the government's job to ensure the shareholders lose their money when the company loses a hundred million customers' records. So yeah, it turns out that when you pay yourself instead of doing right by your customers, I think you shouldn't be allowed to make a profit.


No, they shouldn't be allowed to fuck over their customers at every turn so they can be greedy. The suggestion that we should be more worried about how much money AT&T execs and shareholders make than about the needs of their 100 million customers is bizarre.


Those are not mutually exclusive.


Banks are required to maintain financial transaction records.

Is the argument that governments don't have a good reason to mandate record collection?

Why can't I ask my government to keep me safe from terrorists but also expect that companies will not just be careless with the data they collect as part of that?


The government has no right to track that either. They themselves launder trillions, start wars, and massacre millions; even a drug lord is a petty criminal compared to them. It's clear their tracking of any and all records of any type is more about control than safety, so it should be disregarded as an argument and done away with entirely.


> they themselves launder trillions, start wars and massacre millions, even a drug lord is a petty criminal compared to them

And then people wonder why privacy has a difficult time getting public support.


No, we already know it's because people are complete idiots who not only fall for 'tiger-repelling rocks' but actively demand them.


The government can't keep its own data safe, as the OPM breach showed. Apart from some resignations, nobody faced any serious consequences for that either.


Even more reason for regulatory requirements covering data security for all organisations, both private and public sector.


Many (all?) banks keep financial transaction records for far longer than is legally required. Thankfully, most banks are technically incompetent and unable to easily use data that isn't relatively recent. In fact, one bank I worked for had to load transactions from a CD-ROM archive that contained all transactions in a printable text format (the same format as their printed bank statements): multiple CDs per day, with no indexing or identification beyond the date. Trying to find a specific 10-year-old transaction was very hard work indeed.


I agree. I think it's reasonable to expect companies to safeguard that information from malicious actors.


I don't agree. I don't think it's reasonable to expect it, because companies show over and over that they cannot do it. And let's face it, the only reason your company hasn't fallen victim to a data breach or ransomware is that you haven't been seriously targeted yet.

We need to change our approach. We need to look at why these kinds of data are valuable, and then make them not valuable. Then nobody will bother with hacking to get it.


This data is valuable primarily for spam mitigation and perhaps customer profiling.

Expect every SMS and MMS sent or received to be part of a spam mitigation and profiling program where it's stored indefinitely.

Apple not encrypting RCS is likely due to similar factors, where they have seen existing spam problems on RCS that are much harder to root out when you have end-to-end encryption.


In my not-so-humble opinion, the biggest problem with phone numbers in general is the ability to spoof any number. Please correct me if I'm wrong, but STIR/SHAKEN is only available on newer equipment, and even then there's no good way to trace the origin of a phone call. This is beyond ridiculous, and leadership is clearly asleep at the wheel.

There needs to be a firm timeline -- maybe a year, maybe a decade, I don't know the details -- but something that allows customers to transition to a system where all calls can be traced through the network with a 100% guarantee.

Step zero is actually having a process/protocol where any phone is tamper evident meaning we can tell 100% that this call came from this operator and the operator knows the call came from this user.

Perhaps the first phase allows individual users to opt in. We would ask our operators to route only those calls and texts that positively identify themselves as fully traced under whatever new protocol replaces SS7/Sigtran, so the origin of a call or text is positively identified. If that guarantee isn't available, route the call to a spam inbox somehow.
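Something like that opt-in rule is partly expressible with STIR/SHAKEN as it exists today: the SIP Identity header carries a PASSporT (a JWT, RFC 8225) whose "attest" claim says how strongly the originating carrier vouches for the caller. Here's a hedged sketch of the routing decision, skipping the signature check a real verifier would do against the carrier's certificate:

```python
import base64
import json

def attestation_level(passport_jwt: str) -> str:
    # Decode the PASSporT payload (the middle JWT segment) and read
    # the "attest" claim: "A" = full attestation, "B" = partial,
    # "C" = gateway. NOTE: a real verifier must also validate the
    # signature against the originating carrier's certificate.
    payload_b64 = passport_jwt.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64 padding
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return claims.get("attest", "C")

def route(passport_jwt: str) -> str:
    # The opt-in policy sketched above: only fully attested calls
    # ring through; everything else goes to a spam inbox.
    return "ring" if attestation_level(passport_jwt) == "A" else "spam-inbox"

# Demo with a fabricated, unsigned PASSporT payload:
payload = base64.urlsafe_b64encode(
    json.dumps({"attest": "A", "orig": {"tn": "12025550123"}}).encode()
).decode().rstrip("=")
demo_token = f"header.{payload}.signature"
print(route(demo_token))  # ring
```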

Then the hard part I'm guessing is fixing all the defects?

The second phase is to say that after a certain date, no operator in the US is allowed to relay calls from legacy systems. This will likely take many years, and I don't know how we'd handle international calls and texts. But at some point we have to put our foot down and say enough is enough.


> Step zero is actually having a process/protocol where any phone is tamper evident meaning we can tell 100% that this call came from this operator and the operator knows the call came from this user.

This basically doesn't work because the mapping between phone numbers, users and operators isn't exactly 1:1:1.

Some businesses have a single number that they use as the Caller ID on all their calls, despite having a corporate HQ in New York, a branch in New Orleans, and a customer-support call center in New Delhi. All of these use different carriers and are based in different countries, yet they're all legally authorized to use that number.

If you want to read more about why this is such a hard problem to solve, see https://computer.rip/2023-08-07-STIRred-AND-SHAKEN.html


> ...yet they're all legally authorized to use that number.

But why? I get that they want a unified appearance, but as a phone subscriber I want to know whether it's BigCo calling from New Delhi or BigCo calling from Chicago.


Amazing article about why phone spam is so much harder to fight than email spam.

Thank you for sharing it!

Now I need to learn SS7 signaling.


Finally, some sense. My first thought when reading the article was: why are we even allowing these companies to collect that data in the first place?


How would they bill customers and other providers for usage if they didn't keep call/text metadata?


These are records from 2022. The hack wasn't carried out the second the calls were made. You really need to keep the records that long to do your billing? That's absurd.
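Right: billing only needs the current cycle. A toy rating pass (with the per-minute rate and record fields invented for illustration) shows why a 2022 record is irrelevant to a current bill:

```python
# Toy billing: rate one month of call records. Once the cycle is
# billed, the raw records serve no billing purpose. RATE_PER_MINUTE
# and the record shape are invented for this sketch.
RATE_PER_MINUTE = 0.05

def rate_cycle(cdrs, year, month):
    total = sum(
        (c["seconds"] / 60) * RATE_PER_MINUTE
        for c in cdrs
        if c["year"] == year and c["month"] == month
    )
    return round(total, 2)

cdrs = [
    {"year": 2024, "month": 6, "seconds": 600},   # 10 minutes
    {"year": 2024, "month": 6, "seconds": 120},   # 2 minutes
    {"year": 2022, "month": 3, "seconds": 9000},  # old record: never consulted
]
print(rate_cycle(cdrs, 2024, 6))  # 0.6
```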


I don't think it is. I assume everyone gets hacked eventually. It's really hard (I would argue impossible) to make a 100% secure computer system, and if they're operated by people, you're terribly vulnerable.


You are more likely to be struck by lightning than to be affected by terrorism at all.


Pish posh. They also sell that data at an incredible markup, and without the knowledge of their customers, to anyone who'll pay, including governments and their cutouts.


Why they hold it and how they protect it are valuable conversations. But their customers deserve something akin to security regardless of the why.


Spam mitigation and management is a huge bugaboo in wireless networks today.

The big three wireless carriers in the USA today formed a cartel called The Campaign Registry that seeks out TINs/EINs and the SSNs of the owners of Sole Proprietorships and LLCs as part of a lengthy approval process to be allowed to send texts.

It's a great extrajudicial rent-seeking machine that bans any SHAFT content (sex, hate, alcohol, tobacco, firearms, and anything tangentially related), with hefty fines for anyone they feel has crossed those boundaries.

The morality police are running amok on our telecom networks here in the USA, and they also want all the data they can get, along with bribes from businesses.

Ajit Pai created the opening for this mess, and the current FCC has done nothing to clean this up (though given recent SCOTUS rulings, who knows if they ever had the authority...)


Tangent, but it's ridiculous that sex is lumped into the same group of undesirables as firearms, alcohol, tobacco, and hate.


That T-Mobile is out here slapping spam-mitigation blocks on phone numbers that received SHAFT content from numbers on T-Mobile's own network is pretty ridiculous. But silently blocking, with no appeal or escalation path, is just how we let companies operate these days.


I've never heard of this, and cursory web searches don't seem to be turning up anything relevant (although that's admittedly not saying much with the state of search lately). Can you explain how the law requires this level of data retention?


Apparently they'd uploaded their customer data into something called Snowflake to do some kind of analysis on it, but it wasn't particularly well secured. They haven't said why they were analysing the data, but there's no indication that it had anything to do with government demands.


"Legally required by the government" to keep securely. If you can't keep to the rules, don't play the game. I'm sure any other telecom would be glad to take the market share.


That's a good point. Had they valued citizens' privacy, they would have done the opposite: made it illegal for network providers to store customer data that isn't essential to providing the service. But I guess creating a dystopian surveillance state is more of a priority.


Sure - pretty well every corporation you purchase a service from is required to store your credit card information as well. But there are stiff penalties from the government and credit card processors for unauthorized access to that information; consequently, it's rarely stolen.

Your address, cell metadata, phone number, email address, and passwords are leaked pretty much constantly, though.

It's not that corporations are incompetent. The laws and regulations mean it's not worth the cost to treat your personal information with any real respect.


> store your credit card information ... but there are stiff penalties from the government and credit card processors for unauthorized access to that information; consequently, it's rarely stolen

Citation: The Onion?

The Payment Card Industry Data Security Standard (PCI DSS) is the main information security standard that organizations processing credit or debit card information must abide by. The guidelines established in PCI DSS cover how to secure data-handling processes.

So here are the top 5 info breaches:

https://www.goanywhere.com/blog/the-5-biggest-pci-compliance...

To be fair, if what happened to Heartland happened more often, PCI compliance would be taken more seriously, and breached less often.


I'm not saying it doesn't happen. Credit card data is too valuable to never be stolen. I am saying that ~37 versus >500 is a hell of a difference in how frequently things get stolen [0].

You pointed out that there are guidelines for holding that information; I'm saying there are consequences [1]. I'm following that up by saying that the consequences for mishandling other customer information are not nearly as severe: they do not result in six-figure fines.

I'm saying the severe consequences to mishandling CC data have led to the incredible disparity shown in the first paragraph

[0] https://haveibeenpwned.com/PwnedWebsites

[1] https://resourcehub.bakermckenzie.com/en/resources/global-da...


Most places don't actually store or process anybody's credit card information anymore; all they have is a Stripe token, which on its own is useless to a hacker.
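A sketch of why the tokenization pattern works. This isn't Stripe's actual API; the names are invented to show the shape of it: the merchant's database holds only an opaque token, and the mapping back to the card number lives with the processor.

```python
import secrets

# Stand-in for the processor's vault (not the merchant's database).
# The "tok_" prefix and tokenize() API are invented for illustration.
class ProcessorVault:
    def __init__(self):
        self._cards = {}  # token -> card number, held only by the processor

    def tokenize(self, card_number: str) -> str:
        token = "tok_" + secrets.token_hex(8)
        self._cards[token] = card_number
        return token

vault = ProcessorVault()
token = vault.tokenize("4242424242424242")

# All the merchant ever stores is the opaque token. A breach of this
# record yields nothing chargeable without the processor's vault.
merchant_record = {"customer": "alice", "payment_token": token}
assert "4242424242424242" not in str(merchant_record)
```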


The government isn’t distributing my data to everyone else (so far). For profit companies have a pretty massive list of breaches so far.


You are forced to give your personal data to the government. You don't have to give your data to any company. That's a huge difference.


Only if you cut all ties with civil society and live solitary.


Only dead fish flow with the stream.


Do yourself a favor and accept that phone records have never not been recorded, and that the data is mostly available for purchase. The company is to blame because it is complicit or negligent in the bespoke surveillance state, probably both.


Welcome to a post-9/11 world. Privacy has been dying for a long time. The general population doesn't care anymore; they freely give up everything to big tech anyway.


> how weird it is that you're fine with the government getting a record of all your phone activity

I don't like it, but I accept it as the lesser evil. I'm from Europe, and I believe the reported numbers of prevented terror attacks. The agencies need data access for that. Not good, but necessary.

But are you aware that Meta, Google, Apple, MS, etc. collect every kind of information about every user of Android, iPhone, WhatsApp, Insta, Facebook, or Windows? Phone manufacturers and huge apps like TikTok do as well. The kind and size of that data is crazy beyond imagination. I don't care that the government can get access to my WhatsApp messages when some of the most irresponsible companies collect and use everything to their advantage. Are you really naive enough to think that Meta doesn't analyse its gigantic data lake, including billions of WhatsApp messages, to predict the results of elections? That is the real danger to democracy.


> I don't care if the government can get access to my WhatsApp messages when some of the most irresponsible companies, collect and use everything to their advantage.

This is all voluntary. You give those companies your data. You don't have to. I use GrapheneOS and don't use any of those socials, for example.


The problem comes as people start shoving more and more DRM around, whether it be Google Play Protect, the new Android WebView Media Integrity API, or an eventual reboot of the Web Environment Integrity proposal.


I understand, but I also won't participate in those and will actively work to undermine them.


Hurting the shareholder is the only option to actually fix anything. Until the C-suite and board are forced to face the music caused by rich people being parted from their money, they'll just continue patting themselves on the back and giving themselves bonuses.


If bankruptcy can clear liabilities then your suggestion won't help. The shareholders are usually gone by the time the bill comes due: it's often cheaper to go bankrupt. And there's a whole private equity industry revolving around taking dirty liabilities and slowly bankrupting a company to squeeze the last dollar out before shutting down.

Look at the same problem with environmental disasters created by corporations. The problem with security liabilities is similar: externalities are hard to get shareholders to pay for.


You don't need to try to seek value from the shareholders in a bankruptcy to hurt them. (Doing so would be going against rule of law and as for changing the law, well do you hear that giant sucking sound of funds fleeing your economy?) Just having their holding's value go to zero is sufficient.


Maybe irrelevant for security flaws, but the point is that externalities can easily exceed market capitalisation. Trading while insolvent is illegal, but that is hard to judge with liabilities. Examples are Johnson & Johnson (public) and Purdue Pharma (private).


The C-suite and board are not the shareholders.

The shareholders are mostly the pension funds that will eventually pay your money and the banks that already do.


I agree.

Shareholders can vote and decide the direction of a company. They should also be held liable for any problems the company causes.

If the company is fined it should come out of company and then shareholder pockets. I might even add courts should be able to award damages by directly fining share holders.

If a company does something severely illegal then very large shareholders should risk jail time.

It’s your company after all as a shareholder. You own it.

It’s no different if your dog bites someone or child breaks the law. You have to pay the fines.


Under that twisted logic Israel would be perfectly justified with nuking Palestine. They voted for terrorists, therefore they should be liable for everything their country caused.


The people “whose negligence made this possible” are probably just rank-and-file employees. Careful what you wish for. I know I sure wouldn’t want to be legally liable if my software were vulnerable to something I didn’t know about.

Maybe a reasonable first step is third-party standards, audits, and certifications around data security to make privacy- and security-conscious consumers aware of what a company is doing. If consumers really find value in that, then they will preferentially deal with that company, and other companies will follow suit.


> The people “whose negligence made this possible” are probably just rank-and-file employees. Careful what you wish for. I know I sure wouldn’t want to be legally liable if my software were vulnerable to something I didn’t know about.

This isn't what's being suggested.

Higher ups set the incentive structures that result in dwindling security resources.

If their ass is on the line, they will actually listen to the developers and security experts telling them they are vulnerable, instead of brushing them off to divert resources that boost the reports which determine their bonuses.


I understand that isn’t what’s being suggested. What I’m suggesting is that there is perhaps a distortion of the common idea of who is “responsible” for something. I think the idea that fault bubbles up to the highest level in the chain of command is silly. Fault is distributed across the entire chain, and if we want to address this issue, we can’t ignore that.

To draw an analogy, if someone’s 16-year-old child is texting while driving and gets in a car accident, is their parent to blame? Most people could see that there is some fault on the part of both the parent (for perhaps not emphasizing enough the importance of safety while driving), and the child (for doing something they know is unsafe). And this fault exists in a continuum; maybe the parent told their child every day to not text while driving, and the child did it anyway. Maybe the parent never told them anything about safe driving habits, so the child had never considered that texting while driving was unsafe.

My point is that pretending that the highest C-suite executive is wholly responsible for everything that goes on in the company is extreme. Everyone along the entire chain of command has to do their part to ensure secure products are shipped - the executive needs to prioritize it, hire the right people to develop a plan, ensure people are enforcing the plan, etc., all the way down to the software engineers, the cleaning staff, etc. If one link in that chain breaks, the entire system fails, and it could be because of a weakness anywhere along the chain.


I agree with your view completely. There is nuance, and there should often be blame at multiple levels. At the same time, there is a basis for the common view, which is that higher ups create the incentive structures from which most things flow. If it turns out the incentives here were well made by the brass, I'd retract my jumped-to conclusion. But it rarely turns out that way, which is why I jumped to it.


> Higher ups set the incentive structures that result in dwindling security resources.

What if this isn't the problem at all? What if a company invests a huge amount in data security, but still gets owned? That happens all the time.

I don't understand why people leap to the conclusion that these events are inevitably the outcome of neglect.

> If their ass is on the line, they will actually listen to the developers and security experts telling them they are vulnerable, instead of brushing them off to divert resources that boost the reports which determine their bonuses.

Again, why are you making this assumption? But let's say, for the sake of argument, that you're right. Now we go implement some draconian, top-down "you must be secure or the C-suite goes to jail" mandate. Corporations, out of fear of liability and prosecution, lock up tight, and refuse any and all changes that might undermine their security posture. Nobody builds anything new, because why take a risk?

Expensive "security expert" consultants start appearing out of nowhere to help with "compliance" with the new rule, and companies pay for them -- because it provides a veil of responsibility for the company, even if the consultant is useless. Worse, a certain percentage of these "experts" will be hucksters (or more likely: morons) themselves, and will always tell people that "they are vulnerable", because that essentially ensures a payday. You can't prove that a system is "secure", so who can say otherwise?

If you doubt that any of this is plausible, I suggest you take a hard look at our existing top-down security rules (e.g. ISO 27000, HIPAA, GDPR, PCI DSS, NIST SP 800-88 and SOC2, just to name a few) and the bureaucratic industrial complex that has erupted around them, and ask yourself it these things actually make you safer. I guarantee that AT&T was "compliant" by any conventional IT standard with these, employed an army of IT staff to document said compliance, and otherwise invested a huge amount of money in that kind of performative nonsense. Because that's what every company does.

But they still got owned.


If one breach exposed all of their data, they don't practice the well-known security (since ancient times) technique of never having all your goodies in one location.


The attack vector was an exposed Snowflake instance.

Snowflake's entire business model is based on selling the idea of "data lakes", "data warehouses", etc...

The basic premise of data lakes, etc, is to replicate and dump all your company data into easily queryable database instances, like Snowflake. I'm not disagreeing that this is a stupid thing to do, but just pointing out that this is something basically every Fortune 500 company is doing. Because big data is cool. (Or was cool)

Specifically since the article called out no 2fa... I'm actually very surprised how difficult 2fa is to set up with Snowflake. It's been 2-3 years since I set up a Snowflake instance, but I remember there being no obvious or easy way to enable it. (I wanted it on, but at the time enabling it was a multi-hour task, not just a setting to enable)
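Whatever Snowflake's controls look like today, the unglamorous part of enforcing MFA is just auditing for accounts that haven't enrolled. A toy sketch of that kind of check (the user records here are invented; a real version would pull them from the warehouse's user metadata, e.g. an account-admin user listing):

```python
# Hypothetical audit: flag active accounts with no second factor enrolled.
# Each one is a single-password door into the data warehouse.
users = [
    {"name": "etl_service",    "mfa_enrolled": False, "disabled": False},
    {"name": "analyst_jane",   "mfa_enrolled": True,  "disabled": False},
    {"name": "old_contractor", "mfa_enrolled": False, "disabled": True},
]

def mfa_gaps(users):
    """Return names of enabled accounts that haven't enrolled in MFA."""
    return [u["name"] for u in users
            if not u["disabled"] and not u["mfa_enrolled"]]

print(mfa_gaps(users))  # ['etl_service']
```

Note the flagged account in this made-up example is a service account, which is typical: automated ETL logins are exactly the ones that tend to dodge MFA, and reportedly the kind of credential abused in the Snowflake-related breaches.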


One password fail should never expose everything.

2fa is not the answer. The answer is compartmentalization. Just like a battleship is divided into many watertight compartments, because someone will poke a hole in it.

The Titanic was designed to stay afloat with its first four compartments breached; the iceberg opened at least five.
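The battleship idea translates to something mundane: shard the data and scope each credential to a single shard, so one leaked password bounds the blast radius. A toy illustration (the shard count and record format are invented):

```python
import hashlib

# Toy compartmentalization: customer records are split across independent
# stores, each guarded by its own credential. Compromising one credential
# exposes one compartment, not the whole fleet.
NUM_COMPARTMENTS = 4
compartments = [dict() for _ in range(NUM_COMPARTMENTS)]

def shard_for(customer_id: str) -> int:
    """Deterministically assign a customer to one compartment."""
    digest = hashlib.sha256(customer_id.encode()).hexdigest()
    return int(digest, 16) % NUM_COMPARTMENTS

def store(customer_id: str, record: str) -> None:
    compartments[shard_for(customer_id)][customer_id] = record

for i in range(1000):
    store(f"customer-{i}", f"call records for customer-{i}")

# An attacker holding one compartment's credential sees roughly a quarter
# of the records instead of all of them.
breached = len(compartments[0])
total = sum(len(c) for c in compartments)
print(breached, total)
```

The trade-off is real: cross-shard queries get harder, which is exactly the convenience a monolithic data lake sells you.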


Yeah, security checkboxes don't necessarily result in good security. One option is to still make companies liable for security breaches, regardless of what meaningless checkboxes they may have checked, and then trust that they'll figure it out. Real liability would shift things from theater to weighing actual risks and costs.

Another option is we can empower red teams (security researchers) to test the security of all systems even without permission, so long as they report their findings responsibly.

It's currently quite convenient for companies. They get to deny security researchers from testing their security, and they also have no liability if a security breach does happen. Or, to make it personal, if I want to investigate the security of a company by trying to hack their system, I risk going to jail, but if they lose my data in a breach I have no recompense.


I'm saying that's the same thing. It's probably worse, actually, because imagine yourself at the head of a company the size of AT&T. What would you do -- what could you do? -- that would ensure that some random employee would never do something that makes you vulnerable to attack? How terrified would you be?

It's impossible to ensure what you're asking for. That's the problem with all of these kinds of rules, but worse, because at least something like SOC2 is providing a safe haven if you do the right things. Making companies "liable" for breaches is tantamount to saying that companies will never develop software again, because the risk is simply too great. Certainly, if I were in that kind of a situation, I'd rarely use a third-party service, and never use a startup, or a smaller company. I can't be responsible for the risks of AT&T, and every software company AT&T uses. That's crazy!

We're going to have to come to terms with the fact that "security" is a verb, not a noun, and that data leaks are going to happen, even in the best secured institutions. Punitive rules might improve security in the marginal case, but only at huge costs industry wide.


If a company the size of AT&T finds themselves unable to move or do anything without creating security vulnerabilities, then it's time for the company to stagnate and go out of business, leaving fertile ground for more competent companies to replace them.

It would be kind of nice if companies would say "we've grown to our level of competence, we cannot safely do more, so we will keep doing the same, no more, no less, and make sure we do it well, and we will allow innovation to come from other companies". Instead, they say "let's recklessly chase every fad and who cares about poor security, it's not our liability".


Yeah, that's some nice rhetoric, but...I guarantee that, right now, some part of your personal software stack has a security vulnerability. If you write software for a living, some piece of software you maintain has a critical vulnerability.

Do you want to be held personally responsible when they're breached? If your wireless access point is hacked because you waited too long to update it, and it is used to launch DoS attacks, do you want to be liable? Do you want to be held personally responsible when you click on the just-good-enough phishing attack in your corporate inbox?

If not, then consider why you'd ask the same thing from a corporation of tens of thousands of people.


> Do you want to be held personally responsible?

No, I don't. I don't want anyone to be held personally responsible.

> consider why you'd ask the same thing from a corporation

I'm not asking the same from companies. I don't consider putting liability on a company the same as putting liability on an individual, and neither do our laws. Companies may pay liabilities out of profits, companies may have to sell assets, companies may go out of business and people lose their jobs. None of that is the same as someone being personally liable.


> If your wireless access point is hacked because you waited too long to update it, and it is used to launch DoS attacks, do you want to be liable? Do you want to be held personally responsible when you click on the just-good-enough phishing attack in your corporate inbox?

This is a strawman; corporations are supposed to have a process in place to make sure stuff is up to date. You don't jail a random rank-and-file guy for a huge breach.


> Making companies "liable" for breaches is tantamount to saying that companies will never develop software again, because the risk is simply too great.

Making humans liable for car crashes is tantamount to saying that humans will never drive again, because the risk is simply too great.

Replace with any complex activity - nuclear reactor development, aircraft, etc.

How is it that in your head data breaches are this special human activity where no one should ever be held accountable?


> I don't understand why people leap to the conclusion that these events are inevitably the outcome of neglect.

Because that’s what happens 90% of the time.

In most cases I’ve seen, there are zero people on the team who could describe themselves as having any kind of expertise in security. Developers explicitly know about at least several vulnerabilities, but management doesn’t care to allocate resources to fix them, etc. that’s what’s happening in most shops.


This reminds me of the story where someone accidentally deletes the database and there are no backups. Who's at fault? The individual IT employee who made a mistake, or the entire organization (especially leaders) who created a situation where one person could delete the database and there are no backups?


There is a whole field devoted to this called governance.


You can safely assume that every company or org that fell to a ransomware campaign didn't have proper backups, because a clean restore wouldn't make the news as a serious outage.

The percentage without backups seems to be crazy. I've only read about the central bank of Zambia being able to restore from backups; everyone else was down. All those responsible should be fired.


I’m baffled that anyone is even asking the question..

Anyone reading this, if you are of the “well, the employee who typed the command is to blame!” opinion, could you please reply to this comment? I need to know what you think the purpose of a hierarchy is in the workplace.

..needless to say, responsibility for your direct reports is yours. If they fuck up, you fucked up. You have the choice to hire and fire at will. You choose who has access to take chances. You own the wins and the losses. If you’re a good leader you redistribute the wins and dissolve the losses. It’s the entire job.

It’s 2024. There are no kings or dictators in the workplace.


It's a rhetorical question that's effective because the answer is obvious.


You would think so, but one time an undergraduate IT guy in my school's computer lab essentially ran an `rm -rf` on all the students' home directories 2 weeks from the end of the semester. It turns out the lab's backups weren't working. The email from the department was pretty quick to throw that kid under the bus.


Are you trying to say that a university IT department was a toxic workplace? I'm shocked, shocked I tell you!


My read is that the responsible people are corporate officers and executives--people who actually choose what to work on and are substantially rewarded by the corporation.


1. Absolute carelessness with customer data.

2. Little to no consequences for the executives.

3. Lawlessness around such events. Very poor consumer protection laws in this country.

4. Cybersecurity-illiterate leadership making cybersecurity decisions.

5. Investing absurdly little in cybersecurity, meeting only bare-minimum standards.

6. Or all of the above?


This is already an established principle in other engineering fields. If a civil engineer screws up and a building collapses, both that engineer and the engineering firm are liable.

Why should the software industry be any different?


When I was working in a (non-software) engineering role, when I raised a technical concern it was taken seriously. As a software engineer, when I raise a technical concern it is brushed off, and if I push it my job is at risk.


because software developers aren't engineers? -- elephant in the room.


Some call themselves Hackers, they love to bypass processes.

And some call themselves code monkeys, they know how to follow orders, but have no incentives at all to think by themselves for proper security.

Only a tiny fraction call themselves engineers.

I favor non-licensed free professions, but if you're free you should be able to follow best practices and be able to think for yourself.


Huh, that's strange, because I have a BSE and graduated from engineering school. Sure, the history-major bootcamp grads aren't real engineers and we need to weed them out of the industry, but some of us are actually real engineers.


I think the issue isn't so much the programmers who aren't engineers as it is the managers who don't treat programmers like engineers.


AT&T bought back a ton of shares of its own stock in March. It's likely that shareholders won't feel the effect of this security breach because of those buybacks (over a medium term time window).

How about instead of even more meaningless standards without teeth that don't affect the people pushing for profits over essentials like security, regulators impose punishments that actually affect the investors that ultimately create these perverse incentives in the first place? Nobody should be profiting off of a company that does wrong by over a hundred million people.


Direct liability to the front line / middle management which is cleared in exchange for defined levels of cooperation with criminal, regulatory, and civil investigations aimed at landing higher-ups would be a useful development.


Nonsense. The people who should hold responsibility are the people who have decision-making power and derive financial benefit from these choices. A rank-and-file employee is a scapegoat given the incentives at play in the system, even if they nominally wrote the vulnerable code.


No, the people whose name is attached to budget decisions and higher level company direction that leads to this are the ones who are responsible.


The law that would have prevented this breach would be to make it illegal for telcos to sell customer data. The reason AT&T was feeding ALL the data to Snowflake was to sell their customer's location and social graph to marketers. It is unconscionable to me that this in not currently the law.


Do you have a source for that claim?



Thanks!


Here's Snowflake bragging about helping telcos sell location data: https://www.snowflake.com/blog/telecom-data-partnerships/


So if I buy a car with an advertised top speed of 200 mph, it's given that I must be violating speed limits when driving it?


Imagine a world where suffering a data breach meant you could no longer collect, let alone hold or sell that class of data for a decade, and this rule preempted laws that required data gathering.

AT&T would be nearly equivalent to an E2E service overnight.

The lines wouldn’t be encrypted, so the NSA would still tap them, but at least there would be zero mutable storage in the AT&T data centers (except boot drives, SMS message queues, and a mapping between authorized sims and phone numbers).

In this day and age, why do they even maintain call records? They don’t need them for billing purposes, which was the original purpose of keeping them.


Genuinely chonky fines seem to be the answer to this problem, as it aligns incentives with rewards/penalties (if you’re lax about how your company approaches user data then you’ll be at financial risk).

Piercing the veil to prosecute those “responsible” seems like it would just incentivise the business to carry on as normal but with employees that are contractually designated (i.e. forced) to be fall guys if anything goes wrong.


If PG&E has taught us anything, it's that utility companies can literally blow up and burn down cities, and no amount of fines or paying for the damage done will matter to them.

Monopolies can always just pass the cost of the fine to their customers.


Penalties would also incentivise businesses to hide data breaches.


That is the worst case outcome of penalties, and it carries significant risk of whistle blowing. The default case will be compliance, because compliance is simply cost of business, something businesses understand well.

Meanwhile, currently businesses are doing shit all about data breaches except handing out the absolutely useless "2 years identity monitoring", so from a consumer view it really can't get much worse.

In general, the idea that penalties make people hide their bad behavior, so we shouldn't penalize bad behavior, is just extremely misguided. Because without penalties, we normalize bad behavior.


Are strong whistleblower protections what’s needed to balance this?

As an Australian I am absolutely horrified that we continue to put people in jail who have blown the whistle on the government here, and it makes me think that large organisations are absolutely terrified about strong whistleblowing protections.

This all suggests to me that whistleblower laws would be very effective.


Whistleblower is a very revealing thing to call Mr. Assange.


David McBride and Richard Boyle. Both tried the official channels then whistleblower channels. Both made some mistakes but all in the public interest. Aussie gov treated them shamefully.


Witness K and Bernard Collaery came to mind when I was writing it. They blew the whistle on illegal espionage used to pillage the resources of our tiny neighbour, and the government threw the book at them. Absolutely shameful.


I understand that Wikileaks is controversial but I don't think there is any dispute that he has acted in the role of whistleblower to some extent. But that's not really the point I'm trying to make, so I've removed the reference.


I think I'd argue for a sui generis classification, which does partake somewhat of the whistleblower, but it seems like calling Napoleon a general. He was certainly that, at times. Apologies for the nit-picking in any case.


Another example would be David McBride who was in the Australian military and blew the whistle on war crimes. He recently got sentenced to jail while actual exposed war criminals are free.


Make laws that protect whistleblowers from civil and legal penalties, punish those who attempt to illegally hide data breaches, including jail time in the worst cases. That would solve it. Individual employees don't care enough to hide it (they just work there), and leadership wouldn't dare risk a whistleblower which would cause them to face criminal penalties.


So you make it a crime to hide the existence of a data breach for more than X amount of time for the purpose of figuring out exactly what happened. I don't know off the top of my head how long X should be. 30 days? 60?


Sounds like a recipe for willful ignorance. Why put any effort into checking for data breaches if it would only hurt you?


Which should result in even larger penalties, hopefully those penalties can also be levied against the individuals that were associated with hiding the data breaches. Mid level manager that gets an email from Snowflake saying that there's been unusual activity who then hides that information or doesn't look into it? Fine 'em (and AT&T). Mid level manager tells a random engineer that DOES look into it and finds that they've been hacked but hides it? Fine AT&T and this person even more!


This appears to be an argument against law itself.


GDPR has fines for data breaches


Nothing happened to Experian, and those clowns have breaches every year. The USA has so far proved that we don't care about privacy and don't believe data is real.


> don't believe data is real.

Oh but they do, try taking some data that belongs to a corporation and see how quickly law enforcement responds. Aaron Swartz found out the hard way

It’s only when you steal personal data that nobody cares.


The AT&T app and website are so bad it takes way longer than 1 minute to log in to e.g. pay your bill. The United States needs to raise the bar for large-cap negligent operators and fine the company enough to make shareholders listen.


In approximately 100% of cases, if your intuition is to say "this company is too large and should be fined/regulated more," what you should actually say is "this company is too large and should be broken into many smaller entities."


Or nationalize parts of it, as has been done for electricity, water, and the courts.


I understand the desire to push for this, but I also know first hand it would make things worse, specifically around competency. I've had countless calls and meetings with state and federal agencies that could not grasp even the simplest of technical issues, and this was with the very people charged with responsibility for their systems.

On the state level: explaining to the California DMV, repeatedly, that they may not use RFC 1918 address space in public MX records and expect emails and faxes to get through. That was an actual battle. Or arguing and escalating with three-letter federal agencies that we will not "install their server certs" on our tens of thousands of servers, and that they must install their intermediate certs correctly. I wish I could share who that was because nobody would believe me. There are countless battles I've had with these agencies.

I do not want more of these people running critical and sensitive systems. It's bad enough that leaders in companies like AT&T bend over backwards to just hand data over to them. I've had to hand over the data, looking the other way, giving unfettered, unlimited, unmonitored access to mainframes without warrants. This was at a company that was gobbled up by AT&T. Or being told to let a scammer with access to an SS7 link scam infinite people because they were paying for the link. Governments running these systems would be the wolves running the hen-house.


We should break up AT&T. Oh wait. We already tried that, and it re-consolidated? Ow.


Part of breaking them up is supposed to be not letting them re-consolidate. Mergers involving any entity that already has 15% market share should just be flatly disallowed.


This is not the AT&T Judge Harry Greene broke up. This AT&T is a roll up of most of the RBOCs the breakup created.


Yes, a man never steps in the same river twice.

Not really the point though, is it.


The correct way is to follow what all other engineering and trade (medicine/law) already follow.

Some software engineers are licensed. A company must hire these software engineers, and any changes to what data is saved, or how it is saved, must be signed off by these engineers. If a breach occurs, an investigation follows, and if the licensed software engineers are found to have been negligent, they lose their license. If they are found to be at fault, they face criminal penalties.

This, of course, must be coupled with penalties for management personnel as well.


This kind of system has consistently led to regulatory capture by the licensed industry. Even the mechanism of operation de facto assumes a significant gatekeeping barrier to getting a license, since otherwise companies would just pick whoever is most willing to cut corners to save costs, or pay the license fee to get greenhorns certified because that costs less than adding two years to the development schedule to do it well. Making everything cost quadratically more than it already does is not a good solution.

What you want here is for them not to be holding the data to begin with. The solution to which is to just let customers sue them. Not for $0.30 and "free credit monitoring" but for actual money. Then companies can choose whether they want to mitigate their risk by doing actual security or by not storing the data to begin with, but most likely the second one is their better option.


> This kind of system has consistent led to regulatory capture by the licensed industry.

That is indeed the intention. To counteract the financial incentives of shareholders (which result in bridges collapsing or data breaches) with the financial and legal incentives of a special class of employees - licensed engineers.

The reason this works better than letting people sue after the accident has already happened [1] is that it gets the incentives right. In the sue-after model, the responsibility for making the product safe before an accident happens is quite diffuse across the whole organization, and the decision makers (the C-suite) do not in fact have the expertise to determine whether the product is unsafe.

Giving licensed engineers veto powers over the entire C-suite and the shareholders is indeed how you concentrate responsibility at a single point. This type of licensing model has worked wonders in civil engineering, electronics engineering, law, medicine etc in improving safety standards for the public. Software engineering is not special.

[1] Think letting the victims of the bridge collapse suing as the only method of preventing bridge collapses. This is not how things operate.


> To counteract the financial incentives of shareholders (which result in bridges collapsing or data breaches) with the financial and legal incentives of a special class of employees - licensed engineers.

But now you have a special class of employees whose incentives are wrong in the opposite direction. They make decisions that are overly conservative, because they lose their license if the bridge collapses but by design no one can overrule them if they unnecessarily make the bridge cost four times as much.

This not only makes the bridge cost many times more, it thwarts the original intention because now building new things is so expensive that we avoid doing it and instead continue to use the old things that are grandfathered in or maintained well past the end of their design life, which is even less safe in addition to being less efficient. This is why so much of our infrastructure is crumbling -- we made it prohibitively expensive to build new.

> This type of licensing model has worked wonders in civil engineering, electronics engineering, law, medicine etc in improving safety standards for the public.

And these things are now unaffordable as a result. Ordinary people have been priced out of legal representation and are being bankrupted by medical bills. It's not a solution, it's just a new problem.

> Think letting the victims of the bridge collapse suing as the only method of preventing bridge collapses. This is not how things operate.

The reason this doesn't work in that specific case is that the damage from a bridge collapse can easily exceed the entire value of the bridge-building company, so then if you go to sue them they just file bankruptcy. Which they know ahead of time and then don't have the right incentives to prevent the damage. That hardly applies to the likes of AT&T, which is not going to be bankrupted by a large damages award, but is going to want to avoid paying it out.

> In sue-after model the responsibility before an accident has happened to make the product safe is quite diffuse across the whole organization, and the decision makers (C-suite) do not in fact have the expertise to determine if the product is unsafe.

Neither are they expected to. They're expected to hire someone who does, but then they have the incentive to balance the cost against the harm, so they neither end up with the incentive to abandon quality nor the incentive to make everything prohibitively expensive.

A real issue here is limited liability. The CEO comes in, hires low quality workers or puts them under unreasonable time constraints, gets a bonus for cutting costs and is then at another company by the time the lawsuit comes. Forget about licensing, make them personally liable for what happened under their watch (regardless of whether they still work there) and you'll get a different result.

Limited liability should be for shareholders, not decisionmakers.

That way the same party suffers both in the case of unreasonably high costs and in the case of unreasonably low quality and doesn't have a perverse incentive to excessively sacrifice one for the other.


>But now you have a special class of employees whose incentives are wrong in the opposite direction. They make decisions that are overly conservative, because they lose their license if the bridge collapses but by design no one can overrule them if they unnecessarily make the bridge cost four times as much.

This is not a bug. Having fewer bridges that don't collapse is better than having one fall over every day, which is what's happening with data leaks now.


It's a bug.

We now have < 10 megabanks in the US, any of which can bring down the entire US economy.

Instead, we could have thousands of smaller banks. Tons of smaller banks is the natural state of things, like restaurants. This was true before the banking cartel, TARP, ZIRP, and most recently PPP (a genius backdoor to bail out Wall St.). In such a system, any one collapsing bank won't bring the entire system down.

Having fewer bridges means that, inevitably, when they collapse there will be far more victims and the event will be catastrophic.

Tech is one of the few bright spots in our moribund economy. Don't introduce a cartel that will blow up eventually.


>Having fewer bridges means that, inevitably, when they collapse there will be far more victims and the event will be catastrophic.

I honestly don't even know where to start with this.


It isn't safer to make building new bridges prohibitively expensive, because the result is that new bridges don't get built and then existing bridges are overused and extended beyond their design lifetime. And they're carrying several times more traffic when they ultimately fail.

It's the same for all the rest of it. You're not helping people to nominally make something better unless the better thing is actually available to them.


No, because making bridges prohibitively expensive means you are mono-culturing engineering.

You are only succeeding at keeping 1 engineering firm alive, who can afford to bid and build mega-expensive projects.

Eventually, the megafirm will adopt poor practices. And now, those practices will literally spread out across every single bridge built in the world. You now have a mono-culture of engineering that includes cancer as part of its DNA. Congratulations - you have granted a monopoly to a firm that sells ticking time bombs to your own citizens

This is, in essence, NASA, banking, Fannie/Freddie.

Errors are a part of nature. They must happen. We are humans and fallible. The question, when errors do happen, is how big and hurtful they will be. Small or big?

You can't buy your way out of human error and hubris. This is the fatal conceit.


It's a bug. You can't make everything cost more without bound or ordinary people can no longer afford to make rent. There has to be balance.


You can't make houses cheap without bound either, you turn them into death traps quite quickly.

Everything related to personal data is currently at the slum without firecodes level. But it also has a few unregulated nuclear reactors in the mix.


This is the excuse used to justify the regulatory capture. There is a mile of difference between simply having fire exits vs. minimum parking requirements, de jure or de facto minimum unit sizes and density constraints. You need something that can distinguish these things, not something that provides the trash choice between none of them or all of them together.


Using regulatory capture as an excuse why we can't stop babies from eating lead is the most brain dead take from the American left since they replaced class with race.


I'm not American but isn't a fetish for deregulation a hallmark of your political right, not the left?


Up until recently I agreed with this position because I, like you, thought that this was how licensed engineering disciplines worked. I thought that if you sign off on something you put your career on the line, making the potential penalty for signing off on bad designs worse than the one for saying no to a pushy boss.

Then the MAX crashes happened and Boeing is about to negotiate a sweetheart plea deal and there's absolutely zero talk of any of the engineering licenses that were used to sign off on the bad systems getting revoked.

If the licensing system doesn't actually include a threat of career-ending penalties for knowingly signing off on bad designs, or if the system allows executives to bypass engineer signatures, then it seems like the general consensus on here is right: it's useless overhead at best and regulatory capture at worst.


Wait, you're saying the software engineers behind the MAX 8 debacle were licensed? What licenses?


If AT&T had spent more on security, this would not have happened. I absolutely do not believe individual engineers should be held liable.


The way this works in civil engineering is that the engineer refuses to sign off on an unsafe design. If costs have to increase to address the issue, then they do. If management doesn't budge, then they bleed money while twiddling their thumbs staring at an unapproved design.


Be careful what you wish for… civil engineering is a terrible, awful, bureaucratic profession.

The crowd here on HN tends to make fun of governments and banks and similar regulated entities… but smug startup culture would not exist if you got what you say you want.


To be fair, AT&T, Equifax, United Health, and Peraton are probably as far away from startup culture as it gets.


"Move fast and break things" isn't an appropriate philosophy for critical public infrastructure


how do you know? maybe they were spending too much on security, but it was going to useless or counterproductive measures like crowdstrike, compliance training, or virus scanners. money is no substitute for competence, as steve jobs's death shows


If you're going to do that, you're going to need to get universities to treat computers as an actual applied discipline. Physical engineers at least get some practice working with numbers around real materials.

I've met too many recent university graduates who don't even know you need to sanitize database inputs. Which, not their fault, but the university system as it currently exists in relation to software is not set up to do the thing you're asking.
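For what it's worth, the fix for the specific gap mentioned above is a one-liner in most database APIs: pass user input as bound parameters rather than splicing it into the SQL text. A minimal toy sketch using Python's stdlib sqlite3 (not tied to any particular incident):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

# A hostile input that would break out of a string-interpolated query.
evil = "x'); DROP TABLE users; --"

# Safe: the "?" placeholder makes the driver pass the value separately
# from the SQL text, so it can never be parsed as SQL.
conn.execute("INSERT INTO users (name) VALUES (?)", (evil,))

row = conn.execute("SELECT name FROM users").fetchone()
print(row[0] == evil)  # True: stored verbatim as plain data
```

The unsafe alternative, `f"INSERT INTO users VALUES ('{evil}')"`, is exactly the pattern graduates reportedly reach for first.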

The alternative is to have a really long exam (or a series of them like actuaries do?). Here are 10 random architectures. Describe the security flaws of each and what you would change to mitigate them.

The other change that needs to be made, is that engineers need to be able to describe the bounds of their software. This happens in the other engineering disciplines. A civil engineer can design a bridge with weight capacity X, maybe a pedestrian bridge. If someone builds it and drives semi-trucks over it, that's kinda their problem (and liability).

We would need some sort of way to say "this code is rated for use on an internal network or local only" and, given that rating, hooking it up to the open internet would be legally hazardous.


I actually agree with you but this is a dangerous opinion to express on this forum, where move fast and break things is seen as the one true path.


I am not a historian, but I expect there would have been significant pushback as well by other types of engineers back in the day when their profession was regulated.

It's not surprising. But what should not be surprising is that sooner or later, software engineering will be regulated [1]. The question is simply whether software engineers will let politicians do it to them in an unreasonable way, or whether they do it themselves in a more reasonable way.

[1] Well, it has already begun. The EU has the notion of the GDPR Data Protection Officer: https://www.gdpreu.org/the-regulation/key-concepts/data-prot...


Nothing stops companies or individuals from getting audits or from developing a voluntary license/certification. Consumers who want the added protection can pay the premium. But forcing an entire industry into regulatory capture where it's unnecessary seems foolish.


Privacy/protection of personal data is slowly being recognized as a Right across the world, as it should.

The standard legal philosophy across the world is that you can't actually predicate protection of a right on ability to pay (under reasonable limits). So, for example, nobody gets to build unsafe bridges and charge less for it, because it violates the right to life.


Are there any other analogies around 'endangering'? Because that's what happens when this info leaks to criminals.


You want a P.Eng (or equivalent) to sign off on anything that involves data? That won’t solve the problem but will dramatically slow down the pace of innovation. And all the while, it will funnel money further into regulated professions instead of into actually securing software.

This is precisely how we end up in a world where we’re all running twenty five year old software.


> This is precisely how we end up in a world where we’re all running twenty five year old software.

Linux?


Are you claiming that Linus Torvalds is a P.Eng (or equivalent)? He isn't, so that's a very poor comparison. As for Linux, it has changed constantly over those 25 years, so that's not a coherent argument either.


Where do you draw the line? Does that mean you need a license to write Excel formulas?


The license is only for protection of user personal data - names, dob, address, id documents data, credit card data etc, and not, say, how many upvotes you have on HN. The vast majority of sites and software do not need to store any of this data. And the vast majority of code that is written has nothing to do with user personal data.

The larger legal change that has to happen is:

1. Do not store user personal data if you don't have to (EU already has laws about it)

2. If you store user personal data, you have to guarantee up front that it is stored and processed in a safe way (what I am suggesting). Of course, exceptions can be made for sites/software with a small number of users, or some time-bound leeway can be given, so startups can grow before having to hire a licensed engineer.


Who is ultimately responsible, though, when data is stolen in this fashion? The analyst who ETL'd this to Snowflake without MFA enabled? Or maybe the employee who inadvertently installed a data sniffer that captured usernames and passwords? Do you really want to send your coworkers to jail for falling for a phishing attack?

If you want corporate-death-sentence-level fines, are you willing to work in an environment with exceedingly strict regulatory oversight? Will you work from an office where the computing infrastructure is strictly controlled? Where you can't bring personal devices to work? Where you have no privileges to alter your workstation without a formal security review?

Why not advocate for more resources to capture and try the actual criminals? Or, as elsewhere in this thread, simply make this kind of data collection illegal?


> If you want corporate-death-sentence-level fines, are you willing to work in an environment with exceedingly strict regulatory oversight? Will you work from an office where the computing infrastructure is strictly controlled? Where you can't bring personal devices to work? Where you have no privileges to alter your workstation without a formal security review?

If it means that privacy and safety is actually respected then yes. Working in an environment with "exceedingly strict" regulatory oversight would be a reassurance that observed violations will be dealt with in a timely fashion instead of put in the backlog and never addressed.

> Why not advocate for more resources to capture and try the actual criminals?

Yes, why not? While we're at it, let's try and capture the easily-spotted criminals who perform the most trivial of attacks to servers. Just open up your SSH server logs and start going after and preventing the fecktons of log spam that hide real attacks.

> Or, as elsewhere in this thread, simply make this kind of data collection illegal?

Making something illegal is great! Unfortunately it doesn't really do anything to help people after the data has been stolen a second time (the first time being by AT&T, if collecting it were illegal).


If the data collection becomes illegal, what's the penalty for breaking that law? We're back to figuring out an appropriate punishment.


AT&T is up there with defense contractors in how intertwined their businesses are with the DoD. They're basically an extension of the intelligence agencies here in the US. They don't face consequences, much like Boeing.


Personal data cannot be secured. The only way is to not store it. That will (notionally) cost companies lost revenue from being unable to mine and sell it. Only government can make laws against a company taking your personal information and selling it. Even passwords shouldn't be stored by a company.

The years-of-lost-time argument is disingenuous. Over that number of people, 209 years of lost time out of 700 million years of lives is nothing.


There are lots of companies that take security seriously and don’t lose their customers data. Which is good, because there are companies that need to hold customer data.

Companies that don’t take security seriously and lose peoples data should be punished accordingly.

Companies that sell customers data should be identified.

But if we treat them all the same, then we let the bad companies off the hook, and punish the responsible companies unfairly.


there are companies that have already had their customers' data exfiltrated and will have it exfiltrated in the future, companies that will only have it exfiltrated in the future, and companies that are about to be dissolved. there is no fourth category. computer security is not currently achievable; the best we can hope for is to contain the damage from the inevitable breaches and reduce their frequency

new security holes get introduced faster than old ones get patched, and that will remain true for the foreseeable future


I’d take it a step further. If a technology is impossible to secure it shouldn’t be used. Maybe it’s time to rethink all the parts of our lives we’ve handed over to software.


What current technologies do you believe are possible to secure?

I am sympathetic to the overall sentiment here, but between any web browser + server stack you are looking at hundreds of millions of lines of code written in unsafe languages.

Add on the human factor and there is just no hope of really securing this.


sel4, tweetnacl on an avr, pdf/a, html3, gzip, lwip, etc., running on purpose-built hardware. too bad it's not self-hosting yet


Whether or not it's disingenuous, it's our time, which wouldn't have needed to be wasted in the first place had they not stored phone records.


I agree with that. I just don't like big numbers being used to cause emotional responses without proper context. Probably on a spectrum, but it's my beef :)


That's quite a CPNI incident. Wonder what their fine will be. [0]

[0] https://www.tlp.law/2023/08/01/fcc-proposes-20-million-fine-....


Alternatively, we need sharper teeth around the consequences of this data breach.

Why are we using SMS for 2FA everywhere? Why does AT&T have to have residential addresses and KYC for all of its customers? These are the things that should be banned. The government official that mandated all this crap should be forced to sleep with scorpions for 9 years and stink bugs for 3 more years.

Then the leak would be of much less consequence.


Exactly. There is currently no meaningful penalty when a company fails to protect private data or violates its own privacy policies, so of course they continue to do these things because each either makes them more money or costs them less money.

Prison time being on the table for officers of the corporation is the only thing that will change this behavior.


But hey, in 5-7 years there will be a settlement to the inevitable class action lawsuit and each of these customers (that fills in a form, ensuring only a small fraction actually do) gets a $3.75 credit on their next bill. The lawyers will get 30% of the settlement and each walk away with several million dollars. Justice! chef’s kiss


If we go with the logic of the grandparent comment, where we can measure the harm by adding up a minute of wasted time across millions of people to get a big aggregate number, it seems commensurate that each of those people be compensated for their minute of wasted time with a few dollars.
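The arithmetic behind both comments is easy to check; the $30/hour figure below is an arbitrary assumption purely for illustration:

```python
# The top comment's figure: 110M customers, one lost minute each.
customers = 110_000_000
minutes_lost = customers * 1
years_lost = minutes_lost / (60 * 24 * 365)  # minutes per year
print(round(years_lost))  # 209 aggregate years, as claimed

# Per-person compensation for that minute at an assumed $30/hour:
per_person = 30 / 60
print(per_person)  # 0.5 dollars each
```

So a few dollars per claimant is indeed the same order of magnitude as the valuation implied by the "209 years" framing.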


Idk man, the lawyers who made the rules say it's a great system.

Like, it might be an unending atrocity beyond all human comprehension, but, $666/hr soothes a lot of conscience and quiets a lot of tongues.


This is from an email I got yesterday from PayPal:

"Google Referrer Header Privacy Settlement has sent you $0.11 USD."


This is deeply accurate


If you're going to start holding companies accountable for wasting people's time then AT&T has a lot more to answer for than this one little event.


Everyone says what needs to happen. Every thread has this same exact post. We all know what needs to happen. How _would_ this ever happen? This is a board of innovators -- innovate!


No one here can force AT&T to spend more money on IT. If they do, even briefly, everyone involved will be laid off and outsourced within a few years.


we all know this does not need to happen, if 'we' are people familiar with the quality of software in already-regulated environments


Yeah, you're right. Data breaches are essentially just slaps on the wrist to companies like AT&T. Maybe it's possible to fine them based on the proportion of the userbase that was affected and the profits they generated for a certain time period.

I wonder if this will push companies to stop using external vendors to store and process data. If companies stored all of their info in house, it would prevent the case where compromising one vendor compromises everyone's data. But it would also mean that each individual company needs to do a good job securing their data, which seems like a tall ask.


The reason some companies use external vendors is to outsource the risk.


I propose that the fines should be based on what the data would be sold for on a dark web forum. These breaches should be exponentially more expensive, which would incentivize companies to retain less sensitive data.


33% of all living Americans? How can it be that much?


There are basically 3 carriers in the US: AT&T, T-Mobile, and Verizon; other carriers use the networks of those 3.


Recount


The breach here was not against AT&T but against a cloud computing company called Snowflake.

Cloud computing companies, so-called "tech" companies, and the people who work for them, including many HN commenters, advise the public to store data "in the cloud". They encourage the public, whether companies or individuals, to store their data on someone else's computer that is connected to the open internet 24/7 instead of their own, nevermind offline storage media.

Countless times in HN threads readers are assured by commenters that storing data on someone else's computer is a good idea because "cloud" and "_____ as a service". Silicon Valley VC marketing BS.

"Maybe pierce the corporate veil and criminally prosecute those whose negligence made this possible."

Piercing the veil refers to piercing limited liability, i.e., financial liability. Piercing the veil for crimes is relatively rare. Contract or tort claims are the most common causes of action where it is permitted.

There is generally no such thing as "criminal negligence" under US law. Negligence is generally a tort.

As for fines, if there were a statute imposing them, how high would these need to be to make Amazon, Google, Microsoft or Apple employees and shareholders face "real consequences"?

Is it negligent for AT&T to decide to give data to a cloud computing company such as Snowflake? HN commenters will relentlessly claim that storing data on someone else's computers that are online 24/7 as a "service", so-called cloud computing, is a sensible choice.

Data centers are an environmental hazard in a time when the environment is becoming less habitable, they are grossly diminishing supplies of clean water when it is becoming scarce, and these so-called "tech" companies are building them anyway.

Data centers are needed so the world can have more data breaches. Enjoy.


>The breach here was not against AT&T but against a cloud computing company called Snowflake.

It wasn't really a Snowflake breach. If it's like the other Snowflake data leaks, AT&T didn't set up MFA for a privileged account, and someone got in with a password compromised by other means. For smaller companies I'd be willing to put more blame on Snowflake for not requiring MFA, but AT&T is large enough to have their own security team that should know what they are doing.

This is yet another wakeup call for all companies - passwords are not secure by themselves because there are so many ways for passwords to be leaked. Even though SMS MFA is weak, it's far better than a password alone.
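As an aside on how little machinery a second factor actually requires: the common app-based variant (TOTP, RFC 6238, layered on HOTP from RFC 4226) fits in a few lines of stdlib Python. This is an illustrative toy, not a production implementation:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over a 64-bit counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

def totp(secret: bytes, t=None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HOTP keyed on the current 30-second time window."""
    t = time.time() if t is None else t
    return hotp(secret, int(t // step), digits)

# RFC 6238 Appendix B test vector (SHA-1, 20-byte ASCII secret, T=59s):
print(totp(b"12345678901234567890", t=59, digits=8))  # -> 94287082
```

The point stands that even this simple construction defeats the stolen-password attack described above, since the code changes every 30 seconds and never transits in reusable form.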


If it helps to understand the comment, change the word "breach" to "unintended redistribution of data".

The comment is about the risk created by transferring data to a third party for online storage.

It is not about the specific details of how data is obtained by unauthorised recipients from the third party.

The act of storing data with third parties who keep it online 24/7 creates risk.

Obviously, the third parties will claim there is no risk as long as ["security"] is followed.

If we have a historical record that shows there will always be some deficiency in following ["security"], for whatever reasons,^1 then we can conclude that using the third parties inherently creates risk.

1. HN commenters who focus on the reasons are missing the point of the comment or trying to change the subject.

If customer X gives data to party A because A needs the data to perform what customer has contracted A to do, and then party A gives the data to party B, now customer X needs to worry about both A _and_ B following ["security"]. X should only need to trust A but now X needs to trust B, too. If the data is further transferred to third parties C and D, then there is even more risk. Only A needs the data to perform its obligation to customer X. B, C and D have no obligations to X. To be sure, X may not even know that B, C and D have X's data.

A good analogy is a non-disclosure agreement. If it allows the recipient to share the information with third parties, then the disclosing party needs to be concerned about whether the recipient has a suitable NDA with each third party and will enforce it. Maybe the disclosing party prohibits such sharing or requires that the recipient obtain permission before it can disclose to other parties.^2 If the recipient allows the information to be shared with unknown third parties, then that creates more risk.

2. Would AT&T customers have consented to their call records being shared with Snowflake? The people behind so-called "tech" companies like Snowflake know that AT&T customers have no say in the matter.


> Laws related to data breaches need to have much sharper teeth. Companies are going to do the bare minimum when it comes to securing data as long as breaches have almost no real consequences. Maybe pierce the corporate veil and criminally prosecute those whose negligence made this possible. Maybe have fines that are so massive that company leadership and stockholders face real consequences.

I really dislike this attitude.

AT&T were attacked, by criminals. The criminals are the ones who did something wrong, but here you are immediately blaming the victim. You're assuming negligence on the part of AT&T, and to the extent you're right, then I agree that they should be fined in a bigger manner.

But the truth is, given the size and international nature of the internet, there are effectively armies of criminals, sometimes actually linked to governments, that have incredible incentives to breach organizations. It doesn't require negligence for a data breach to occur - with enough resources, almost any organization can be breached.

Put another way - you trust a classical bank, with your money, to secure that money from criminals. But you don't expect it to protect your money in the case of an army attacking it. But that's exactly the situation these organizations are in - anyone on Earth can attack them, very much including what amount to armies. We cannot expect organizations to be able to defend themselves forever; it is an impossible ask in the long run. This has to be solved by the equivalent of a standing army protecting a country, and by going after the criminals who do these breaches.


No, the root cause is not that AT&T was "attacked, by criminals"; there's a much wider issue involving Snowflake and multiple customers. The full facts are not in yet.

AT&T's data was compromised as one of Snowflake's many customer breaches (Ticketmaster/LiveNation, LendingTree, Advance Auto Parts, Santander Bank, AT&T, probably others [0][1]), which occurred and were disclosed in April 2024 (EDIT: some reports say as far back as October 2023). Supposedly these happened because Snowflake made it impossible to mandate MFA; some customers had credentials stolen by info-stealing malware or obtained from previous data breaches. Snowflake called it a “targeted campaign directed at users with single-factor authentication”. The Mandiant report tried to blame an unnamed Snowflake employee (a solutions engineer) for exposing their credentials.

How much responsibility Snowflake had, vs its clients, is not clear (for example, seems they only notified all other customers May 23, not immediately when they suspected the first compromise). Reducing the analysis to pure "victims" and "criminals" is not accurate. When you say "criminally prosecute those whose negligence made this possible", it wouldn't make sense to prosecute all of Snowflake's clients but not Snowflake too. Or only the cybercriminals but not Snowflake or its clients.

[0]: The Ticketmaster Data Breach May Be Just the Beginning (wired.com) https://news.ycombinator.com/item?id=40553163

[1]: 6/24 Snowflake breach snowballs as more victims, perps, come forward (theregister.com) https://news.ycombinator.com/item?id=40780064


I think the simple explanation here is likely not that Snowflake has some giant undisclosed breach allowing access to its customers' data, but that Snowflake instances are just insecure by default in fairly basic ways.

Snowflake built its business on making it really easy for data teams to spin up an instance and start importing a massive amount of their org's data. By default, the only thing you need to access that from anywhere on the internet is a username and a password. Locking down a snowflake instance ends up requiring a lot more effort.

And very few users actually end up interacting with Snowflake directly -- they're logging into a BI tool like Looker, which accesses Snowflake behind the scenes. So the fact that an org's Snowflake instance doesn't require being on the VPN or logging in via Okta/Azure AD/whatever SSO can fly under the radar pretty easily. Attackers realized this and started targeting Snowflake credentials.

Seems similar to all the S3 breaches that have come out over the years -- it's not that S3 has some giant security hole (in the traditional sense) -- it was just really easy to throw shit on S3 and accidentally make it totally public.


Yes, like I said, Snowflake apparently knew very few of its many customers were using MFA.

Reports say password-stealing breaches were happening as far back as Oct 2023. But Snowflake didn't notify people (customers, FBI, SEC) until May 2024.


> Supposedly these happened because Snowflake made it impossible to mandate MFA

What's crazy is that Snowflake made MFA enforcement available only 5 days ago.


I think the implicit assumption is that the vast majority of these breaches are obviously preventable (basic incompetence like leaving a non-password-protected database connected to the public internet is common).

A better analogy is not a bank defending against an army, but a bank forgetting to install doors, locks, cameras, or guards. _Yes_, the criminals are the root cause, but human nature being what it is it's negligent to leave a giant pile of money and data completely unprotected.


> I think the implicit assumption is that the vast majority of these breaches are obviously preventable (basic incompetence like leaving a non-password-protected database connected to the public internet is common).

Some breaches are certainly preventable. But is that the case here? I didn't see the technical details, I think they aren't released yet, but this is the conclusion everyone seems to jump to automatically, without necessarily good reason.

More importantly - these companies employ thousands of employees, all of whom could be doing something wrong that causes a security threat. And there are thousands, maybe tens of thousands, of people trying to find their way in. My point is that even without any negligence, if you have thousands of people trying to hack your company every day for years, it's easy to slip up, even if each slip is preventable in hindsight.

One of the first things you learn in working in security is that there is no perfect security, and you have to understand the nature of the threat you are facing. For these companies, the threat might very well be "North Korea decides to dedicate state-level resources to breaking into your company, plus thousands of criminals are doing the same every day". How is any company supposed to protect against that?


Which implies that the company is negligent in hoarding the data in the first place. If you admit that there is no effective security for sensitive data, you admit that holding the sensitive data in the first place is negligent. Create real sanctions for the loss of the data, follow through on them, and then companies will do better.

Mind you, Snowflake is the problem here, not AT&T, if it was their leak. AT&T is big enough that no meaningful sanctions will fall on them. It's not like they fell out of the sky and killed a bunch of people.


You'd assume someone would notice all the data being transferred.

And if this turns out to be a sophisticated attack then who’s to say they didn’t backdoor a bunch of systems? I heard a talk from a big Norwegian company that got attacked. Every single server, every single switch, every single laptop, all had to be reformatted and reinstalled. I assume that AT&T would have to end up doing the same.


To run with the analogy some more:

The bank is expected to have people trying to break into it. Sure, it would be nice if they didn't, but that's not the reality. As such, failing to provide adequate defences is absolutely a failing on the bank's part.

If they were keeping even more data than necessary, that's an additional failure on their part.


In this analysis, the effort the bank puts toward defending itself is relevant. We wouldn't blame the bank for an army attacking it, but if it left the door unlocked and the neighbour's kids made off with your money, you very rightly would feel differently.


Which does make me wonder why we never really hear of banks being attacked and robbed in this way. One would think they would be the most obvious targets to throw an army of criminals at.


It's pretty much the definition of a functional state that the police can gather resources faster than any group of criminals. By the time you've gathered enough criminals to hold off the police for even a few minutes, and given the sibling comment's point that banks don't store much physical money, there isn't enough loot to go around to that many people.


Banks don't really physically store much money any more.

And more importantly - the police exist. If someone were to actually physically rob a bank, enormous resources would be spent trying to find and capture them, then they'd be thrown in jail.

If they could do the same thing, but also be physically located in another country while doing it, with no chance at all of going to jail... more banks would be robbed!


Crypto Exchange has entered the chat.


If a breach is so inevitable like you say, then it's negligent to store the information in the first place. They're accumulating and organizing data with the inescapable conclusion of handing it out to criminal organizations.


The customers are the victims, not the companies.

You picked the wrong point to counter with. The real problem is that the corporate decision-makers who bear the most responsibility will never be held accountable. They will always be able to shift blame to someone below them in the corporate hierarchy.


Your point needs more emphasis. The idea that the victim is anyone other than the customer is so wrong.

The other points are dubious too.

> But the truth is, given the size and international nature of the internet, there are effectively armies of criminals, sometimes actually linked to governments, that have incredible incentives to breach organizations. It doesn't require negligence for a data breach to occur - with enough resources, almost any organization can be breached.

So given that this is known, why was the data stored such that it could be taken? Why was it kept at all? Oh.. to sell.

> Put another way - you trust a classical bank, with your money, to secure your money from criminals. But you don't expect it to protect your money in the case of an army attacking it.

Yes I do expect that. And it’s protected and insured by my government.


No way. If I were running a small MSP, I was breached, and my customers were infected I'd be sued out of business immediately. The fact that they are a titan means they should be that much more vigilant.


Companies could also stop storing customer information for purposes unrelated to the core product you are using... But that's not going to happen, because mining customer data is still far more profitable, even with the risk of theft or breach.


<< AT&T were attacked, by criminals. The criminals are the ones who did something wrong, but here you are immediately blaming the victim. You're assuming negligence on the part of AT&T,

I am sure LEOs will do what they are paid to do and catch criminals. In the meantime, I would like to focus on the service provider not being able to provide a reasonable level of privacy.

I am blaming a corporation because, for most of us here, this is an ongoing, recurring pattern that we have recognized and that corporations have effectively codified into a simple deflection strategy.

Do I assume the corporation messed up? Yes. But even if I didn't, there is a fair amount of historical evidence suggesting that security was not a priority.

<< Put another way - you trust a classical bank, with your money, to secure your money from criminals.

Honestly, if the average person saw how some of those decisions are made, I don't think a sane person would.

<< But the truth is, given the size and international nature of the internet, there are effectively armies of criminals, sometimes actually linked to governments, that have incredible incentives to breach organizations. It doesn't require negligence for a data breach to occur - with enough resources, almost any organization can be breached.

Ahh, yes. The poor corporation has become too big of a target. Can you guess my solution to that? Yes: smaller corporations with MUCH smaller customer bases and footprints, so that even if the criminal element squeezes through those defenses the corporation made such a high priority (so high), the impact will be sufficiently minimal.

I have argued for this before. We need to make hoarding data a liability. This is the only way to make this insanity stop.


"still-unfolding data breach involving more than 160 customers of the cloud data provider Snowflake."

So what is Snowflake normally doing with all that AT&T data? Redistributing it to "marketing partners"? Apparently. Snowflake's mission statement, from their web site:

"Our mission is to break down data silos, overcome complexity and enable secure data collaboration between publishers, advertisers and the essential technologies that support them."

So this was not, apparently, a break-in to the operational side of AT&T. Someone unauthorized got hold of data they were already selling to marketers. Is that correct?


> break down data silos

[x] Objective Achieved


This would probably be no different if someone like Salesforce had a breach that impacted a large customer of theirs. There are large companies using SaaS services for chunks of their back-office stuff.


It’s a cloud database, mostly OLAP. The AT&T account was secured with a bad password and no MFA.


It's not just a bad password; it was a password that was exposed to an info stealer in some way. It might have been reused or overshared into some system that got exposed. From what I understand, someone got a huge info-stealer dump, started putting two and two together, noticed all these scraped passwords, and tried them on Snowflake.


ATT could be using Snowflake for internal analytics


It's not "internal analytics", because a) 90% of the data was former customers and b) it has location data but timestamps were removed, so it's social-graph information plus location. Start asking yourself what sorts of end-users want to pay for the entire social graph of 77m people, regardless of whether those customers ever make a phone call again.

"Alternate credit scoring, hyper-targeted marketing and more... an emerging trend of companies building partnerships with telecoms to power use cases across multiple industries." was the blurb for the unit Snowflake specially set up for Telco data in early 2023 touting "location data", but this product is not aimed at the telco's use-case; coincidentally this was also around the time Snowflake was touting integration with GenAI.

(It's not "competitor analysis" either, because if it was they would have obscured the 68m former phone numbers to prevent abuse by direct-marketing.)

[0]: "Unlocking the Value of Telecom Data: Why It’s Time to Act" https://www.snowflake.com/blog/telecom-data-partnerships/


Snowflake PR, from the link above: "What makes telecom service providers unique is that they have access to consumer location data. For most other industries, a consumer can go into their phone’s privacy settings and turn off the location access in the smartphone app. But in the world of telecom, as long as the phone is connected to a network, the telecom provider can use triangulation to find the approximate location of a consumer. This is why there is an emerging trend of companies building partnerships with telecoms to power use cases across multiple industries from competitor intelligence, alternate credit scoring, hyper-targeted marketing and more."

That pretty much says it.

It's disappointing that TechCrunch didn't point this out. Nor did the New York Times.[1] Yet it's right there on Snowflake's site.

[1] https://www.nytimes.com/2024/07/12/business/att-data-breach....


- [EDIT: I confused the details of this AT&T breach with the other (2019) one disclosed on 3/2024: 77m AT&T/MVNO customers, 90% of them former customers]. This one is 110m customers, presumably all their current customerbase. But it's still unlikely this is "internal analytics" (for telco business-case) given the timestamps were removed but location data included.

- Yes about Snowflake's cloud telco unit explicitly marketing the fact that telco data contains location. See my updated post: https://news.ycombinator.com/item?id=40949640


Why would the removed timestamps make the data have no value for internal analytics?

It's possible they were operating from a privacy first principle and storing only the exact data they needed for a specific internal objective.


I pointed out previously that the logs contained unobscured phone numbers, so no privacy. You can deanonymize just by reverse-searching the phone number in data broker datasets. They also included the location data for each call/text. Yet no datestamp. That's weird.

As to who would be the end-user for the social graph of 110m users with location data but without dates and times, show us any use-case that's telco-related (not even spam prevention). It's not going to be. You'd want timestamps to disambiguate who are they contacting at work, at home, on their commute, at weekends, etc. So without that it'll be more like alternate credit scoring, surveillance, national-security. And why was Snowflake so eager to promote industries building business models on users' location data? For growth, sure, but who is this mystery industry sector that suddenly sprang up at the same time as GPT-4?


More corroboration from another commenter on TechCrunch: https://techcrunch.com/2024/07/12/att-phone-records-stolen-d...

> [Eric Scott] AT&T was using the data to build a social graph. They didn't record the date and time because they didn't need it.

That isn't "internal analytics". The end-customers who would be buying that aren't telcos. Like I said.
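For illustration of what's being claimed: a call-record dataset of (source, destination) pairs with the timestamps stripped, as described above, reduces to a plain adjacency structure. A toy sketch (the phone numbers are hypothetical, not from the breach):

```python
from collections import defaultdict

# Hypothetical call/text records: (source number, destination number),
# with timestamps removed, as described in the breach reports.
records = [
    ("555-0101", "555-0202"),
    ("555-0101", "555-0303"),
    ("555-0202", "555-0101"),
    ("555-0101", "555-0202"),
]

# An undirected social graph: each node maps to its set of contacts.
graph = defaultdict(set)
for src, dst in records:
    graph[src].add(dst)
    graph[dst].add(src)

# Without timestamps you still recover who knows whom -
# exactly the "social graph" use-case discussed above.
print(sorted(graph["555-0101"]))  # → ['555-0202', '555-0303']
```

The point being: dropping the timestamp column loses nothing for graph-building, which is consistent with the claim that the dataset was shaped for that purpose.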


One of the usecases of Snowflake is to give access to a dataset to multiple teams in your company, while filtering what each team can see : https://www.snowflake.com/en/data-cloud/workloads/collaborat...

Service A can access the dataset with location hidden, Service B can access it with timestamps hidden, and Service C can access the full dataset.

So Snowflake probably has the full dataset, and the account that was used in the breach only had access to a part of it, where the timestamp was hidden.

It's hard to come to any conclusion about what was done with the data on this account.

We can even go as far as saying that the account never used the data but had access to it because it was part of a group of accounts with access to it.


I don't see why any of your reasons preclude analytics.


I said not "internal analytics". Not "internal". The end-customers who would be buying that aren't telcos. Like I said. They are the other (non-telco) emerging industries that Snowflake's blurb hints at.

e.g. a startup doing an Alternate credit scoring model isn't "internal analytics" wrt a telco.


If that's the case then they're probably more upset that they're not getting paid for this data than anything else.


Reading the articles about this breach and the nature of the data in this Snowflake lake, I personally wouldn’t consider this breach a “leak” from the customer perspective - to me the leak is upstream of this breach.

Given the nature of the data in the database and the platform it was stored in, it seems extremely likely this data was not meant to be used internally by AT&T but was instead meant to be used externally by either a 3rd party partner (like advertisers and consumer analytics partners) or a government agency.

In other words, if it were my data in this datastore, I’d consider my data as already having been “leaked” when it went into the store - the issue here appears to be that this data was “leaked” to the wrong people from the perspective of AT&T and the FBI.


That's the issue with dragnet data collection and Snowflake-esque databases - it's never safe to enter any personal information on the internet. Given enough time, any and all of it will be "shared" and used for a third party's financial/political gain.

Doesn't matter if it's AT&T, a bank, or the government. Never under any circumstances can you expect anything sensitive to stay private. This used to be taught as gospel when introducing kids to the internet - it's crazy how much things have changed in 20 years.


Given that most businesses and government agencies now allow remote access (i.e. WFH) all personal information is on the internet already.


I wonder how many times Snowflake has openly transmitted CP from AT&T customers because they are too hungry to ingest and sell data to bother verifying it.


AT&T stock has already bounced back from much of the initial -2.6% drop this morning, so the market thinks AT&T is immune. Meanwhile Snowflake is -3.9% down (they have many other customers than AT&T).

https://www.marketwatch.com/investing/stock/T

https://www.marketwatch.com/investing/stock/SNOW


I never got the impression that the market ever cares about data breaches. It seems most companies are rarely held financially responsible for data breaches anyway.

I would bet any effects you’re seeing in stocks is unrelated to this news.


I agree.

This is precisely why breaches keep happening and will keep happening. It costs money to implement security. There's no cost benefit to spending that time and money, since there are no consequences.

Businesses do not spend money unless it will make them money or save them money.

There needs to be a hefty federal fine on a per-affected-user basis for data breaches. Also a federal fine for each day a breach is unreported.

That money should go into a pool which can be accessed by people who have their identity stolen.


Or a lawsuit could go through where someone wins quite a bit from data leaks. If each person affected sued and won 100k or so, or even 1k, AT&T would definitely be spending money on security.

But it appears $5, or credit monitoring from an agency that also gets hacked, is sufficient for class action lawsuits.


That requires people to be rich enough to sue. It takes a lot of money and time to sue. Almost no one has enough resources to do this. The courts are not an effective way to implement this policy. Unless you only want rich people to be able to get justice.


110M people impacted = class action

The lawyers work on contingency


Class action suits regularly end up getting you "$5" worth of credit monitoring from the exact company who lost your data. It's a joke. Class action suits as they exist today in the US are an abject failure of justice.


If they end up with the company having to pay anything, it is greater than fines imposed by regulatory agencies… who should be doing this job.


showing damages is hard


Imagine the GDPR fine


Up to 4% of annual worldwide turnover. That is not the end of the world either.


And rich people usually do deals out of court: you pay me this and we're OK. It's faster, and both sides usually know each other's capabilities.


Most companies now include clauses that force arbitration and prevent you from using a class action lawsuit. This type of sidestepping of the public justice system should be outlawed, retroactively, with retroactive lawsuits (by extending the statute of limitations), retroactive fines, and retroactive jail time.


“12 months free credit monitoring with auto-renewal”.


> It costs money to implement security.

Yes, but no amount of money will stop the data in a big database from being stolen by someone sufficiently motivated to steal it. It's just bits on someone's disk.

The only true solution is to not create the database. But then what would all the data scientists and their MBA masters do with their time?


In this case it’s pretty tough, because the phone company does need this metadata just to bill people. So they should protect it properly.


It's an interesting issue; it's kind of like software piracy: so what if someone steals the product, we'll still make money on the normal sale of the data. It's only making the news because it was a breach. It wouldn't count as a breach if the exact same party had bought the data outright from AT&T in the first place.


I don't see a reason to record who contacted whom. If it's for billing, just record the duration (if they're not an 'unlimited' customer) and flags for whether the call incurs extra charges (i.e. roaming, international calls).


This is the kind of information that the end user may want.

OTOH this could be an opt in decision with a warning on the consequences


Most breaches are because of developer incompetence. Throwing money at it won't really help. You need better basic security skills.


No two people are incompetent in exactly the same way. Hiring two developers to review each other's code leads to better code because they will often find problems that the other one didn't see. In a well managed organization (admittedly not a trivial caveat these days), more people working on security leads to better security.


Certainly, but for instance no sane developer should concatenate a string into an SQL query unless there is absolute certainty the string is safe. This should be reflex, not a matter of money or time.
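The reflex in question is reaching for parameterized queries instead of string concatenation. A minimal sketch using Python's built-in sqlite3 (the table and values are hypothetical, purely for illustration):

```python
import sqlite3

# In-memory database with a hypothetical customers table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, phone TEXT)")
conn.execute("INSERT INTO customers VALUES (1, '555-0101')")

user_input = "555-0101' OR '1'='1"  # classic injection payload

# Unsafe: concatenating user input into the query string.
# rows = conn.execute(
#     "SELECT id FROM customers WHERE phone = '" + user_input + "'"
# ).fetchall()  # the payload rewrites the WHERE clause and matches every row

# Safe: the driver binds the value; the payload is treated as plain data.
rows = conn.execute(
    "SELECT id FROM customers WHERE phone = ?", (user_input,)
).fetchall()
print(rows)  # → [] - the injection string is just a literal, no match
```

Every mainstream database driver offers this binding mechanism, which is why concatenation is incompetence rather than a resourcing problem.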


People are always going to make bad decisions. Sometimes that is out of a lack of experience or knowledge, which can be fixed by better training (which also requires money). Other times it is out of apathy, laziness, or something else that can't be easily fixed. Either way, time and money can provide extra sets of eyes to find and fix those mistakes before they lead to a breach.


Also, our defaults are the opposite of safe (most languages are still mutable by default, rigorous type systems are wildly unpopular, there is a straightforward way to concatenate strings inside a query, etc.), our disaster-prevention tools and practices seem most often to be targeted at symptoms instead of causes (god forbid we rethink our collective ways and create/adopt tools that are much harder to use incorrectly), and all of this keeps happening because there is no pressure for it to stop. What’s the incentive?

I don’t think that there is a room for a meaningful and honest discussion about individuals in these circumstances.


There is some evidence that it does hurt stock prices:

https://www.comparitech.com/blog/information-security/data-b...

"Stocks of breached companies on average underperformed the NASDAQ by -3.2% in the six months after a breach disclosure"

That said, it's not clear what the long term impact is on stock price (if there is any).


Unfortunately, that analysis seems to have made absolutely no attempt to check whether the results are statistically significant.

Pick 118 random companies at 118 random points in time. It's vanishingly unlikely that the average returns of that group will exactly track the NASDAQ returns over the following 60 days. It might underperform, or it might overperform. An underperformance of 3.2% could easily just be the result of random chance, and have nothing to do with data breaches.


My hypothesis would be that companies with poor operational practices are more likely to underperform the index and have data breaches - in other words, that the study confuses cause and effect.

This wouldn't be that hard to test. I suspect that the breached companies underperformed in the six months before the breach as well as the six months after.


Also, events which are not "just" data-leaks but also interruptions or degradation in regular operations. I suspect investors may be more sensitive to those events and their fallout, and such events more likely to either be caused by bad-practice or to be somehow connected to data-leaks.


It really should be up to the government to fine these companies and pay out to those affected, to disincentivize lax security standards.


Well, I guess we devs should also be looking at ourselves, then. A lot of the lax security comes from us collectively choosing to build applications using cloud services that talk to each other over the public internet. That pretty much describes the so-called "modern data stack."


How would such damages be assessed or proven?


They would be assessed according to rules written by people who are skilled at writing such rules. The rules would be evaluated by looking at data over time and revised as needed by experts in the industry who are as neutral as possible, maybe with some feedback from the public. The courts exist for any contention regarding responsibility.


They are very much related to the news; that's precisely why I linked to the stock charts: AT&T was flat overnight, opened (9am ET) with a -2.6% spike down, and has been recovering since. Their press release appears to have gone out Friday 7am ET, shortly before market open [https://about.att.com/story/2024/addressing-illegal-download...].

Also as corroboration here's MarketWatch: "AT&T’s stock slides 3% after company discloses hack of calls and texts" [https://www.marketwatch.com/story/at-ts-stock-slides-2-9-aft...]


I'm not saying the stock pullback wasn't caused by the hack, but it's also important to note that the MarketWatch article only establishes correlation, not causation.


Most linked financial news is auto-generated and auto-correlated. Lots of "why did..." when nobody knows, and frankly there often is no why. Perhaps that was the day a retirement fund shifted money; who knows.

While this price movement is well correlated, and perhaps even causal, for MarketWatch (and all the similar bottom feeders just chasing ad revenue) it's a case of a broken clock being right. The financial news sites that link recent headlines to stock moves - Yahoo, Benzinga, etc. - are just ad tech now. It is noise.


The market correctly does not care because there is no consequence for the current or prior executives and no financial consequence for the company. All they will do is send out some obligatory notices, mention it in their investor relations materials, maybe offer a year of credit score monitoring, and move on.

We need regulations with massive fines, class action lawsuits (a ban on arbitration clauses), and maybe automatic minimum level compensation to those customers.


I think they will care a lot more when it directly impacts them. If all their text conversations were publicly available that would cause some outrage.


> I never got the impression that the market ever cares about data breaches. It seems most companies are rarely held financially responsible for data breaches anyway.

This might also explain why there's little visible effect on other cloud database services either. After all, the attack is pretty simple and potentially affects any cloud database that allows access from the Internet.


The market doesn't care precisely because there is never any accountability.


I'm certainly not going to defend negligence of data protection but it's extremely difficult to cost as a liability (naively, you might even consider it not a liability at all) without government oversight.


My reading is that the market thinks Snowflake takes the majority of the blame, and the content of the linked article seemed to suggest as much despite having only AT&T in the headline.


It's actually a great signal that everyone knows the punishment will be insufficient.


Insurance takes up a lot of the fallout from data breaches.


Well it’s as if you put your data in Salesforce and Salesforce got breached… maybe you’re bad at picking vendors but the real loss of trust would be on Salesforce.

In this case, Snowflake was also the cause for the Ticketmaster and Lending Tree breaches according to the article so…

real lack of trust in Snowflake now.


Snowflake is a platform. The lack of trust is in whoever configured Snowflake for AT&T.

Credential rotation, SSO, PrivateLink or IP allowlists all should be used with PII.


It's not an expensive problem, and customers aren't going to go anywhere else.

A class action lawsuit is just going to result in everyone’s $2 being given as a free trial of a ringtone add-on from the early 2000s that converts into more recurring revenue.


It’s priced in.


Over in Europe this blanket saving of phone records beyond what is necessary to operate would have been illegal in many countries, and is in general incompatible with the European Convention for the Protection of Human Rights and Fundamental Freedoms outside of active threats to national security and temporary measures overseen by a court.[1]

There's really no reason why any service providers should save this stuff in the first place, and it isn't hard to fix with legislation. Just make it illegal to even keep.

[1] https://curia.europa.eu/juris/document/document.jsf?text=&do...


> Over in Europe this blanket saving of phone records beyond what it is necessary to operate would have been illegal in many countries,

On the contrary, many European countries have mandatory data retention periods that meet or exceed the 6 months of records that were supposedly included in this breach.

Germany has one of the shorter retention periods at 10 weeks, but they still have to keep those records.

Saying that it would be illegal to collect these records in Europe is patently false, and furthermore the record collection is generally mandated for a period of time that depends on the country.

> There's really no reason why any service providers should save this stuff in the first place,

Billing. You need phone records for billing purposes. You need to keep them for a while longer because people will dispute their bills all the time.


> Germany has one of the shorter retention periods at 10 weeks, but they still have to keep those records.

No they don't, because it's "suspended" by the federal network agency until courts are through with it. In fact they suspended it three days before the law would've come into force and thus it never was. The current state of affairs is this: the retention was ruled incompatible with German and European law in an injunction and it does not look like that is about to change.

There's a similar picture in many EU countries: There's a law on the books, but it can't be enforced/is being challenged/was already invalidated/is being rewritten/repeat.

Also note that to the courts, location data and phone records are a different issue than retaining information that merely associates an IP address with the subscriber that used it at some time (knowing which subscriber has what phone number is not an issue either, after all). The latter was ruled unproblematic by the ECJ just this year, while for the former the latest ruling is what I outlined earlier.

Besides Germany, some other countries that had data retention laws that were ruled unconstitutional are: Belgium, Bulgaria, Czech Republic, Cyprus, Romania, Slovenia, Slovakia.

In many other places that currently do have mandatory retention in force, it is being challenged.

> Saying that it would be illegal to collect these records in Europe is patently false

It is illegal to mandate in such a manner. There's a difference.

> Billing. You need phone records for billing purposes. You need to keep them for a while longer because people will dispute their bills all the time.

You must've not read the part where I said "beyond what is necessary to operate". Telekom for instance is doing just fine deleting phone records after 80 days - or within 7 days if you use a flat-rate and they're not relevant to billing.


I should add that if it is not mandated, then it is illegal to do under GDPR and other privacy laws beyond what is necessary without obtaining explicit consent. Even if it were mandated, the telcos still could not do with the data as they please and forward it to another company like AT&T did.


> There's really no reason why any service providers should save this stuff

There are many reasons! Most of them are simply contrary to how folks think business should operate. Unfortunately the US seems to value "disruption" over "customer protection", so legally protecting data is unpopular on the hill.


I was under the impression that the government wasn't allowed to create a mandate that a telco has to save all phone records like that, but it doesn't stop a telco from doing it themselves. I think that would fall more under GDPR limitations?


I believe you are correct. That's what I was referring to with "illegal in many countries". Most judgements on this issue predate GDPR, but before GDPR many countries already had similar laws and attitudes. For example, articles 2* and 10 of the German constitution protect personal data and communication, not just from others but also from the government. Not unlike the GDPR.

Some service providers in Europe don't even want to save any data. The linked judgement above was the German state suing Telekom, which didn't want to save that data, and losing. Given the state of affairs, the question of "illegal or not" doesn't really come up as much. At least I'm not aware of any high profile judgements.

Besides Telekom, which has always tried to minimize the data it keeps, to the point of fighting it all the way to Europe's highest courts, most other telcos don't really care and pick whichever middle ground is available between "must" and "must not" - whatever is least likely to get them into trouble. Right now that just happens to mean "save little".

* It's not stated explicitly in article 2, but the German constitutional court decided that it follows from those personal rights: https://en.wikipedia.org/wiki/Informational_self-determinati...


Historically we handled this with fiber taps at AT&T, as well as other ISPs. Some of them even knew about it.


How could they not know about it?


Easy, we installed them between their sites, before they were lit up.


You live in a place where the government is for the people, not for themselves.


If it wasn't for the courts and a decent de-facto "constitution" (collection of treaties really), governments would absolutely love to expand the amount of data they (police, spy apparatus, etc.) have access to. That they also try to reduce the amount of data companies are allowed to save for themselves is tangential.

The court case I linked is evidence of that. The German state wanted Telekom to save more data, but the telco refused and won in court.


What the NSA wants, the NSA gets. No legislation is needed when the system is working as intended.


According to the article, the data was being made available to other businesses... From the detail level involved, I imagine the NSA has some sweeter deal with telcos... And they have much richer data.


The NSA buys all of the data available from data brokers. 4A? What 4A? With telcos they have the extra advantage of ordering them around with an NSL.


For those not deeply versed in US federal regulations: Part 4a of Title 15 of the Code of Federal Regulations (CFR), which covers the "Classification, Declassification, and Public Availability of National Security Information" for the National Security Agency (NSA).

<https://www.ecfr.gov/current/title-15/subtitle-A/part-4a?toc...>


Not entirely sure, but I thought they were talking about the 4th amendment, which also is relevant. It prevents the government from spying on Americans without a warrant. The NSA works around it so openly by buying the spy data from third parties, and saying the 4th Amendment doesn’t apply since they didn’t collect the data themselves, so it’s fine. It’s a giant middle finger to the Constitution of the US.

https://en.m.wikipedia.org/wiki/Fourth_Amendment_to_the_Unit...


Possibly. And on reflection, perhaps more plausibly.

In either regard, unambiguous comments are preferable to ambiguous ones.

The principal function of speech or writing is to accurately convey one's own state of mind to others.


New lines of business. Another way for them to sell your data. The NSA is quaint. The Valley knows everything about everyone already, and even has their current GPS coordinates.


The NSA shouldn’t need the telcos to retain these records, just hand them over to the NSA to retain right?


It's not so much the NSA as various other government agencies. The NSA is hoovering everything up, but if the local cops call them and want access to it, the NSA is going to tell them that they're not even authorized to know whether or not the NSA has that information. Also, something something due process something something American citizens.

Whereas if they can get the telcos to keep it then the cops can get it using the third party doctrine. This is basically an end run around the constitution, which is why they like it.


Which leads me to wonder - were any of the NSA's own employee, call and SMS records at AT&T part of the compromised data?

(edited for grammar)


Right, if phone records for Congressmen and known (or deduced) DOD personnel were made public, would that sway any changes?


It's a good business decision to make others do your work.


Government is not a business!


USA government sure looks like a business from several angles.


> What the NSA wants, the NSA gets

The NSA’s power is in being boring and unnoticed. This could be a revenue rider.


Every txt and phone call, every email and letter sent to your address along with every utility bill (list goes on) has been saved since at least 1999/2000 to present day. People like Bernie went to jail because they pushed back and it was all because of this....

Just saying.


who's Bernie?


... letter?


This is probably a reference to US postal or mail covers.

The USPS takes images of most or all postal mail as part of its delivery and postal sorting/routing processes. Those covers are retained for a limited period of time, and actually have, so far as I understand, significant privacy protections associated with them, of the sort notably absent in most electronic communications.

See:

Mail Cover (Wikipedia):

Mail cover is a law enforcement investigative technique in which the United States Postal Service, acting at the request of a law enforcement agency, records information from the outside of letters and parcels before they are delivered and then sends the information to the agency that requested it.[1] The Postal Service grants mail cover surveillance requests for about 30 days and may extend them for up to 120 days.

<https://en.wikipedia.org/wiki/Mail_cover>

MICT: Mail Isolation Control and Tracking (Wikipedia):

[A]n imaging system employed by the United States Postal Service (USPS) that takes photographs of the exterior of every piece of mail that is processed in the United States.[1] The Postmaster General has stated that the system is primarily used for mail sorting,[2] though it also enables the USPS to retroactively track mail correspondence at the request of law enforcement.[2] It was created in the aftermath of the 2001 anthrax attacks that killed five people.

<https://en.wikipedia.org/wiki/Mail_Isolation_Control_and_Tra...>

39 CFR § 233.3 - Mail covers. <https://www.law.cornell.edu/cfr/text/39/233.3>


You can sign up to have them email you a daily summary of your mail deliveries including the associated images they've logged under USPS Informed Delivery.


Right, more info here: <https://www.usps.com/manage/informed-delivery.htm>

(I was ... vaguely aware of this.)


Anything you receive via the post office: sender/receiver addresses are scanned. The post office uses OCR for sorting, and that information is captured.


Ah. The metadata. Inconsequential, then, to a degree.


"We Kill People Based on Metadata", ex-NSA chief General Michael Hayden:

<https://abcnews.go.com/blogs/headlines/2014/05/ex-nsa-chief-...>

As Bruce Schneier has noted, metadata equals surveillance, as it's actually far more amenable to analysis and inference than whole-text or audio capture. Though that latter may have shifted significantly with the rise of LLM AI techniques.

<https://www.schneier.com/blog/archives/2014/03/metadata_surv...>


Consumers are so numb to data breaches that these events now bring very little outrage. I think without that anger from the consumer, there's little incentive for companies to do more to stop data breaches from happening.


Well it's starting to feel like data privacy just doesn't exist anymore. I don't know why administrators for big customer databases even bother setting passwords these days.


My mother was concerned that some of her information, and mine, leaked because she signed up for another bank account from a place she decided she didn't trust. She said she wasn't worried about the money being stolen, but she was worried about our identities being stolen.

My concern was the complete opposite - I assume that my social security number and address are already for sale for a fraction of a cent somewhere, bundled with 10,000 other identities. But if money gets stolen, that's a whole rigamarole, with banks wringing their hands and saying "identity theft" as if that clears them from any responsibility.


Classic Mitchell and Webb skit[0]:

Bank: "No, you see it was your identity that they stole!"

Customer: "Well I don't know because I seem to have my identity whereas you seem to have lost several thousands of dollars. I'm not clear why you think it's my identity that was stolen rather than your money."

0: https://www.youtube.com/watch?v=CS9ptA3Ya9E


As a nobody, I keep wanting a financial product that is a black hole. Money can go in, but cannot come out without significant pain. Seven+ day waiting period, in person visit, physical mail verification, something, anything that means if I do get hacked my accounts are not drained in milliseconds.

When I need a legitimate large withdrawal, I can go through the required effort.
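To make the idea concrete, here's a toy model of such a "black hole" account in Python. The seven-day window, class name, and method names are all illustrative assumptions, not any real bank's product:

```python
from datetime import datetime, timedelta

# Withdrawals must be requested first, then wait out a cooling-off
# period before any funds actually move. Deposits are instant.
COOLING_OFF = timedelta(days=7)

class TimeLockedAccount:
    def __init__(self, balance):
        self.balance = balance
        self.pending = {}  # request_id -> (amount, requested_at)

    def deposit(self, amount):
        self.balance += amount  # money in is painless

    def request_withdrawal(self, request_id, amount, now):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.pending[request_id] = (amount, now)

    def execute_withdrawal(self, request_id, now):
        amount, requested_at = self.pending[request_id]
        if now - requested_at < COOLING_OFF:
            raise PermissionError("cooling-off period not over")
        del self.pending[request_id]
        self.balance -= amount
        return amount

acct = TimeLockedAccount(balance=10_000)
t0 = datetime(2024, 7, 1)
acct.request_withdrawal("r1", 2_500, now=t0)

# A thief trying to drain the account "in milliseconds" hits the lock:
try:
    acct.execute_withdrawal("r1", now=t0 + timedelta(minutes=1))
except PermissionError:
    print("blocked")  # prints "blocked"

# The legitimate owner waits out the delay:
print(acct.execute_withdrawal("r1", now=t0 + timedelta(days=8)))  # prints 2500
```

A real product would obviously need a way to cancel a pending request (and to alert the owner out-of-band when one is filed), but the core friction is just a timestamp check.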


You can have a financial manager control your accounts for you and just keep a small checking account (plus they'll help you grow your balances), but they're not free. Well, they're not free if you want them to be unbiased. Given, what's going to keep them from getting scammed? Maybe what you're looking for is several safe deposit boxes.


I still want my money invested into the economy. I just want Chase/Fidelity/etc to have an understanding that I am never going to withdraw money from these accounts without planning for it. So, “I” should never be authorized to drain the account at a moments notice without extensive approval. Anything to cause friction for would be scammers and only once-a-year (?) pain from me to triply confirm the money can move.


I don't have direct access to my long-term savings and retirement accounts; I have to go through my financial manager, who works in a small, local firm, and so would anyone trying to impersonate me. He would probably recognize my voice, and knows where I live, what's going on in my life, to whom I'm married, etc., because we have bi-annual check-in meetings. He'd definitely contact me through his existing contact info if there was anything weird going on with one of my requests, especially if it involved a different address or account than he's used to dealing with.

As anyone in that compliance-and-accuracy-focused line of work should be, he's very intent on making sure all of the Ts are crossed and Is are dotted. He charges a flat percentage of my modest retirement savings annually (I'm far behind most white collar workers my age, coming from a working-class early adulthood), so he has a financial interest in my investments, and does a really solid job managing them.

The accounts are in a large investment-focused bank which I believe only he can access. I think it's about as safe as you could get while still keeping your money active in the economy and not having a rich person's resources.


This already exists. Withdraw from account to physical cash. Proceed to stash cash in “secret” location.

Most businesses don’t even accept cash anymore. Can’t get “hacked” although it’s prone to many other issues — space, humidity, physical theft.


That sounds like the opposite of what OP wants, because that money can very easily come out, without any pain, and without you even being notified that it's been moved - unless you're re-implementing your own bank-level security, I guess.

For example, let's say you have $100k in savings. I think you would be absolutely bonkers to store that in some secret part of your (flammable! break-in-able!) house.

I guess you could put it in a safety deposit box, and if you needed to spend it in a non-cash way, you could walk it directly to the teller and deposit it and make it available? The equivalent of a cold wallet, I suppose.


> Most businesses don’t even accept cash anymore.

Really? I've been using cash almost exclusively for the past several months and haven't had any real problems. Sure, the overpriced hipster vegan Thai place in the McMall district may not take cash, but the family-owned ramen restaurant a couple miles down the road is more than happy to do so. Personally I find the "won't take cash" attribute to be a strong indicator that the business isn't worth supporting.


I've encountered nearly no businesses that don't accept cash, and I pay with cash all the time. The lower-income end of the working class makes up a huge percentage of our economy, and it's an extremely cash-centric demographic. But even then, I've got a friend who sells fine handmade jewelry, and some folks came in and bought like a 30k piece from her in cash because they owned a cash-only business. I can't imagine anyone existing outside of an ultra-gentrified corporate enclave who would encounter nearly any businesses that don't accept cash, let alone most. Maybe they just never see anyone use cash because they're not in a socioeconomic segment where it's still the standard?


If you have at least a fraud watch on your credit, which means creditors are supposed to call you on the number they have listed before they open new accounts, then the money is arguably worth protecting more. But if you think it's tough to convince the bank with which you have an existing relationship that you didn't make some withdrawals, imagine trying to convince a bank you've never heard of that you didn't actually approve a loan for 3 Cadillac Escalade Platinums which neither you nor the bank realize are currently in a shipping container on their way to Abu Dhabi.

(Nothing against Abu Dhabi— I just picked a random place not under US jurisdiction where plenty of people have Escalade Platinum money.)


I often choose Abu Dhabi as an "example destination", because that's where Garfield kept mailing Nermal in the comics.


After Equifax debacle, I don’t think anyone cares. It’ll only be a big deal if there’s a huge B2B leak and business-critical data gets exposed, other than the usual name, address and phone number.


I'm still upset the government hasn't started work on a new national ID program after the Equifax breach. The SSN is not a suitable ID number in this day and age. We need something better that can withstand these kind of things without screwing people for life. My credit will be frozen for the rest of my life, and everyone else should do the same.


This is it for me tbh. Yeah I don't want my identity stolen and I'm still careful but after Equifax I just assume everyone already has my data so all of these data breaches are meaningless to me at this point. It sucks and it makes me mad but all I can do is shake my fist and wish these companies would be better anyway, so what else can I do but just be ok with it?


It's not that simple. This time, phone records and location data are stolen. These are more sensitive than the stolen data from typical data breaches.


AT&T is a public company. Public company needs to get fined appropriately.

Start issuing multi billion dollar fines for these breaches and suddenly companies are invested in security.

Unfortunately, with government agencies getting defanged by a recent SCOTUS ruling, it's likely not possible.

Have to rely on civil court to issue fines now (ie, class action lawsuits).


I think many companies think they can solve this issue by throwing money at their cyber security teams. It just happens that cyber security teams are often ineffective.


It's hard for a CyberSecurity team to be effective when the Execs keep failing the phishing tests and IT does not have the authority to fire them for it.


I've seen this so many times. I've seen instances where the execs/managers demanded it was turned off for them, and it was. 75% of the security I've seen at companies is pure theater so they can check the boxes for their insurance.


Good security researchers easily command a $500,000 compensation package per year (cost to companies higher due to benefits like health insurance). When you show the market comp of good cyber security researchers to execs, suddenly they decide that they only have the budget to hire incompetent people.

Good cyber security people are expensive because they are highly skilled: they typically need to have been a software engineer to understand software architectures and have intuition about them, have spent significant time sharpening their skills at hacking by participating in CTFs, and have probably also spent significant time doing reverse engineering and have a few CVEs attributed to them. (Why are these skills needed? Because they are the skills needed by the red team. Every company that takes cyber security seriously will have a red team.) Now tell me whether these people are worth $500,000 per year.


Maybe this is how it is at some places, but in my experience, it is not the case. I have friends who have worked in cyber-security for Fortune 500 companies and almost all of those companies would short-change (or outright ignore) the recommended spend and suggestions of their cyber-security employees, contractors, and advisors.

Where are you getting your information from? The levels of security negligence I hear about aren't even a big ask. Huge companies neglect to do basic things like "don't store your passwords in plain text" or "make sure you salt and hash your passwords".

I don't think it's fair to say cyber security teams are failing if companies are blatantly doing the worst and most obviously wrong things on the daily at the highest levels.


How could they? Everything related to computers is designed to exfiltrate data nowadays.


And why didn't they do anything when we WERE angry?


I cancelled my AT&T account over 10 years ago, yet they still stored my (old) address, full name, and SSN in the previous hack in March.

The fact we don't have decent legislation to materially punish incompetent organizations is beyond absurd.


And earlier this year my SSN was on the dark web due to their leak (or a vendor's). One year of monitoring? No, I'm going to need it for life.

Security is not a concern. There is no real incentive to change the status quo. Make them pay for monitoring indefinitely.


I never understood the american secrecy about SSN... it should be a "username" not a "password"...

In my country you can calculate someone's national ID (a mix of date of birth, an auto-incrementing number for each birth that day, plus 1 checksum digit), and if you do/have any kind of personal business, your personal tax number has to be written everywhere, on every receipt you hand out or anything you buy as a business.

Somehow knowing that the first boy born today will have an ID number of 120702450001X (too lazy to calculate the checksum, but the algorithm is public) doesn't help anyone with anything bad.


It's because it happened gradually / naturally / semi-unintentionally, because:

1) SSN was not intended as a national ID, but it so happened to fit the shape of one, in that almost everyone has one and they're unique.

2) It has never been possible to institute an intentional national ID system in the US for political reasons

That is the recipe for the problem we have now. Strong demand for a national ID from many business purposes, the existence of something that looks a lot like, but is an imperfect form of, national ID, and the refusal to create a proper national ID, has naturally led to a de facto system of abusing the SSN as a national ID and just kind of everyone being a little annoyed and sketched out about it but putting up with it anyway for lack of alternatives.

Incidentally, did you know anyone can generate a valid new EIN (which is a lot like an SSN, and can be used where an SSN can be used for some but not all purposes, specifically filing taxes) at this page https://www.irs.gov/businesses/small-businesses-self-employe... ? This isn't legal advice and I'm not a lawyer and I don't know in what situations you personally would be legally permitted to use this (it's meant for businesses, absolutely not some kind of personal alias) -- but technologically, it's just the honor system, and anyone can certify they need and are entitled to a new EIN and the IRS web site will provide you with a new unique one. I don't think you even need a legal entity, since you don't need a legal entity to run a business in the US.


Also NAL, but watch out for how this is reported to states. California is currently $800/year min, even if the entity has no activity.


> Somehow knowing that first boy born today will have an ID number of 120702450001X

It's even worse. Only post-2011 (IIRC) births have a randomized SSN. So everyone over the age of 13 still has an old-fashioned sequential SSN, where XXX-YY-ZZZZ is determined by

1) XXX is the code for the office that issues your card. Can be guessed precisely and accurately by knowing birth location. For example, I can guess what region of the US you were born in (or lived in when you immigrated) by the first digit. 0 or 1 is probably northeast. 4 or 5 is probably near Texas. 7 might be near Arkansas. Etc.

2) YY-ZZZZ is sequential by date! So by knowing just birth day, can be guessed to within a range. In practice, this means it's easy to guess YY alone, but harder to get all 4 digits of ZZZZ

3) For some stupid reason it got popular to print SSNs with all but the last four digits masked. This is horribly bad because those four are ACTUALLY THE MOST SECRET PART! It's the only part that might not be guessable. But since it's common to be more lax with securing them... it is super easy to recover the full SSN if you find a piece of paper that says something like

JOHN SMITH

123 Main St

Alabama City, AL 76543

In ref acct: XXX-XX-1234 (2001-03-14)

Dear Mr Smith,

Your account is overdrawn. Have a nice day.

Thinking of you,

The Bank

It also means if someone is personally known to me, even vaguely, I may be able to reconstruct their social seeing nothing but a scrap of paper that has just the last four, if I can guess approximately where and when they were born or first entered the US. If I'm in a situation where I can try several guesses, it's even easier.
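To put a number on how small that search space gets, here's a toy sketch. The area codes and group-number window below are made-up assumptions for illustration, not real SSA allocation data:

```python
def candidate_ssns(area_codes, group_range, last4):
    """Enumerate full pre-2011-style SSNs consistent with a set of
    plausible area codes (from birth location), a guessed group-number
    window (from birth date), and a leaked last-4 serial."""
    lo, hi = group_range
    return [
        f"{area:03d}-{group:02d}-{last4:04d}"
        for area in area_codes
        for group in range(lo, hi + 1)
    ]

# Suppose the victim's birth state used ~5 area codes, and their birth
# date narrows the group number to a window of ~10 values:
cands = candidate_ssns(area_codes=[416, 417, 418, 419, 420],
                       group_range=(30, 39), last4=1234)
print(len(cands))  # 5 areas x 10 groups = 50 guesses
```

Fifty guesses is nothing against any system that allows retries, which is the commenter's point: with the last four known, the "masked" part barely protects anything.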


> 1) XXX is the code for the office that issues your card. Can be guessed precisely and accurately by knowing birth location.

While the first sentence is true, the second is only true if you were born after the mid-1980s, when a Reagan-era tax reform was enacted. (It required a SSN when claiming dependents.) Prior to that, most people did not get a SSN until they got a job.


I looked this up and while your first sentence is true, the second (non-parenthetical) sentence is only true if you did not require any of the other services that required a SSN. There's a list of those under "Exhibit 2" (about 2/3 of the way down the page) on the SSA's website:

https://www.ssa.gov/policy/docs/ssb/v69n2/v69n2p55.html

tl;dr: If you had a bank account, applied for a federal benefit, were on food stamps, applied for school lunch, or did any number of other financial or government transactions, you needed a SSN starting in the 1970s. That's enough of an incentive that many parents might've just applied at birth, figuring that their kid will eventually need it. Also everyone born 1968-1981 would've likely gotten one in 1986, when the change you mentioned about dependents was enacted, and then after 1988 they started being required for issuance of a birth certificate.


I stand corrected. Thanks. I didn't bother to look it up, since I'm old and got mine when I started working. Although people born 1968-1981 were getting SSNs where they currently lived, which is not necessarily where they were born; which was the original point.


When I was in school (almost 20 years ago) this came up because someone mentioned the first 6 digits of their SSN and they matched mine. Since then it's similarly bothered the hell out of me that the practice is to mask all but the last 4 of the SSN and that a lot of places require you to enter your last 4 of your SSN.

I didn't know the reasons for the matches but them being my age and likely born in the same place as me made me realize those were identifiers and the last 4 were the unique bit.


A lot of financial things in the US are “secured” or anchored by SSN, that’s the only reason why. That and mother’s maiden name and first vacation and other security questions. It’d be less important with MFA now but SSN is also needed when opening new credit, so having it allows you to pretty easily fake someone else’s identity for credit. KYC hasn’t removed it from the equation.


One mitigation is to make your mother's maiden name the output of:

    head -c 20 /dev/random | base64
And keep track of the result in your favorite password manager.

Fortunately, fewer and fewer orgs are using security questions, but there are still some important ones that only use that and no MFA.


The problem with that plan is social engineering attacks. CSRs are often careless and will accept 'a bunch of random letters and numbers' as the answer rather than validating each character.

Better to randomly select a long dictionary word or hypenate a few together. Equally unguessable but easily verified, so it won't be weakened during a phone conversation.
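A quick sketch of that suggestion: hyphenate a few randomly chosen dictionary words so a phone rep can verify the answer word-by-word. The tiny word list here is a stand-in; a real one (e.g. a diceware list) would have thousands of entries:

```python
import secrets

# Stand-in word list; entropy scales with log2(len(WORDS)) per word,
# so a real list of ~7,776 words gives ~12.9 bits per word.
WORDS = ["correct", "horse", "battery", "staple", "orbit", "walnut",
         "canyon", "velvet", "ember", "tundra"]

def fake_maiden_name(n_words=3):
    """Generate a speakable, verifiable 'security question' answer."""
    return "-".join(secrets.choice(WORDS) for _ in range(n_words))

answer = fake_maiden_name()
print(answer)  # e.g. "walnut-ember-canyon"
```

Using `secrets` rather than `random` matters here, since the whole point is that the answer be unguessable.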


"Mother's maiden name" won't work for my kids - my wife kept her name and the kids' last name is hyphenated, so you just have to guess whose name we put first.


It's also probably increasingly easy to look up.

We need a national (preferably RFID-ish) password system.


This comment pops up every time someone talks about social security numbers. Yes, they were never supposed to be private, but now they are. So either Congress can do something about it, or big companies can stop leaking them. Clever "well, actually"s didn't stop my identity from being stolen recently after a breach, and they never will.


They're not really private+, and nobody should design a system with the assumption that they are. afaik nobody does these days. There are extra authentication checks done in addition to simply "I have the SSN".

+ e.g. until very recently there were US states that used your SSN as your driver license number.


> I never understood the american secrecy about SSN... it should be a "username" not a "password"...

The problem is banks/financial services do a piss-poor job validating identity when issuing credit/opening accounts. "Oh, you provided an address, a SSN, and [non-random, easily discoverable personal fact]! Sure, here's a CC with a $150k limit!"

It's not the leak that's the problem; it's the ease with which that leaked data is used to either obtain fraudulent credit or access accounts.

I don't have a good answer, because at some point, a financial institution needs to trust people to do business. Customer loses their phone, so MFA doesn't work, ok, now what? I guess the customer needs to have one-time use recovery tokens saved somewhere that can't be lost? How many people do that (not nearly enough)? How many banks even issue those tokens? And what if the token store gets hacked? Now you're really fucked.


> Customer loses their phone, so MFA doesn't work, ok, now what? I guess the customer needs to have one-time use recovery tokens saved somewhere that can't be lost? How many people do that (not nearly enough)? How many banks even issue those tokens? And what if the token store gets hacked? Now you're really fucked.

In my experience with banking in Brazil and Sweden this is easily solved with a OTP device you get from your bank.

Brazilian banks before that used to provide a card of 50-100 tokens you'd use for authenticating, which is obviously dangerous as people would carry them in their wallets with their cards (and associated banking details). Since the early 2010s banks have instead provided a physical OTP generator that you associate with your account.

In Sweden if I lose access to my phone with my digital identification app (BankID) I can fall back to my hardware OTP generator to login into my account, and authorise a new BankID installation in case I need a new phone.

It's a solved problem, even though the US developed a lot of the tech industry it feels like digital infrastructure is still in the late 90s for a lot of stuff; banking is a clear case, and government systems are another good example, e.g.: the DHS website for visa application is atrocious, we are in 2024 and applying for a visa feels like an experience from when I navigated the web on Netscape in the early 2000s.
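Those hardware tokens typically implement the HOTP/TOTP family of algorithms (RFC 4226 / RFC 6238). A minimal sketch, using the RFC's published test secret:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 counter-based OTP with dynamic truncation."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, at=None, step: int = 30) -> str:
    """RFC 6238: HOTP keyed by the current 30-second time window."""
    t = int(time.time() if at is None else at)
    return hotp(secret, t // step)

# RFC test secret "12345678901234567890"; at t=59 the code is 287082.
print(totp(b"12345678901234567890", at=59))  # prints "287082"
```

The device and the bank share the secret at enrollment; after that, no network connection is needed on the token side, which is why these little keyfobs can run for years on a coin cell.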


Totally agree. It feels like our banking is a decade behind. Take transferring money: there's no direct way to do it between banks, so most people use Venmo. Some banks are part of Zelle, but I've heard it has fraud issues (weak discovery/confirmation of the correct recipient) and the banks won't refund many fraudulent transfers ("You initiated the transfer! Not our problem you sent to the wrong person!").

So, do you get a physical OTP generator for every financial institution? I guess that works, but that would mean I’d have a drawer full (2x bank, 1x work, current 401k, past IRA, and a brokerage account - x2 because my wife has about the same).

I was thrilled last year when I discovered I could renew my passport online! In 2023! That should have been available eons ago.


Drawer? I would say a bank lockbox, but that seems like a chicken-and-egg problem. It's not entirely solved, sounds like.


SSN is too public for it to be private or secret. Multiple employers, schools, medical institutions, financial institutions all ask for it, so it's not private.

It's also treated as evidence of who you are, but it isn't tied to identification like an ID is. These institutions use it without ever truly validating it.

It's similar to how records fraud can occur - people can record anything with the local registrar's office, including fraudulent documents, without any checks. Once it's registered, it becomes evidence against the real owner. It's really messed up.


Even the US gov't gave up on the notion the SSN was not to be used as an identifier. My dad's SS card had a phrase printed on it saying so. My SS card did not have that text.


My SS card has that text. I got into an argument at the DMV when they asked for it. I relented because I needed my drivers license.

Congress could solve this by enacting a simple law. Something to the effect of SSNs shall not be used as a means of identification by any party, governmental or otherwise other than the Social Security Administration. Use of an SSN as identification shall be subject to a $100 fine per each SSN used as identification, per day.


SSN might be the least of the problems in some cases in terms of the info leaked...

What about people who have called suicide helplines, abortion clinics, loan servicing, etc...

With the numbers available, that will be possible to find out...


When I went to college in the late 80s my ssn was automatically used as my student id. When I got my first bank account in 1990, they used my ssn as the account number.


Our class grades with names and SSNs were posted on the wall after exams in a list of hundreds of students.

Go Jackets.


Ah it was a different time. Societal trust was greater. Without global internetification, the only people who could ever have any opportunity to exploit this information were your fellow campus denizens (students, professors, etc).

Without global internetification, there was not as much an average person could really do or would know to do with an SSN alone to exploit it.

This story is a good parable for so much of what has changed in the world the last couple decades -- we had a world built for less globalization, then we globalized, and we've been gradually adapting to / dealing with the unintended consequences since then.

A real life door can only be picked by your neighbors or anyone else nearby -- attack surface is limited by the nature of physical distance.

A virtual door can be picked at by 7 billion people.


I wonder if the schools actually verified the SSN.

Would have been dank to see 666-66-6666 next to your name


My first big employer in the aughts had my SSN encoded in a bar code on the back of my company ID, which they expected us to display at the office.


It's okay, it will no longer be a problem after the Social Security Administration itself fails in the next decade for being unsustainable.


Why would that happen?

(Payouts are expected to drop in about ten years if no action is taken, but that doesn’t render the SSA irrelevant or cause it to suddenly collapse and shut down, so I assume you mean something else)


The TechCrunch article indicates cell site identifiers were included, which means approximate location as well.

https://techcrunch.com/2024/07/12/att-phone-records-stolen-d...


So where/what is my compensation? (I know there is no recourse).

When no one is on the hook for secure practices, like enabling MFA on your effin data stores that contain massive amounts of customer PII, this is the result. Not even an apology, just report it and move on. woops! those gosh darned cyber criminals.


If you go to court and ask for compensation you would likely be asked to show harm. Could you?


It really doesn’t matter. Compensation has been dispensed to customers in data breaches such as credit/ssn info, no harm proof needed. Potential for harm is enough. Breach of contract, as a customer do I have a reasonable expectation that this data is not exposed? of course I do. No one could very seriously argue it’s a zero sum.


Is there no harm, or is there harm that is hard to show in court?


A bit of both.

Most people aren't going to have their identity stolen (or insert w/e crime). Those that do will have trouble proving it was from this leak.


I've received checks over the years for various things like this. You end up having to fill out a claim form and then wait about 5 years and one day, you get this check in the mail for some tiny amount of money.


The real problem is that data needs to be deleted over time. There is not much of a use case for customers to go back a year and see who called them, and obviously there are use cases like criminal investigations or spying. But customers have no power or ability to dictate how long their records are stored and how they are used. Companies should provide tools and features to their customers, empowering them with their data.
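The retention policy argued for here is mechanically trivial; the hard part is the will to run it. A sketch (field names and the 90-day window are made up):

```python
from datetime import datetime, timedelta

def purge_expired(records, now, retention_days):
    """Keep only call records newer than the customer-chosen window."""
    cutoff = now - timedelta(days=retention_days)
    return [r for r in records if r["timestamp"] >= cutoff]

records = [
    {"caller": "555-0101", "timestamp": datetime(2023, 1, 5)},   # stale
    {"caller": "555-0199", "timestamp": datetime(2024, 6, 20)},  # recent
]
kept = purge_expired(records, now=datetime(2024, 7, 12), retention_days=90)
print(len(kept))  # the 2023 record is gone; prints 1
```

Data that has been deleted on schedule can't be stolen in a breach like this one, which is the whole argument.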


Non-murder criminal offenses typically have very short statutes of limitations.

A lot of this could also be solved by encouraging the federal government to enforce federal privacy law as written more aggressively. A good incentive would be to amend the privacy statutes to permit the FTC to keep the funds extracted from settlements and penalties in-house. This would allow them to increase staffing and create a positive feedback loop to deter wrongdoing. This would have a negative effect on incumbent companies and practices, but it would not take long for the message to get across and for practices to change accordingly.

Congress tends to prefer keeping agencies on its own budgetary string which paradoxically limits what the agencies are capable of doing. The laws that we think protect us do not protect us because many of them are within the exclusive jurisdiction of a federal agency with very limited powers and funds. In the US the leadership likes to create the illusion that it has made "Bad Problem" illegal by writing it into the law, but it does not like creating the conditions in which "Bad Problem" could be solved, whether it's because the tradeoffs involved are tough to contemplate or because keeping "Bad Problem" around as a visible enemy is clever politics.


> Non-murder criminal offenses typically have very short statutes of limitations.

There's a hidden assumption here. The expectation is that data retention and the potential privacy violations are a necessary evil because anyone may later be under investigation for a crime. The data could simply go uncollected; it isn't AT&T's job to retain private information on all of us just in case an investigator wants it.

Take telecoms out of it and consider a convenience store. Police would like to have video recordings of whatever moment in time they are investigating, but that doesn't mean the video has to be recorded and retained. A shop owner can choose to record videos and only retain them for a week if they want, or they can have cameras installed but not even recording if they're okay with just the effect of deterrence.


Many civil claims have short statutes of limitation as well. It's not really that good for these companies to maintain regular business records going back to infinity that are subject to discovery in disputes that are not even related to anything the telecom company did. Complying with the discovery requests and subpoenas is expensive. The fetish for the somewhat imagined benefits of big data creates open-ended liabilities for these companies. But the pressure that law enforcement and the spy agencies put on the telecom companies to facilitate this has been an open secret for a long time now.

A lot of this is on the federal government and Congress for leaving an area in which it has power dormant and within its relatively exclusive control. Thanks for the conversation.


That's another bandaid. The root cause is customer data collection mandated by outdated regulation. People should be able to digitally sign or provide a public key for their personal information without providing the raw text to 3rd parties. Various 1970's style government tax and regulatory rules need to be updated as well.


They have a financial incentive to never delete your data. Storing old data forever creates a perfect paper trail to sell to advertisers and perfect the shadow profile they keep on all of us.

I agree that deleting all your data after a year makes sense practically, but they'll never do it because it makes them too much money to keep it around.


This isn't data for serving user needs, this is data for spying on users


This is the kind of breach that really should be company-ending, but will sadly instead likely result in a slap on the wrist.

It is high time for the US to have a privacy law with real teeth, and to enforce it with vigour.


Or maybe it's time to turn software engineering into an actual engineering profession. If the people responsible for designing and maintaining the AT&T system were "real" engineers, they could be sued for malpractice or even lose their license to practice.


The root cause is not whether engineers are licensed (I'm fine with that idea, but it's not going to resolve this specific problem). Instead, it is a culture of not caring about security because fines are just a cost of doing business, a culture that comes from management, and of treating personal information as an asset instead of a liability.

A Sarbanes-Oxley style law that makes the CEO personally criminally responsible for breaches will be vastly more effective than pursuing individual engineers - many of whom will be on the types of visa where they have no effective route of pushback on orders anyway.


When a doctor is negligent, their employer is often also sued if it can be shown that it knew shenanigans were underway and did nothing.

We shouldn't choose between holding engineers or executives responsible. Each should be held responsible for their part.


Indeed - but we should start at the place likely to actually make a difference: the executives.


Snowflake still works though. What civil engineer has been sued because somebody jumped off their bridge? You get sued when the bridge collapses not when somebody uses it for an unintended action.


Do you really think that requiring 4-year degrees and passing a licensing exam would make a big difference? The fact is that, outside of civil engineering which involves a lot of dealing with regulatory agencies, most engineers in the US don't have PEs. I started on the path to get one because, had I stayed on my initial career path, I'd have been sending blueprints etc. to regulatory agencies but I ended up changing careers.


No, what will make the difference is being personally liable for the vulnerabilities you introduce.

Not the company. You.


How many individual engineers do you suppose get prosecuted for making errors--even careless ones? I'm guessing very few in the West. And I'm not even sure lopping off a head here and there to encourage the others is even a good idea.


> How many individual engineers do you suppose get prosecuted for making errors--even careless ones?

Not many but is that because they don't get sued or because professionals who face consequences for negligence make fewer stupid decisions?


I would assume that engineers, at least in the US, are far more concerned about getting fired/eased out than prosecuted if they do stupid things given that companies can do so pretty easily.


Would you say the same is true for a lawyer? Are they more worried about being fired from a law firm than being sued for malpractice and being disbarred? If not, why would engineers be different?


I would assume that being disbarred has a pretty high standard of misconduct as opposed to simply not making partner or whatever level of action makes maintaining employment at a large law firm practical.


Look at Sarbanes-Oxley for precedent. Management has to be made liable for sufficient cultural shift to occur.


A class-action suit sounds reasonable, but sadly those never put penalties in the right ballpark. Here it should be hundreds to thousands of dollars at least per affected customer.

But my guess is a few tens of cents, if that... while the lawyers get a nice couple-million-dollar payday...


This happened in 2022 and they're just disclosing it now? Or did they just find out about it, which is maybe even worse?


The authorities requested the delay of the disclosure: https://cbs58.com/news/nearly-all-at-t-cell-customers-call-a...


The data was from 2022. The breach was from April of this year.


Who was the data being kept for?



ATT did not answer this question. I would expect them to keep phone records going back a ways, but 2022 seems pretty far. I'd guess for law enforcement.


I think there is a requirement to keep them 18 months. Any reasons to keep them in bulk for longer than that are probably bad.


How has Snowflake felt ANY recourse for being the source of all of these hacks?


The Mandiant report said that some Snowflake customers declined to use MFA AND had the same passwords in place for 4+ years[1]. Maybe Snowflake should have pushed for MFA harder, but at the end of the day, this is AT&T's fault.

[1] https://cloud.google.com/blog/topics/threat-intelligence/unc...


I'd say the blame lies halfway between AT&T and Snowflake. If you let your customers have poor security practices, and you have the power to ensure a heightened security level, you're also partly to blame...


Snowflake also made it hard to have good practices, giving them further culpability. There was no setting for customers to force their entire tenant to enforce MFA. Customers had to depend on each person with access to do the right thing, something that is unlikely to be universally true.


Non-expiring passwords are probably no more or less secure, unless you are a rampantly terrible employer known for setting ablaze every bridge ever to the point of atomic annihilation.


Are you suggesting a disgruntled former employee could use the password and do things? At that point, I have questions. How is the former employee accessing the cloud service? If your cloud is allowing public access without a VPN, then you've done something wrong there. If the former employee is still accessing your VPN, again, you've done something wrong. Many other things still come to mind but point back to you well before password rotation rules.


Yeah. I agree. We have a strong offboarding process as well. But other employers? I mean. I’ve seen some shit in my day.


It's not Snowflake's fault their customers used weak passwords and no MFA. Not enforcing MFA does merit some blame on Snowflake; however, I still think it's on the customer to secure your own environment.


I feel like this would be true if ONE customer was hacked. At this point it's more than a handful. AND snowflake knew about it.

If all the lockboxes in a bank get broken into, is it reasonable to say "ah, all of the customers should have used better locks"? The bank is the party who is supposed to be giving the insight into secure storage. They're not just renting space.


Totally, way too many people are trying to blame snowflake.

ATT is a technology infrastructure company. Secure transmission of data is one of their core business competencies (theoretically). They are a corporation that we trust to handle incredibly sensitive info. Call records are, in fact, incredibly sensitive data.

They should be telling Snowflake what best practices to be using, not the other way around!


> Totally, way too many people are trying to blame snowflake.

Well the _actual_ compromise started from one of their employees, so it's pretty unsurprising that they're getting (some of) the blame.


Ahh. The linked article didn't have that detail.

They attributed it to a lack of 2FA


AT&T and phone carriers in general are not technology companies. They are infrastructure companies that purchase off-the-shelf communication technology, slap a billing system on top, and then spend most of their time on operations (finding places to put towers, keeping the gear up and running) and marketing. The security component of communications isn't built by them, but by the equipment manufacturers that they purchase from. There are no strong penalties for involuntary data leaks - why would they do more?


ATT has a rich history of being a technology company. They invented UNIX! That's in the past, fair enough.

So they used to develop cutting edge technology, they sell technology, they buy technology, they operate technology, they work with manufacturers to develop new technology, they operate the infrastructure underpinning the modern technology economy, but they aren't a technology company?

Even if you want to argue that they aren't a technology company, they sure spend enough time doing everything a technology company does to hold them accountable for their technology failures.


> They invented UNIX!

They also invented the transistor, C, the photovoltaic cell, radio astronomy, and … the telephone. ;)

Yes that’s the past, but AT&T labs still employs almost two thousand people. It’s very funny to try to claim AT&T isn’t a technology company and only peddles services on top of equipment made by others.


The company called AT&T now and the company called AT&T that invented Unix have really nothing in common but a thin stretch of history by now. The technology development units of AT&T were split off into Lucent a long time ago.

Calling AT&T a tech company because they operate technological infrastructure is like calling Spirit Airlines an aerospace technology company because they operate jet airplanes.


> The security component of communications isn’t built by them

Are you claiming AT&T outsourced security and has contracts to back that up? Buying security equipment surely doesn’t amount to having security; that would be hilariously naïve. Equipment manufacturers are not responsible for AT&T’s data security, AT&T is. There are laws around security that can hold AT&T liable, in the US and Europe and elsewhere. Whether they will hold the company liable is another question, but these laws will not accept the excuse that AT&T purchased security equipment from another company.


I claim that these companies do not have a particularly high amount of in-house infosec know-how and outsource a lot of it, not necessarily just in terms of buying equipment, but also the service component of how to set up business practices in a secure way. It doesn't absolve them of their failures but I'm no less surprised in AT&T failing to protect data than I would be McDonald's.


It’s unclear what you’re arguing. That AT&T isn’t capable of securing customer data, and we shouldn’t expect that of them? That they shouldn’t be held liable?

If they don’t have the core competency, they need to obtain it as a requirement of doing business.


AT&T is a real-estate company that coincidentally sells telecommunications services. My wife used to work for them and given what she's told me I would never in a million years do any business with them intentionally.


Snowflake is saying they knew of unusual activity "around mid-April 2024", confirmed it "May 23, 2024", around which time they made MFA mandatory (although their customer AT&T says it knew of the breach "Mar 20"; these timelines keep shifting back):

"Mandatory MFA option unveiled by Snowflake" - Jul 11, 2024 https://www.scmagazine.com/brief/mandatory-mfa-option-unveil...

> "US cloud storage firm Snowflake has already required the implementation of multi-factor authentication across all user accounts a month following the widespread breach of customer accounts, including those of Ticketmaster and Santander Bank, reports The Register."


It's not mandatory, I still have Snowflake user accounts that don't use MFA.


"Mandatory MFA option unveiled by Snowflake" sounds like they made it an option for an organization to decide to make MFA mandatory within that organization. But that conflicts with TheRegister headline - Snowflake's PR machine seems to be in overdrive.


It's industry standard to enforce MFA for customers of such sensitive data though. There's always going to be weak links.


Right. Snowflake facilitated AT&T's abject negligence, but ultimately the buck stops with AT&T here.


> Snowflake blamed the data thefts on its customers for not using multi-factor authentication to secure their Snowflake accounts


The dark web and info-stealing malware are the source of the hacks.

My worry is not only that consumers get numb to breaches, but that they consume rampant misinformation and have no idea how to hold the appropriate parties accountable.

How many times have you held AWS accountable for stolen access keys?

Was it AWS's fault when Rabbit leaked their own keys?

Is it Snowflake's fault when you lose your creds to info-stealing malware?

How should Snowflake enforce MFA on machine service account credentials?

The answers are no, no, and they cannot possibly. Not even hyperscalers have this magic.


Eh, IIRC the source of the hack was just regular stealers like RedLine, not "the dark web".

It was actually Snowflake's fault.

The threat actors were able to find a test/demo account they could log into, and from there they were able to access prod things they shouldn't have.


This is exactly the kind of comment I'm talking about. You have not read anything from Snowflake, Mandiant, or CrowdStrike on this, and you haven't even read the CNN article that has Snowflake's response. The Snowflake demo account has nothing to do with it.


No, it's what happened 100%. Funnily enough, it's YOU who hasn't read anything.

https://cloud.google.com/blog/topics/threat-intelligence/unc...

"In April 2024, Mandiant received threat intelligence on database records that were subsequently determined to have originated from a victim’s Snowflake instance. Mandiant notified the victim, who then engaged Mandiant to investigate suspected data theft involving their Snowflake instance. During this investigation, Mandiant determined that the organization’s Snowflake instance had been compromised by a threat actor using credentials previously stolen via infostealer malware. The threat actor used these stolen credentials to access the customer’s Snowflake instance and ultimately exfiltrate valuable data. At the time of the compromise, the account did not have multi-factor authentication (MFA) enabled."

https://www.symmetry-systems.com/blog/what-we-know-so-far-ab...

"Snowflake has confirmed that a threat actor obtained credentials of a single former employee and accessed demo accounts they had access to. Snowflake asserts these accounts contained no “sensitive” data and were isolated from production and corporate systems. However, unlike Snowflake’s core systems, which are protected by Okta and Multi-Factor Authentication (MFA), these dormant demo accounts lacked such safeguards. "


Key point of the article:

"Snowflake allows its corporate customers, like tech companies and telcos, to analyze huge amounts of customer data in the cloud. It’s not clear for what reason AT&T was storing customer data in Snowflake, and the spokesperson would not say."

Finally journalists are asking the question of why customer data must be stored with third-party cloud providers. AT&T is a long way from Bell Labs; shame on them.


All companies use third-party cloud providers. A lot of legacy companies have been shutting down data centers to move to the cloud. So there isn't a question of why your data is in the cloud. It's going to be in the cloud.


And honestly, I think I'd rather trust cloud providers with the data than the remnants of a decimated IT team in a large enterprise that's struggling to maintain their own on-prem infrastructure that's super old and probably not up to date on patches.


The problem is then you have even fewer technically-competent people internally to actually manage the cloud, and combined with AWS's many documented footguns it's not clear to me the "new normal" is actually any better for security.

You go from being a potentially-small-fry target to getting your data collated in massive breaches. There's risks to both.


That’s the thing though - this was a snowflake breach. It’s not an AT&T miss because of their decimated sw engineering teams. Snowflake has much better sw engineering than AT&T.


> this was a snowflake breach

AT&T was not using MFA, while it was possible. Someone leaked credentials and this is the result. Only thing Snowflake could have done was to force MFA for everyone.


They added a feature recently to make it easy to force MFA.


This data will be a gold mine for scammers. When they know the relationships and real names of people, they can target them and craft specific attacks for different people. And with what LLMs are now capable of, mass social engineering is possible.


The root cause (1) is the data store should not have been available on the underlay network. Anything connected to an underlay network is a ticking time bomb.

Any servers or admins which need to talk to the data store should instead use a private overlay (2) network.

Any users (likely just remote admins) should do the same.

(1) Same root cause as 99% of breaches and yet it is too often swept under the rug while we focus on the infinite # of proximate causes

(2) Software, not private circuits.


It seems from the article that AT&T uploaded data to a cloud service, protected by username and password, and someone obtained credentials or breached the cloud service.

What does that have to do with 'underlay networks', and how is that "the root cause of 99% of breaches"?


I doubt they "breached the cloud service" provider. They almost certainly exploited the lack of 2FA controls on the client's access via the client's network, which is what GP was saying. If you're on a business's network, it's too easy to get at their cloud storage or DBs when they should instead be on a secure overlay network.


OP is using weird terminology. It would probably be clearer to say "Anything connected to the Internet is a ticking time bomb. Any servers or admins which need to talk to the database should instead use a VPN." which indeed was best practice until recently.


> indeed was best practice until recently

But we should remember why it's no longer always considered best practice... you shouldn't assume that your private network is any more secure than the public network. When you have too many devices attached to that private (overlay?) network, it can be at just as much risk as if it were on the public internet. So, the zero-trust model is that you don't trust anything... public... private... it should all be untrusted.

Given that this was a "third-party cloud provider", I'm assuming that it was a credential leak and they only have username/password protections. Moreover, I doubt you'd have been able to add the provider's DB to an ATT based private VPN/network.


yep, was trying to avoid words which carry varying connotations, e.g. VPN or zero trust.

zero implicit trust is likely the best term? you have to trust something, but enforce (and therefore trust) strong (not network based) identity, authN and authZ. this can be done anywhere via a software-only overlay.

a litmus test is server iptables (to use an example) looks like:

    iptables -P INPUT DROP
    iptables -P FORWARD DROP

and the only route outbound from the server is to the private overlay on one port, and that server still can't make those connections unless it is strongly identified and authenticated, and the overlay will not connect the client and server unless they are both authorized to communicate for that particular service(1)

(1)so for example if there is a zero day causing the 'server' to try to communicate with some_IP then the private overlay will not accept the connection, even though it is coming from the server


For highly secured services, I completely see the rationale for a private overlaid network. Tailscale et al. are great for this, where you're only exposing services to members of the private network. The problems start when people assume that the private network is a secured network.

I don't think any of this would have mattered to ATT, as the breach was from a third party that wouldn't have been on a private network anyway.

But, that would be a great service bonus -- only being able to connect to a service via a user-configurable private overlay network. It would be nice, but highly impractical... I can't even begin thinking about how customer support would be able to handle a scheme like this.


Companies worked that way for decades. Everything was on the corporate network which was only accessible in an office or via VPN.


Sorry, I was trying to refer to creating overlay VPN networks with vendors. So in the AT&T case, it would mean their DB vendor (I’m assuming) creating separate VPN networks for each of their customers to connect through (in addition to username/password credentials). The logistics of managing separate VPNs for each customer, for each user account, etc. seem overwhelming.

For more traditional single-entity networks, you’re right. But with more and more BYOD, those networks are at a higher risk than they used to be. That’s the reason for the shift… VPN tech is still sound, but it requires that you trust the devices that are connected to it.

If you’re now also trying to trust devices from your company and your customers, that’s harder to wrap my head around.


An attacker who gets username/pw still can't get on the overlay network (the overlay requires credentials which can't easily be stolen or compromised, e.g. a private key signed X.509 certificate).

Yes, because 99% of attacks use the underlay network to access the target and exfiltrate the data. Said the other way, an attacker didn't physically walk into a Snowflake data center, console into the right server, and walk out with all the data.
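For illustration, here is a minimal Python sketch of that kind of certificate-gated endpoint (mutual TLS, where the "credential which can't easily be stolen" is a client cert signed by the overlay's CA). The file paths and CA arrangement are assumptions for the example, not anything from AT&T or Snowflake:

```python
import ssl

def overlay_server_context(cert_file: str, key_file: str, overlay_ca: str) -> ssl.SSLContext:
    """TLS context that rejects any client lacking a cert signed by the overlay CA."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(cert_file, key_file)  # the server's own identity
    ctx.load_verify_locations(overlay_ca)     # trust only the overlay's CA
    ctx.verify_mode = ssl.CERT_REQUIRED       # mutual TLS: a client cert is mandatory
    return ctx
```

A stolen username/password is useless against a listener wrapped in a context like this; the handshake fails before any application auth runs.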


That sounds more like the lack of certificate-based authentication (or some other stronger authentication method) was the problem, not the lack of a private overlay network.

After all, plenty of private overlay networks use simple username/password auth or no auth at all.


Agree, good point, the overlay needs to do strong identity, authN, authZ.

The critical part the overlay adds to traditional auth is making the server unreachable from the underlay networks, reducing attack surface by billions. Meaning:

+ Let's say the server did have good auth, but there was a bug, misconfig, zero day, etc. (one of the myriads of proximate causes).

+ Since the server is available on the underlay network, that vulnerability can be exploited by anyone on the underlay (billions Internet nodes).

+ In contrast, making the server only available on the overlay, reduces the attack surface from billions of Internet nodes to the nodes which can ID, authN and authZ (for that particular server) on the overlay.
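A tiny sketch of that reachability point in Python: bind the service to the overlay interface's address only, never 0.0.0.0, so underlay hosts can't even complete a TCP handshake regardless of any auth bug. The address choice is illustrative:

```python
import socket

def overlay_only_listener(overlay_ip: str, port: int) -> socket.socket:
    """Listen only on the overlay interface's address, never on 0.0.0.0,
    so underlay (public Internet) interfaces can't even open a connection."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((overlay_ip, port))  # e.g. a WireGuard/Tailscale-style address, not 0.0.0.0
    srv.listen()
    return srv
```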


Software, not private circuits

If only AT&T had some kind of way for its computers to talk to one another without going over the public internet…


What? Has anyone published an RCA that confirms this? Is this how the data was exfiltrated from Snowflake? Or did AT&T’s Snowflake credentials leak?


Freeze your credit, people! It's super easy. It's not a perfect fix, but it's so trivial to do and it will help.

https://www.usa.gov/credit-freeze

You can unfreeze through an app whenever you want/need to.


Is there any reason not to keep credit frozen permanently, only unfreezing it when you're making a large purchase that requires it?


This is how I have operated ever since the Equifax breach. Once that happened, none of the others seemed to matter, everything important for identity theft is out there.

I've had no problems. Someone will try to run my credit, it will fail, then I ask which one they're trying to use, and I unfreeze it for a day. Some of them have the option to unfreeze for a single pull with a 1 time code (if I remember correctly), but when I tried to use that the person trying to pull the report seemed clueless, so I had to do the 1 day unfreeze.


Credit is a weird ad-hoc system.

At some point, I wonder if folks will realize that having an unfrozen credit report is a sign of imprudence.


Unfortunately it isn’t an option in every country. In the U.S., you can freeze your credit for free, but in the UK, you can’t. I think we should get rid of the CRAs entirely, but that’s a conversation for another day.


One interesting thing I ran into with frozen credit, is that you cannot sign up for USPS informed delivery without them running your credit as a method of address verification IIRC. If it is frozen the process gets stuck in limbo (at least it did many years ago when I ran into this situation)


This is no longer the case. I signed up for Informed Delivery last year with frozen credit with no issues.


I open credit cards for the bonuses frequently enough that freezing my credit would be more inconvenience than it’s worth.

Also, all the big bank websites seem to offer real time credit history monitoring for free, so I am betting I’ll just deal with any problem if/when they happen.


Keeping your credit frozen permanently is a great idea. Some of the credit agencies even encourage this with features such as a temporary unfreeze of your credit for a few days/weeks and then back to the permanently frozen state.


That's what I do. It also slows my roll. It's an extra step I have to take before making that large purchase or applying for anything that requires a credit check.


It's an extra step, but a surprisingly simple one. When I opened a checking account recently the bank told me which credit agency they'd use, and I unfroze that account and ChexSystems (another credit agency you should freeze with that is used specifically for new bank accounts) in five minutes using their automated systems. You can supply a re-freeze date when unfreezing as well so you don't need to remember to do that manually once you're approved.


Yep. This is what I did after the first Experian data breach, for peace of mind. I am probably financially lucky enough that I don't need to constantly be checking or using my credit... but honestly it seems like this is what everyone needs to be doing.


As someone else mentioned, some authentication schemes require your credit to be unfrozen. This can include insurance companies (really any company that needs to verify your identity)


That’s what I do. But it’s a little bit of a pain to unfreeze your credit with three bureaus when you want a new credit card. Wish there was a way to do this in one place.


After the first time unfreezing, I put the website URL, unlock pins, and concise instructions for all 3 as a single note in my password vault.

Doing all 3 takes ~5minutes now - which can usually happen in parallel with whatever paperwork the vendor needs to get in order.


It’s a great idea! I only unfreeze my credit for big purchases like buying a house or car.


I don’t think credit freezing matters too much in this case because the leak wasn’t tied to SSNs, names, etc. that would be used for identity theft. It was phone call and location data: much worse for privacy, but less useful for financial fraud.


It sadly does matter for anyone who applied to work at Advance Auto Parts, though. Their SSNs and the like are out there; the company's main database was hit.


I typically don’t “freeze” my credit but do have a handful of services actively monitoring my credit for free (have been involved with many data breaches) and it’s included with my credit cards.

> A credit freeze restricts access to your credit report

So if I freeze my credit, this will also deny access to the monitoring services AND financial institutions, right?

Side note: financial institutions often do “soft” credit pulls on active account holders to determine if they are eligible for credit limit increases. Have been growing my existing credit line for some time now without having to obtain additional credit cards. So far, close to $500K in unsecured credit.

Seems more like a nuclear option.


You can also freeze your non-credit banking:

https://www.chexsystems.com/security-freeze/place-freeze

It was recommended that I do this after a checking account was opened using my identity.

As others have stated, my default is "frozen." I put temporary thaws on when applying for credit, though in some cases, you'll be informed exactly which agency/agencies will be queried, and may not need to unfreeze all of them.


This is a great tip as most people only know of the big 3, thanks for sharing


What app or website do you use? Seems like you have to sign up for all three websites? Equifax, Experian, TransUnion?


I keep my credit frozen all the time, but still keep getting alerts about new "no credit check" bank accounts from companies like chime.com. Then I give them my PII again just to verify and close those accounts, even though I don't have any business with them.


While this is good advice, it's important to remember that we shouldn't have to do this.

Credit companies take our data, without consent or compensation, then turn around and charge you if you want to prevent abuse of that collection. It's a racket.


Fuck that. I'm gonna open a bunch of credit cards, buy a bunch of cool shit, and when they ask me to pay my bill, just say my identity was stolen.

If I have to fight the credit bureaus anyway, I might as well get something out of it. Stealing my own identity seems pretty straightforward.


I was unable to get any of the three to verify my identity last I did this, and one of the three has never once in my 15 years of trying to get my free credit report let me actually get it.


I think you can go the paper route and mail something in to freeze


I couldn't find a reference to an app on the linked page, could you share more details on the app you use?


This is huge; also AT&T knew on Apr 19 but only disclosed now; ongoing fallout from the Snowflake compromise:

- Records downloaded from Snowflake cloud platform

- "AT&T will notify 110 million AT&T customers"

- Compromised data includes customer phone numbers ("for 77m customers"), metadata (but not actual content or timestamps of calls and messages), and location-related data. Not SSNs or DOBs. Mostly during a six-month period 5/1-10/31/2022, but more recent records from 1/2/2023 for a smaller but unspecified number of customers. TechCrunch [1] has more details, including Mandiant's response and the name and suspected location of the cybercriminal group

[1]: https://techcrunch.com/2024/07/12/att-phone-records-stolen-d...

I wonder if Congress manages to summon TikTok-like levels of anger on regulating this one.


"AT&T reveals it has records of cellular customers calls and texts"

These records should have been deleted at the latest at the point where they're no longer relevant for billing. (Which also means that for customers with unlimited calling/texting, there shouldn't be any records in the first place.)


They keep all records for 7 years because the US Federal Government asked them to, not because they legally have to, but same with T-Mobile and Verizon: https://www.vice.com/en/article/m7vqkv/how-fbi-gets-phone-da...


Wasn't there some telco executive that was tossed in jail not long after 9/11 because he didn't want to play along with the government and keep data around forever?


https://en.wikipedia.org/wiki/Joseph_Nacchio

> Joseph P. Nacchio was the only head of a communications company to demand a court order, or approval under the Foreign Intelligence Surveillance Act, in order to turn over communications records to the NSA.[11]


AT&T is well known for working with NSA — 33 Thomas St [1]

[1] https://theintercept.com/2016/11/16/the-nsas-spy-hub-in-new-...


That doesn't excuse this. If these records only existed so they could give them to the NSA at a later time, that further illustrates the dangers of accommodating the agency's desire for access to data generated from the U.S. Telecom backbone.


If they are obligated to give the data to the NSA, they should give it to them in real time and then delete their own logs as soon as they no longer need them.


It does explain it though. By coincidence they also get billions of dollars in federal subsidies


So do other ISPs. Yet AT&T is by far the worst of all of them with regards to customer privacy.

Did you know that AT&T has a commercial product where they sell Metadata of websites visited (unclear if it's only Netflow or if it includes DNS lookups too) to law enforcement and private investigators?

AT&T is a blight on the privacy of U.S. citizens.


> Did you know that AT&T has a commercial product where they sell Metadata of websites visited (unclear if it's only Netflow or if it includes DNS lookups too) to law enforcement

Do you think that only AT&T does it ? Welcome to democracy, my friend. /s


For their landline customers? I'm not aware of any other ISP that's so shamelessly brazen about the practice.


I wish that were the world we live in.

This is from the Snowflake breach, meaning this database was an "AI Powered Unified Data Platform." It almost feels like the erosion of our privacy is fueling the growth of a lot of companies.

I really hope that the boogeyman is real and all this was worth it.


I believe this practice was followed only in postwar France, and I think even there has long been jettisoned. It's been a while since I got a French phone bill though.


Why would AT&T even need to keep this data?

All I can think of is billing for the fraction of plans from the early 2000s that still pay per minute/per text. Or maybe for capacity metrics, but even then you only need the overall data point, not the actual records once collated.

What's the US law on keeping data only as long as it's relevant and needed?


Text metadata is an important distinction.

Still not good, but the headline feels like clickbait if it makes people think their text messages leaked.


That's still pretty gnarly in terms of social graphing though.
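A minimal sketch of what that social graphing looks like, with made-up numbers standing in for the leaked records (caller/callee pairs only, no content needed):

```python
from collections import Counter, defaultdict

# Hypothetical leaked metadata rows: (caller, callee). No message content,
# yet the graph structure alone is revealing.
records = [
    ("555-0101", "555-0202"),
    ("555-0101", "555-0202"),
    ("555-0101", "555-0303"),
    ("555-0404", "555-0202"),
]

# Edge weights = contact frequency; neighbors = who knows whom.
edges = Counter(records)
neighbors = defaultdict(set)
for a, b in records:
    neighbors[a].add(b)
    neighbors[b].add(a)

# The heaviest edge is a strong guess at a close relationship.
closest = edges.most_common(1)[0]
print(closest)  # (('555-0101', '555-0202'), 2)
```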


Is this leak why the spam text messages have gone from "Hi how is your day?" or "Hi [not my name] please do thing X. If you're not [not my name] I'm so sorry, perhaps we can be friends." to "Hi is this [my full name]?" or "Hello [my first name] how is your day?"


Any leak with your mobile and name pair could have done that. As a non-AT&T customer, I get the my-specific-name pig-butchering texts, too.


True. They’re brand new to me though. I’ve been getting the former for years, the latter for only weeks.


Events like these will only become more prevalent as more personal, corporate and other information is digitized and stored by organizations too busy with other things to 100% button down their data (possibly an impossible thing anyhow), or simply too inept (a very common thing). There is a possible good side to it though, that it makes everyone, not just a few lone souls, much more conscious about privacy and rampant personal data collection, perhaps enough for a sea change in habits in the corporate and consumer worlds.


Why don't organizations hide their servers behind data diodes? Store everything in an air gapped network with strictly defined interfaces.

I've been wondering this since the Office of Personnel breach[1] back in 2015.

[1] https://en.m.wikipedia.org/wiki/Office_of_Personnel_Manageme...
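For what it's worth, even a software approximation of a data diode is conceptually simple: a process that can only push records into the protected network and never reads anything back. A toy sketch (address, port, and function name are made up; real diodes enforce one-way flow in hardware):

```python
import socket

# A toy software "data diode": records can be pushed into the protected
# network over UDP, but this process never reads from the socket, so
# nothing can be pulled back out through this path.
def push_record(record: bytes, host: str = "127.0.0.1", port: int = 9999) -> int:
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # Send-only, fire and forget; returns the number of bytes sent.
        return s.sendto(record, (host, port))
    finally:
        s.close()

print(push_record(b"call-record-0001"))  # 16
```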


And corporations like AT&T are themselves immune to having their own identities stolen (my notes: https://win-vector.com/2024/07/12/yet-another-way-corporatio... ). Corporate EINs (the US corporate equivalent of Social Security numbers) are public. Knowing one doesn't let you commit identity theft or credit card fraud against a corporation (unlike the case for people).


I find it interesting that in your typical BigCo breach, they are at pains to point out that credit card details were not stolen. I infer from this that something about credit cards, and how they are secured, has real teeth and BigCo's lawyers are trying to stop them biting. Is this PCI-DSS? Maybe someone can comment.

As far as this breach goes, I think it just confirms my gut feel that Snowflake are heading to the wood chipper.


I think it's a desperate attempt to downplay the severity in any way plausible, taking advantage of the fact that credit card numbers and social security numbers have been mythologized in the American consciousness as nearly-mystical totems of identity and security, as part of the "identity theft" meme, even though they play little role in actual information security or privacy.


At the scale of this kind of incompetent failure, no human being should be on board with the narrative that we should be blaming "criminals" for this

If we don't hold companies accountable for keeping far more access and retention than should be legal, and securing their systems poorly, this situation will never get better


Who is the "we" here? And how should companies be held accountable?

It's very rare for someone at the highest level to be held to any kind of liability, and fines rarely, if ever, materially impact these too-big-to-fail corporations.

Strictly speaking about the US here.


> fines rarely, if ever, materially impact these too-big-to-fail corporations.

That means the fines aren’t big enough. They should probably be scaled according to the business’ revenue.


From a justice perspective, it should be scaled according to the number of customers impacted (and how bad the impact was). Which is likely to be about the same as scaling with revenue.


Justice isn't served if the impact of the penalty doesn't force change. If a company can harm millions of people but the financial damages we can assign to that are lower than the cost savings of the decisions that caused the problem at the scale of a large business, the business only has the logic of finance to care about, and that logic almost always says "wellp that was still the right call"

If our only tool is fines, we must scale those fines not by some monetary definition of the harm, but by what will make the necessary impact on the decisionmakers involved.

I think we should use tools other than fines, like criminal conspiracy liability for controlling shareholders and executives, and the threat of dissolution of businesses to pay out to the victims, but if it's fines or bust, the marginal value of dollars is just on a different scale for these businesses and we should grow the fines accordingly


Needs to be at the level of enforcement by regulatory agencies, large scale lawsuits backed by state governments, and maybe even congressional action

These companies have scale as their moat and that's called a monopoly. We need to be aggressively pursuing corporate malfeasance, closing loopholes, and breaking up companies. In my ideal world the entire doctrine of the "corporate veil" would be overturned, but that seems unlikely to happen without drastic upheaval. Antitrust action and large-scale suits can happen and to some degree those wheels are already in motion, but it would help a lot to stop buying this bullshit about how we should think of this as a "crime" for which we should uniquely blame hackers. These megacorps want to pretend that they and their customers are in solidarity as victims of the hackers. In reality, these companies get hit with essentially none of the consequences, and their practices are most of the relevant causal factors. A better model would be that the customers (and often non-customers on whom they collect data without even the figleaf of manufactured consent) are victims of the companies and the hackers


These companies are so massively large that they price in the risk of databreaches as a cost of doing business.

Insurance underwriters pore over corpo infosec documents, and require only the most basic level of protections.

I think instead, a stricter certification standard needs to be created, and all these large companies must pass ANNUAL audits, or simply lose access to government leased spectrum.


It seems that we agree that regulatory enforcement is a great framework through which to make this happen. I think we should regulate both security and data retention far more aggressively, and be willing to destroy companies if they fail to comply. The lack of an existential risk makes it easier for them to maneuver around other solutions


> These companies are so massively large that they price in the risk of databreaches as a cost of doing business.

Just make the fine a % of the annual revenue and that will change.


Unfortunate as it is, nobody genuinely cares about:

1. Preventing data breaches

2. Properly anonymizing aggregated personally identifiable data

3. Having and using a secure ID and verification system


I am seeing this mentality as well, and it's disheartening. My company manufactures and sells a privacy-first, fully autonomous, on-prem, video security system for home and SMB. Yet, some people choose a cloud based service (convenient) and are surprised when their private data is either a) hacked, or b) abused by the provider's own employees (see the latest Amazon Ring settlement).

With the latest scandals and breaches though, I feel it's gradually starting to change.


They don't care because they don't know how the systems they use daily work, much less the costs and risks involved.

If they knew, they would care, and that's why representatives care on their behalf.

You could say the same about health and nutrition, but people very much do care when a medical issue tangibly affects them negatively.


I would like to sue AT&T in small claims for this and for leaking my Social Security number. But it's difficult to prove damages in these situations.

Does anybody have any advice? Proving damages means showing actual monetary harm.


You likely cannot file in small claims and would need to pursue arbitration instead.

> Please read this Agreement carefully. It requires you and AT&T to resolve disputes through arbitration on an individual basis rather than jury trials or class actions.

https://www.att.com/legal/terms.consumerServiceAgreement.htm...


- AT&T will usually pay all of the arbitration fees (with some exceptions).

That could get pretty expensive for them quickly.


And look for Arbitration clause in your contract. Might limit your options.


I was not a customer with AT&T when they leaked my Social Security number.


IANAL but this would seem like a “class action” situation.


At&t customers are bound to individual arbitration so there will be no class action lawsuit for this.

> Please read this Agreement carefully. It requires you and AT&T to resolve disputes through arbitration on an individual basis rather than jury trials or class actions.

https://www.att.com/legal/terms.consumerServiceAgreement.htm...


I was not a customer of AT&T when the leak happened.


IANAL either, but if I recall correctly, you can decline to be represented in the class and file your own lawsuit


Very difficult to run these days. Since 2018, federal courts have ground away many of the legal routes needed to run a successful class action suit against a national or multinational corporation.


can't wait to get that check in the mail for $1.32


I got a check in the mail last week for 12¢ from Google hoovering up my data. Yes, that's twelve cents!

Google certainly made more off of my data than that.


costs more to mail a letter


True but personally I also wouldn’t want to go through the time and expense to sue them solo. At least in a class action the company faces some penalty that’s possibly meaningful to them (even if it’s not meaningful to most of the claimants).


Why is it "nearly all"? Which customers didn't have their data stolen, and why were they magically left out? It's obvious the data thieves had complete run of the system, so what query did they run to get only "nearly all"?


These are all security nightmares, aren't they? It smells as if all the resources went into delivering billing, then barely enough into a technically working service, leaving nothing for security (instead of security being part of the foundation of the service).


Something happens when you tune your business only to the things you can measure.

I still (or at least try to still) have this naive opinion that if you make a good product, the money will come.

We sometimes spend too much time counting the beans and not enough time growing them. Not saying you don't need to count the beans, you do, but when your whole team is counting, they may forget to water them.

Also - to be on topic - don't forget to protect the beans!


It's disgusting that we still write headlines as "hackers steal" rather than "enormous company fumbles security for data they should never have retained"


That is a good reframe.


How can I upvote this a million times?!


It's interesting when you have these old, large, sprawling bureaucratic organizations and the employees hardly give a sh!t anymore and allow for these large vulnerabilities. It's not a money issue, it's a caring issue I think.


Our economic system is at odds with security because we're trying to "get by" as cheap as possible. That doesn't bode well for protection of users' data.


During the last decade, ATT’s leaders decided to burn tens of billions of dollars by overpaying for obviated businesses like DirecTV and Time Warner.

I can only imagine the quality of mobile and fiber networking we could have had if that money was spent on telecommunications. And maybe they would have spent a few million on having proper security.


Not only that they blew $8 billion/year on dividends that could've gone into the business or to employees instead of being extracted and given to people who have nothing to do with the business.


When people invest in a business, whether it be your sibling’s business, or a local business, or a publicly traded business, they do it because they expect a return on investment.

An infrastructure utility such as ATT typically has to offer dividends because it is not going to experience the type of growth that would result in a return via share price increase.

Of course, ATT’s prices are not regulated like a proper utility, even though they should be, but it is still subject to the same market forces that prevent it from growing like a tech company would, who would have the option of foregoing dividends (or share buybacks).


Tangential, why did you/anybody spell "shit" like they are evading Tiktok language filters?


Unbelievable that they do not enforce 2FA for a client that huge. Absolute madness!


What's odd is until August of last year I worked for AT&T and had to do 2FA for accessing almost every internal site I used, and that extended to most SSO integrated external sites - including the relatively small number of Snowflake instances I worked with.

I do know that not every employee designation required universal 2FA but more or less all IT/ATO staff did.


And, honestly, how is this info (which I WOULD want to know) meaningfully actionable to customers? We get our information stolen from a myriad of sources every day. These companies do comparatively nothing to make things right, and the burden falls on customers to pick up the pieces if you're in a tranche that is sold and used.


Of course it's not meaningfully actionable to customers, especially given the big lag in disclosure since Apr 19. (Why does this not fall under SOX, with the obligation to report to affected parties in a timely manner? It moved AT&T's stock price -3% in early trading, so should it have also required SEC disclosure?)

Wondering what is the significance that most of the stolen records were from the period 5/1-10/31/2022? Does it mean that AT&T enabled 2FA on more recent records, or that more recent records were on a different cloud bucket (or that they mostly stopped using Snowflake since)?


Because AT&T reported it to the FBI and DOJ, they in turn requested AT&T to not disclose it and there are exceptions in the SEC rules for exactly that scenario of actively working with law enforcement.

Regarding 2FA, it probably means they just enabled it in their access rules for any access to snowflake, but it's highly unlikely AT&T will walk away from Snowflake anytime soon because it had become their preferred BI/Data Analytics platform and they were actively migrating several hundred TBs of data out of Hadoop to Snowflake.


In the email I got from AT&T regarding this data breach was: "Protecting customer data is a top priority. We have confirmed the affected system has been secured. We hold ourselves to high privacy standards and are always looking for ways to improve our security practices."

Well, now I feel better. 8^)


wow, a spy agency acquired the entire social network graph of the usa in one intrusion. that's bad news for civil defense; it means they have a good guess at who is the favorite relative of each legislator, governor, police chief, or general. and where they can habitually be found at each hour of the week, since this leak included location data!

how can we keep such accumulations of sensitive data from arising in the first place? only countries that figure it out are likely to survive the turbulent coming decades


How do you know it was a spy agency? Sounded like just a hacker group. I assume 5 eyes are the only ones who have this already anyway as a matter of course. All they have to do is buy it from AT&T, no hacking necessary.


it seems unlikely that it was just for the lulz. if the intruders are auctioning off the data, do you think the russian fsb, the ministry of state security, hizbullah, mossad, or the usdoj will bid highest?

(the last, hypothetically, to destroy the data rather than use it for leverage in investigations—if not, it's in effect just another spy agency)


Would they destroy only the hacked stuff? All the good info is still with the company... they can be hacked again.


sadly, they will


"While the data does not include customer names, there are often ways, using publicly available online tools, to find the name associated with a specific telephone number"

In other words, your phone number and name is likely in a public record somewhere. It's not that private.

The info leak should not have happened but in the grand scheme of things it's not that big a deal. "The content of the calls and messages was not compromised." The worst it does is reveal who has been sending messages to or calling each other.


That metadata can be terrible for many people: politicians, those having affairs, drug dealers or buyers, those with sensitive healthcare providers, and so on.


This. If you're in an abusive relationship and your abuser sees that you're calling a lawyer, a helpline, a family member etc, bad things can happen quite quickly. This information is non-public for a reason, and you don't have to be a drug dealer to be protected by it either.


Yeah it's not good. But would be worse if the actual contents of the messages had been leaked.

That said the few abusive people I know are not smart enough to find data dumps of AT&T call records on the dark web. Nor could they pay for them. Nor could they likely make sense of them. But I'm sure some could.


AT&T bought into a significant amount of DirecTV - so much so that everything that had the DirecTV logo on it was changed to the AT&T logo, such as the invoicing. So the AT&T customer base has included, for several years, the Directv customer base. The article doesn't attempt to clarify who the 'nearly all' customers are, and some people will jump to the conclusion that it is the cell phone customers. But it could include the DirecTV customer base whose data is also at risk.


AT&T didn't just buy into a significant amount of DirecTV, they owned DirecTV. As in, 100% ownership. So yes, all DirecTV customers were AT&T customers, because AT&T and DirecTV were not separate entities. It wasn't until 2021 that DirecTV was spun off into a separate company again, but still with 70% ownership by AT&T.


AT&T does a lot more than just cell phones. Probably the second-largest US ISP after Comcast, I'd expect. I had AT&T fiber to the home at a previous residence, and that was a great product. Far superior to Comcast.


It's one more reason to use an end to end encrypted messaging app like iMessage or Telegram. Even WhatsApp is end to end encrypted. Don't use SMS/RCS.


unless I'm misunderstanding, the same data could be pulled from those services.

the message content wasn't leaked here


You would effectively be able to cross-reference this metadata with two-factor authentication services. It's probably time to start removing this option entirely.


How would cross-referencing be useful? You’d just find out what services people use?


If GitHub always uses the same number(s) for 2fa and there are outgoing texts to your number then the connection is obvious. I’ve read that sim jacking is somewhat common and this would be a good data point.


So, just for discovering what services people use?


I guess after mapping the services used you would find the accounts worth going for and those become SIM swap targets


Seems like there's a lot of cross referencing well beyond MFA that this'll likely be used for.

Way easier to target phish people's bank logins, if you know what banks they are regularly communicating with.
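As a sketch of how cheap that cross-referencing is: given a lookup table of known service sender numbers, the leaked rows turn directly into a phishing target list. (All shortcodes, numbers, and names below are invented for illustration.)

```python
# Hypothetical: known SMS shortcodes for services (illustrative, not real).
SERVICE_SHORTCODES = {
    "262966": "ExampleBank",
    "498723": "ExampleExchange",
}

# Leaked metadata rows: (customer_number, counterpart_number).
rows = [
    ("555-0101", "262966"),
    ("555-0101", "498723"),
    ("555-0202", "262966"),
]

# Cross-reference: which customers talk to which services.
targets = {}
for customer, counterpart in rows:
    svc = SERVICE_SHORTCODES.get(counterpart)
    if svc:
        targets.setdefault(customer, set()).add(svc)

print(targets)  # each customer mapped to the services they likely use
```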


Interesting that they use the word criminals instead of hackers.. makes it sound like it was a physical heist rather than poor security practices on their part :)


They are criminals.


Another article[1] cites AT&T's Snowflake deployment as the source of the breach:

> It’s not clear for what reason AT&T was storing customer data in Snowflake, and the spokesperson would not say.

[1] https://techcrunch.com/2024/07/12/att-phone-records-stolen-d...


The headline could equally say "AT&T kept data for criminals to steal".

If wiretapping laws didn't exist then most of this data would not be justified to exist. Flat-rate billing doesn't need to keep track of this information. Even usage-based plans could keep cumulative records rather than individual ones, or at least delete them at the end of a billing period.

Where there is a trough, pigs gather.


The data can be used for traffic analysis (number->number call data); "no PII" except it's pretty easy to match a number to a likely user.

I'm an AT&T customer, and in my case I don't have a risk, but I can imagine this info could be very handy for divorce, custody, and corporate IP lawsuits. So worse than it might look to ordinary folks.


>> AT&T said it learned of the data breach on April 19, and that it was unrelated to its earlier security incident in March.

Why was this not disclosed on AT&T’s earnings call on April 24? At least someone will get compensated for the breach, although it’ll be the lawyers for the class action lawsuit that’s about to hit instead of the customers that got their information stolen.


Including all location metadata associated to that?


The reports said celltower-level location data was associated with calls and texts (but not timestamps). That would allow inferring homes, job locations, commutes, family members, and the social graph.


You can still recover a lot of that without timestamps. It also looks like your data is in there too if you ever interacted with an AT&T customer or used an MVNO.
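A toy illustration of why timestamps aren't strictly needed (towers and numbers below are invented): the modal cell tower for a number is usually near home or work, since that's where most calls happen.

```python
from collections import Counter

# Hypothetical leaked rows: (phone_number, cell_tower_id) per call/text.
rows = [
    ("555-0101", "tower_A"),
    ("555-0101", "tower_A"),
    ("555-0101", "tower_B"),
    ("555-0202", "tower_C"),
]

# Count tower sightings per number; the most frequent tower is a decent
# guess at a home/work anchor point, no timestamps required.
towers_by_number = {}
for number, tower in rows:
    towers_by_number.setdefault(number, Counter())[tower] += 1

likely_anchor = {n: c.most_common(1)[0][0] for n, c in towers_by_number.items()}
print(likely_anchor)  # {'555-0101': 'tower_A', '555-0202': 'tower_C'}
```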


It even said land lines had their numbers in the data if an ATT customer contacted one.

Edit: I must have read that from a different article than the TFA though.


Yeah, all att customers, 2nd party participants and any other user of their network. Not just direct customers.


Big breaches like this are gonna be wild with advanced GenAI. Combing through the shit for the diamonds provided some degree of limitation on the impact of big breaches in the past but all those calls are going to be accurately transcribed and mined by AI and the attackers are going to have a buffet of products and targets laid at their feet.


It's just metadata, no transcription of calls can take place. In the future, please read the article before engaging in the discussion of its content.


Metadata can be identifying enough. For example, given someone has this data and some local LLaMA variant on their machine, they could theoretically run a query like: "Give me all of the people that $NAME has called, sorted by the number of times they called each other"
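You don't even need an LLM for that; against metadata it's a one-line GROUP BY. A sketch with an in-memory table (schema and names are hypothetical):

```python
import sqlite3

# Hypothetical schema mirroring the leaked metadata: caller, callee, no content.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE calls (caller TEXT, callee TEXT)")
con.executemany(
    "INSERT INTO calls VALUES (?, ?)",
    [("alice", "bob"), ("alice", "bob"), ("alice", "carol"), ("dave", "bob")],
)

# "Everyone $NAME called, sorted by how often" is a single aggregate query.
rows = con.execute(
    "SELECT callee, COUNT(*) AS n FROM calls "
    "WHERE caller = ? GROUP BY callee ORDER BY n DESC",
    ("alice",),
).fetchall()
print(rows)  # [('bob', 2), ('carol', 1)]
```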


> It's just metadata

That's what they always say, honey, before calling the police. /s


My first question is: why was the data being stored by a third party in the first place?

Shouldn't data like this be stored completely independently of the Internet? Yes, I realize that doesn't guarantee it is secure, since there has to be some point of access. On the other hand, it would reduce opportunities for people to breach the databases.


Because they don't care about actual information security, they care about "national security." They optimize for giving all branches of US law enforcement, from the federal to state to local level, access to 7 years of historical data whenever they claim they need it.


I don't buy into that theory, at least in this case. There are other ways to hand off data when it is legally requested. On the other hand, such data would be valuable to foreign actors who do not have a legal means of accessing it. It would require a high degree of incompetence to sacrifice national security in the name of convenience.


I might be a lone wolf here, but I kind of feel pity for AT&T; I don't know why they're solely getting all the loathing here. The actual incident occurred on a public cloud provider that had not provided secure tooling and practices to its customer. So here the customer is getting blamed for buying a service, despite the cloud provider's lack of best practices.


I am an AT&T user on a Pixel, which is generally good at filtering spam messages. I've noticed I was getting so many spam messages recently ("wanna make money working remotely for x hours a day only") that I was surprised and thought my number had somehow made it onto one of those spam networks. This confirms my suspicions.


WHY IS THIS DATA EVEN AVAILABLE TO BE DOWNLOADED??? Why do we not have protection in place so that hackers can't even download this data even if they wanted to?? What purpose does 2 year old data serve AT&T except to monitor us and to create social networks of people and associations?


Er....exactly.


> The company said the hack wouldn’t be material to its operations or negatively impact its financial results.

And this is why consumers will continue to see their information compromised by companies who collect and retain more data than they need and then fail to invest the time and resources to protect it.


Would be great if some of the smart people here could help explain why this is such a big deal to my less tech savvy friends. I know that I don’t know how the data broker to dark web hacker pipeline works, I just know security is important. But my family is like “big deal”.


Snowflake might want to take this page down in light of today's news.

https://www.snowflake.com/en/customers/all-customers/case-st...


When are we going to see the technical report of what happened? Since this data has a specific time frame, it makes sense to me that a backup was stolen. But, we'll see.

My guess is that the tech leaders at AT&T are going to have sore wrists for a few minutes because of this.


This is a political problem. Until we pass laws under which companies can be found liable for significant damages in the event of data breaches, we will see little progress on data security. This is an area where Congress needs to act. Current law does not adequately protect the public due to the difficulty of establishing standing, of tying specific breaches to specific personal damages, and other reasons.

Such a law would seriously impact current practices of the majority of IT firms, including small app developers, which is why we see little push from silicon valley for such changes.


I read an article in wapo that said you can use this URL to see what data was exposed: https://www.att.com/event/lander


AT&T - too big to jail, worst UX, worst service, and worst customer service ever. Until CEOs end up in prison, nothing will change and there will be no consequences. It will never happen because money has more votes than citizens.


> Snowflake blamed the data thefts on its customers for not using multi-factor authentication to secure their Snowflake accounts, a security feature that the cloud data giant did not enforce or require its customers to use.

And is that going to change?


This is a diversion. Why did they build a system that permitted a bulk database dump of hundreds of millions of rows even with 2FA?


> Why did they build a system that permitted a bulk database dump of hundreds of millions of rows

Should all databases be capped at a few million rows total or something? I don't quite understand where you're going with this.


Because that’s what a data warehouse is? You’d think they’d guard them more, though.



So, what is the actual threat from this? That someone now has my phone number (already public) and knows that I have called or texted with some other numbers? What is the risk in that? It’s not clear.


Well for one thing they can start figuring out who is not yet registered on Signal but would likely be in the phone contacts list of a rich person's number that they know. Social engineering attacks succeed with less.


They even got the data of former customers, including people who left 10 years ago. That should be illegal. Your personal data should be deleted once you are no longer in business together.


> AT&T blamed an “illegal download” on a third-party cloud platform

WTF does this even mean?

The cloud employees downloaded it? If it's so sensitive, why wouldn't this be heavily e2e encrypted?


This is related to the snowflake breach. Snowflake is blaming customers for not enabling MFA.


Looks like more than enough blame to go around. Not enabling MFA is pretty egregious by AT&T. Snowflake created a platform where such a high-consequence mistake is apparently easy to make, obviously without sufficient compensating controls to detect or limit the impact of such a single point of failure. That's egregious too.


In the EU, this would have been a huge scandal. It would involve huge fines, and the company would really try its best not to be so sloppy with data protection. But they are not in the EU.


It would be interesting if any ex-customers living in the EU are affected - they may be covered by GDPR (though unlikely).


Be nice to have a new federal law: you get breached, you pay $5K plus lifetime credit monitoring to each person involved. Non-dischargeable by bankruptcy. No arbitration, no lawsuit. You pay.


Interesting idea, though I think that having it be $5K (or any fixed amount) no matter the size of the company favors large companies, since large companies can probably spend more to reduce the risk of getting hacked. Hell, it might even incentivize large companies to fund hackers to breach their smaller rivals, in order to wipe out their competition.


So is this data fair game to be used by lawyers and cops in the US?

I guess maybe a cop would still need a warrant to use the data, but what about civil court cases?


They have a back door to the switches. They don’t need this.


Does any organization, anywhere, alarm when a port exceeds a couple dozen TB of data? If they can lock down every phone use to a GB/month…
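For illustration, the alarm being asked about is conceptually a few lines of code; the hard part is wiring it to real telemetry. A toy sketch, with invented tenant names and an arbitrary threshold:

```python
# Toy egress alarm: flag any tenant whose cumulative download volume
# crosses a threshold. Names and numbers are illustrative only.
THRESHOLD_TB = 10

def over_threshold(egress_bytes_by_tenant: dict) -> list:
    """Return tenants whose cumulative egress exceeds THRESHOLD_TB."""
    limit = THRESHOLD_TB * 10**12
    return sorted(t for t, b in egress_bytes_by_tenant.items() if b > limit)

print(over_threshold({"warehouse-prod": 10**14, "dev-sandbox": 10**9}))
# ['warehouse-prod']
```

Of course the real question isn't whether the check is writable, it's whether anyone is watching the alert queue.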


So, AT&T wasn’t using MFA?

A lot of information can be derived from analysis of call records. If this information becomes public, it could be disastrous.


> If this information becomes public, it could be disastrous.

Isn't it even worse if it doesn't become public? It's been downloaded by an unauthorized party after all, so if they're not publishing the data, I'd wager they've found another way to profit from it. I.e. blackmail or similar.

I guess it depends on your viewpoint whether that's better or worse.


About 6 years ago I was seriously wondering how Snowflake could move so fast while keeping customer data secure... welllllll.


I would say ATT ran afoul of a bunch of CA laws by putting this data on snowflake to begin with


Some new news in the article and comment:

- [security expert] "This [logs without timestamps] isn’t one of their main databases; it is metadata on who is contacting who. Its only real use is to know who is contacting whom and how many times."

- [commenter] "I have a theory that this call log was being used for a national security investigation. Otherwise why would this rise to the level of public safety/national security exemption?" [with two DOJ-approved 1-month delays for disclosure]

So, someone set up a separate Snowflake instance with mostly May-Oct 2022 AT&T data (90% former customers) apparently for that purpose. And left it up. Will anyone in Congress (e.g. Sen Ron Wyden) ask who did and why? (Another commenter on HN pointed out that Roe v Wade was overturned 6/2022, presumably that was not the intent of the original national-security investigation, but there's a potential for privacy abuse by the hackers' customers beyond everyday spam)

- In early 2023, Snowflake set up a unit especially for Telco data. But when you read the blurb (below), this product is not aimed at the telco's use-case; coincidentally this was also around the time Snowflake was touting integration with GenAI.

"Unlocking the Value of Telecom Data: Why It’s Time to Act" https://www.snowflake.com/blog/telecom-data-partnerships/

"Telecoms are the connecting tissue of the modern economy. They run everything... growing importance... hyperconnectivity.

What makes telecom service providers unique is that they have access to consumer location data. For most other industries, a consumer can go into their phone’s privacy settings and turn off the location access in the smartphone app. But in the world of telecom, as long as the phone is connected to a network, the telecom provider can use triangulation to find the approximate location of a consumer. This is why there is an emerging trend of companies [which ones?] building partnerships with telecoms to power use cases across multiple industries from competitor intelligence, alternate credit scoring, hyper-targeted marketing and more.

... Yet, despite the importance of telecommunications for society and in connecting industries, network operators are not yet fully embracing the value of the data they have at their fingertips"

But the value of this data (90% former customers) was clearly not to the telco itself... so who is the unnamed partnership and who is the end-customer? And was one of Snowflake's AI partners involved?


> Its only real use is to know who is contacting whom and how many times.

Which is exactly the type of info that would be used to find evidence of an affair.

Though this is specific to SMS so it would not include iMessage or other messaging apps.


Do organizations like Planned Parenthood offer SMS support?


Didn't Congress already rubber-stamp AT&T sending the NSA this data?


That's an enormous amount of data. How do you not notice a huge, network-hogging data flow?


> That's an enormous amount of data. How do you not notice a huge, network-hogging data flow?

No it isn't. Not even close to some of the larger data sets that Snowflake most likely manages.

We're talking about the public cloud. You don't "hog" AWS's network with a one-time download in numbers like what we're seeing from the article.

Let's be generous and estimate that there are 1k records for each customer. That's almost certainly an overestimation for the time period that TFA specified, but for the sake of argument let's run with it. There are about 100M customers. So that's only 100B records. Assuming each record is on the order of 1kB in size, again likely a huge overestimation, then that would be just 100TB. AWS would charge $7k to egress 100TB, which would be a rounding error in AT&T's cloud spend.

The real amount is most likely less than half of that, if not a quarter.
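For anyone who wants to check the arithmetic, here it is spelled out. All inputs are the generous overestimates above; the egress rate is a rough blended assumption, not a quoted AWS price:

```python
# Back-of-envelope size and egress cost for the stolen data set.
customers = 100_000_000
records_per_customer = 1_000        # generous overestimate
bytes_per_record = 1_000            # ~1 kB, also generous

total_bytes = customers * records_per_customer * bytes_per_record
total_tb = total_bytes / 10**12     # 100 TB
egress_usd_per_gb = 0.07            # rough blended cloud egress rate (assumed)
cost = total_bytes / 10**9 * egress_usd_per_gb

print(f"{total_tb:.0f} TB, ~${cost:,.0f}")  # 100 TB, ~$7,000
```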


No dates or timestamps included, meaning they were using the data to build a social graph.
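For the skeptics: even a timestamp-free dump of (number A, number B) pairs is a ready-made weighted social graph. A toy sketch, with invented numbers:

```python
from collections import Counter

# Hypothetical records mirroring the leaked metadata: who contacted
# whom, no timestamps, no content.
records = [
    ("555-0101", "555-0202"),
    ("555-0202", "555-0101"),
    ("555-0101", "555-0303"),
]

# Edge weight = how many times a pair interacted, direction ignored.
edges = Counter(frozenset(pair) for pair in records)
pair, weight = edges.most_common(1)[0]
print(sorted(pair), weight)  # ['555-0101', '555-0202'] 2
```

Run that over 100B records instead of three and you have everyone's strongest ties, which is exactly what traffic analysis is for.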


Feds crucified weev in 2010 when he notified AT&T of exposed user data


The only "criminals" is AT&T for leaving the doors wide open.


I look forward to receiving my 30 cents in settlement money in five years.


Or they just sold it / gave it to NSA and needed a cover story…


Haha on bro we tried our best but they asked nice on bro


Why did it take them over a year and a half to disclose this?


It didn't. The breach happened in April of this year. The data is from 2022.


Something, something, national security?


The DOJ approved two 1-month "delay periods", first in May, then in June, as part of the criminal investigation. We found that out earlier this morning, see earlier discussion.


when will governments hold these companies, but more importantly their executives, criminally liable for their lack of protecting customers' information?


When they will not buy data from them. /s


No mention of ITAR issues? In the comments?


The old-timers remember a term: “dark fiber”.

There’s going to be a lot of “dark compute” once we throw these lazy assholes out.

Speaking for myself, I’m thinking of what the economics look like when HBM is abundant.


Nice way to rule out who is a spy or not. Nice.


There's no way to make the software perfectly safe from hackers and from social engineering. So, yes, companies should be more careful with the data and, yes, the data shouldn't be kept forever. I agree companies should be doing more to protect the data.

I see lots of outrage at the companies and why isn't the government doing more to punish them and how do I get compensated ...

But, I feel like everyone is blaming the victim. Is it the homeowner's fault when someone breaks in and steals stuff?

Where's the outrage at the hackers breaking into these accounts? Where's the "why aren't the governments tracking these people down?" Why is no one demanding that the hackers be brought to justice?


The problem with analogies is that they're a leaky abstraction. You're comparing a single person with maybe a handful of employees to a giant, multinational corporation with corporate offices, hundreds of thousands of employees, enough real-estate to create a small country, and billions of dollars per year in revenue. It's a false equivalence to compare this to door kicking like it was some kind of petty theft.

They literally kept everyone's information in a machine that was connected to the internet and then didn't make any effort to treat that with the gravitas it deserves. They are not the victim here, we are. It's a little shameful that you don't see that.


> But, I feel like everyone is blaming the victim. Is it the home owners fault when someone breaks in and steals stuff?

> Where's the outrage at the hackers breaking into these accounts?

The internet is essentially every hooligan in the world about to kick in your door. So yes, I blame the homeowner.

It seems silly to me to condemn anonymous users of the internet.

Back in the days when nothing of importance was done on the internet, the view was way more healthy.

If you have sensitive data, don't expose it to the hooligans. Easy as that.


> There's no way to make the software perfectly safe from hackers and from social engineering.

This is a straw man argument. Companies should use best practices in order to prevent most intrusions. When they do not, as in this case, criticism is warranted.


“Brad Jones, chief information security officer at Snowflake, told CNN in a separate statement that the company has not found evidence this activity was “caused by a vulnerability, misconfiguration or breach of Snowflake’s platform.” Jones said this has been verified by investigations by third-party cybersecurity experts at Mandiant and CrowdStrike.

AT&T said it launched an investigation, hired cybersecurity experts and took steps to close the “illegal access point.””

That's pretty rich: “it wasn't misconfigured, it was just illegally open, and now we're closing it”.


This breach is of course appalling. But nearly as appalling is the experience of _explaining why this matters_ to non-technical friends who stare at you with blank, distracted eyes, but only for a second; for their phone (yes, the very phone that just exposed them to uncountable future ills) has chimed.

I have nearly given up; like smoking, it will be decades before the harms are understood. We have to wait for your neighbour's brother to have died in a targeted political killing, because someone didn't like his Substack and borrowed the number and likeness of a friend; for his daughter's credit score to have been crushed by an anti-abortioneer who borrowed her face and likeness and number, and knew her first-grade teacher; for his son to die a death of despair, after making the wrong friends, and getting doxxed along with the rest of them.

This should be a five-foot headline moment. But no; CNN will lead with Biden-mumbles or Trump-grumbles.

How is it that the things that are killing us --- inequality, climate change, privacy collapse -- all have this same shape? Hamlets, all of us.


I think humans are like corrupted, selfish and evil LLMs that like to think Utopia is possible. If you think about it that way, it's super easy to understand.


If AT&T has the power to sell said data to whichever 3rd party it wants, why should this bother me?


just put all information (names, addresses, ssn, DoB, etc) on a publicly visible blockchain already.

Then there is no data left to breach.

Instead develop systems to audit the usage of that blockchain and send to jail/military anyone who attempts to use that information in an unauthorized manner.


Airliner crashes would be as common as data breaches if regulators set the same expectations.


did they just enumerate an open web endpoint for it or something?


API based credentials are just username + password in this context, nothing else seems to be restricting access to data. So if your Snowflake tenant isn't enforcing IP restriction to limit source auth attempts, those creds can be used to pull the data from any source IP.

Even then, you'll still have an HTTP 403 response layer filtering those auth attempts based on IP... where we can assume these failed to implement it.

So far between TechCrunch, Wired, and other reporting it seems most claim creds get owned, sold, then used against under-restrictive Snowflake tenants which are exposed by default.

e.g. https://epa06486.snowflakecomputing.com/console/login#/ is someone's tenant; if you were able to go buy some creds for it, you'd walk right in.

[edit] I have a more detailed Snowflake comment with references that might fill in better gaps here; https://news.ycombinator.com/item?id=40554753
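For context, the per-request allowlist check that appears to have been missing is conceptually tiny. A sketch using Python's stdlib (the CIDR below is a documentation-only example range, not anyone's real network):

```python
import ipaddress

# Reject auth attempts from outside an approved CIDR before credentials
# are even checked - the layer the comment above says was absent.
ALLOWED = [ipaddress.ip_network("203.0.113.0/24")]  # example corporate range

def ip_permitted(source_ip: str) -> bool:
    """True if the source address falls inside any allowed network."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in ALLOWED)

print(ip_permitted("203.0.113.7"))   # True: inside the allowed range
print(ip_permitted("198.51.100.9"))  # False: stolen creds used from elsewhere
```

Snowflake exposes roughly this as network policies; the point is that it's opt-in, so stolen credentials work from any IP unless the tenant bothered.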


You can use OAuth or an RSA key pair for service account auth


The data was stored in a cloud data warehouse called Snowflake, which had a major breach recently.


Same hackers as Twilio :) no amount of security would have prevented this


so just metadata, not the actual texts or PII


“Just” is a dubious adjective in this context.


some of the reports make it sound like the hackers are reading everyone's salacious texts


In a world where it's illegal in some places to help someone cross state lines for healthcare, phone records don't have to include content to be dangerous.


your phone number is PII and everybody you ever called or texted is VERY VERY PII


Joining the dots on the facts so far, people don't seem to have grasped the apparent huge significance:

- guessing it was some GenAI startup looking into consumer tracking, alternate credit scoring, surveillance or other national-security use-case.

- Very unusually, the DOJ ordered two ~month-long "delay periods" in disclosure: ("The Justice Department determined on May 9 and again on June 5 that a delay in providing public disclosure was warranted"). Yet this didn't happen for Ticketmaster or MOVEit breaches revealed around the same time. "Cybersecurity delay period requests" is a new power quietly authorized by the DOJ+SEC+FBI, 18 Dec 2023 [0]. Note that [1] emphasizes this as "Corporate Alert - guidance for delay requests [on SEC 8-K]". Might Congress already have known/suspected, when it authorized the cybersecurity delay request powers, of the Snowflake/AT&T breach? Either way, whoever is involved seems to have very powerful friends. Also, the big FISA renewal vote was Apr 19 2024 [2].

- Seems the cloud instance was set up the same time GPT-4 was released (March 2023), also when Snowflake set up a Telco business unit [3] ("Location data... Alternate credit scoring, hyper-targeted marketing and more... an emerging trend of companies building partnerships with telecoms to power use cases across multiple industries"). This product is not aimed at the telcos' use-cases, but at new revenue streams. (Who might the unnamed Snowflake AI partner(s) be?)

- They set up the Snowflake instance with AT&T/MVNO customers with timestamps removed, but with location data, yet the phone numbers not obscured or removed. Doesn't sound like "internal analytics" or "competitor analysis". What sorts of end-users want to pay for the entire social-graph of 110m, regardless whether those customers never make a phone call again? [EDIT: I confused the details of this AT&T breach with the other (2019) one disclosed on 3/2024: 77m AT&T/MVNO customers, 90% of them former customers]

[0]: "FBI Guidance to Victims of Cyber Incidents on SEC Reporting Requirements: FBI Policy Notice Summary" https://www.fbi.gov/investigate/cyber/fbi-guidance-to-victim...

[1]: "US Corporate Alert - DOJ, FBI, and SEC provide guidance for delay requests relating to disclosure of cybersecurity incidents under form 8-K" https://www.klgates.com/DOJ-FBI-and-SEC-Provide-Guidance-for...

[2]: US House approves FISA renewal – warrantless surveillance and all https://news.ycombinator.com/item?id=40041784

[3]: Snowflake cloud Telco unit, 4/2023: "Unlocking the Value of Telecom Data: Why It’s Time to Act" https://www.snowflake.com/blog/telecom-data-partnerships/


Dats cuz swifties don’t like Ticketmaster boo Ticketmaster (& hov)


hold AT&T responsible. their officers. prison time. or this kind of carelessness with millions of people's lives will keep on happening if officers get million dollar paychecks they must also risk criminal penalties to balance out


Holy shit. If true …. Wow. This can be used for all sorts of evil.


damn


And this is yet another reason why I use signal


I hope you didn't sign-up for Signal with an AT&T-tied phone number. Else this breach would've probably exposed your PII either way.


I did not, and even then, none of my call logs or texts via signal would have been included, regardless of carrier.


Do you exclusively use signal? Do your friends also use signal? Do you have friends who only use signal to communicate with you?


I am working on this with mine, but even Signal is too weaksauce in my book. Ownerless (and ideally decentralized) p2p chat is what I am after. If everyone in my group used Android then it'd be Briar or Cwtch hands down for primary text/picture messaging, with SimpleX or Session or Jami as voice/video call and backup. But there's an iPhone in the group upsetting everything, which scratches Briar and Cwtch, so it's SimpleX reinforced with Orbot on my group's menu currently, and it seems to work reliably. Session has terrible notification delays when in the background, and its Android app uses the [IMO] boneheaded send-on-select abstraction within the selection gallery when attaching an image (oh, and your unsent typed text is wiped). Very unprofessional; the interface needs a bottom-up redesign. Really has that everyone-quit feel to it.


Do you make it like a fun game? Like when me and my friends in school would pass eachother coded notes and the cipher was an inside joke?

I'm genuinely curious: what was the pitch that you used to get others to start using signal?


Never Signal, because Signal is bad about requiring too much metadata (your number). It was Session for a while, but since SimpleX can be hardened with Orbot (or Tor on PC) and was way more reliable with notifications, we switched. I would much prefer Briar or even Cwtch, but an iPhone in the group ruins that party.

Otherwise to answer your question it is a bit of a game. I also like to remind them how, being creeped out by Aunt Matilda putting microphones and keyloggers all over, at least Aunt Matilda [most likely] has better interests for you at heart. GOOG/AAPL/MSFT have no such kinship connection yet they are surveilling in precisely the same ways. That was a decade ago, now add in the Universal Function Approximators! *Demo stable-diffusion.* *Demo lm-studio.* *Present to them a performance of Orwell's 1984.* *Show them a few documentaries on social control.* "See? Now would you like to try it?"


Unironically yes. I'm in a bunch of different group chats with little overlap in signal. There was a huge push amongst my friend group to get people on it back in like 2015. I have some family not on it but we just talk in person.

Not everyone switched, but a surprising amount did, and only more have switched over time.


Yes.


Aside from a couple non-US friends, I know no one in the US who uses anything other than straight SMS (and Apple iMessage). I'm sure they exist but certainly not in the circle of people I communicate with.


Everyone I know uses signal. Different people really are different.


For whatever reason, chat seems to definitely encourage tribalism. The last company I worked for eventually bought into Slack because so many people WOULD NOT use anything else while a lot of us were like "ANOTHER chat app??" because we were perfectly happy with Gchat which we had as part of Google Workplace.

I know there are some historical reasons for non-SMS because of text pricing outside the US but everyone I know in the US would look at you funny if you wanted to use some special app for texting.


There's definitely different circles in the US. My circle of friends and family is on Whatsapp. More than 99% of my communications would be through WhatsApp.


Everyone I know in the US uses either iMessage or Whatsapp. No one I know uses MMS.


iMessage is very much a US thing. Most of the Non US people or people with international connection exclusively use messaging App ( whatsapp, Telegram, Signal)


Do you make it like a fun game? Like when me and my friends in school would pass eachother coded notes and the cipher was an inside joke?

I'm genuinely curious: what was the pitch that you used to get others to start using signal?


Not all my friends switched, I had one good friend who decided not to because she already had a bunch of apps and didn't just want to talk to me on yet another app.

It's much easier when it's a group. I got some of my family to get on it too and they pretty much exclusively use it to talk to me.

In the mid-2010s it wasn't that hard a call: the various Google apps kept getting deprecated (we were all on Hangouts before), iPhone users wanted something RCS-like and couldn't get it with Android users over MMS, and in general the app scene was taking off with Snapchat, WeChat, etc., so people were easier to convince to download it.

My pitch was 'you know how randomly Facebook or YouTube will serve you ads about something you were talking about, even though you didn't search for it? You're much less likely to have that happen with Signal.'

Then if they pressed I'd share a link from the net neutrality fight days about DNS hijacking etc., and have them remember when all their failed URLs would go to an ISP-run search page.

I definitely used some FUD but it worked.

Actually I think some of the FUD was 'what if the carrier gets hacked?'.... Which, for all carriers and all systems, is just a matter of time. As t → ∞, the probability of a breach converges to 1.
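That last claim in numbers, assuming independent years (the 5% annual breach probability is an arbitrary illustration):

```python
# With an independent annual breach probability p, the chance of at
# least one breach over t years is 1 - (1 - p)**t, which tends to 1.
p = 0.05
for t in (1, 10, 50):
    print(t, round(1 - (1 - p) ** t, 3))
```

Even at 5% a year, over 50 years a breach is more likely than not by a wide margin.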

Also if any of your friends do drugs, of any sort, that was a great motivator for them to switch lol. Weed has only been legal recreationally in any state since 2013.

Oh, and pretty much every techie friend I had went 'yo that's awesome' and changed over, even if they don't have a tech job.

Finally, back in the day/for many years, signal could default to normal MMS messaging, so the pitch was 'if they don't have signal, you can just text like normal'


do you have friends in plural?


I've gotten everyone from my in laws to my co workers on signal.

>I can share baby pictures without them being stored in google forever.

>We can organize who's bringing the coke without leaving a paper trail that lasts forever.


I DO

I HAVE 3

3 IS MORE THAN 1


"It remains unclear why so many major corporations persist in the belief that it is somehow acceptable to store so much sensitive customer data with so few security protections."

It's because there are almost no consequences to them if they lose the customer data, beyond a day or two of bad press. If they faced significant fines, fines that get worse the more sensitive the data is, then they'd have an incentive to do better.


No consequences, the cost can be great, and it can negatively impact productivity by introducing hurdles to legitimate uses. Those are immense pressures a soulless company will need to overcome to do the right thing.


Ongoing fallout from the Snowflake compromise; AT&T knew on Apr 19 but only disclosed now (Why does this not fall under SOX violation with the obligation to report timely to affected parties? It has affected AT&T's stock price -3% in early trading, so shouldn't it have also required SEC disclosure?)

- Records downloaded from Snowflake cloud platform

- AT&T will notify 110 million AT&T customers

- Compromised data includes customer phone numbers, metadata (but not actual content or timestamp of calls and messages), and location-related data. Not SSNs or DOBs. Mostly during a six-month period 5/1-10/31/2022, but more recent records from 1/2/2023 for a smaller but unspecified number of customers. TechCrunch report has more details including Mandiant's response, and the name and suspected location of the cybercriminal group.

I wonder if Congress manages to summon TikTok-like levels of anger on regulating this one.


> Snowflake blamed the data thefts on its customers for not using multi-factor authentication to secure their Snowflake accounts, a security feature that the cloud data giant did not enforce or require its customers to use.

So AT&T put all our call information somewhere and hid it probably behind a weak password with no additional factors. IMO that's actionable negligence and I hope they get sued to oblivion.


I'm more stunned that AT&T knew back on Apr 19 [UPDATE: Mar 20] yet feels it had neither a SOX violation nor an SEC obligation (share price effect) to notify timely. Like, by Apr 22. Not three months later [UPDATE: 4 months later].

Remember the massive Yahoo 2014 hack which Yahoo management failed to notify its own users for 2 years?

If SOX violation only literally covers users' own passwords getting breached, but not 2FA or other passwords to access the same data, will Congress amend it urgently?

EDIT: apparently they're hiding behind the 3/20 disclosure [0] which is all they disclosed until [1],[2] today.

[0]: March 30, 2024 - "AT&T Addresses Recent Data Set Released on the Dark Web" https://about.att.com/story/2024/addressing-data-set-release...

> "AT&T has determined that AT&T data-specific fields were contained in a data set released on the dark web; source is still being assessed...

> "AT&T has launched a robust investigation supported by internal and external cybersecurity experts. Based on our preliminary analysis, the data set appears to be from 2019 or earlier [incorrect], impacting... approx 7.6m current and 65.4m former AT&T account holders"*

> "Currently, AT&T does not have evidence of unauthorized access to its systems resulting in exfiltration of the data set.... As of today, this incident has not had a material impact on AT&T’s operations."* [but did it have a material impact on the customers/ex-customers?!]

[1]: Jul 12, 2024 - "AT&T Addresses Recent Incidents Regarding Access to Data" https://about.att.com/pages/data-incident.html

[2]: Jul 12, 2024 - "AT&T Addresses Illegal Download of Customer Data" https://about.att.com/story/2024/addressing-illegal-download...

> "Based on our investigation, the compromised data includes files containing AT&T records of calls and texts of nearly all of customers of [AT&T’s cellular and (MVNOs) using AT&T’s wireless network], as well as AT&T’s landline customers who interacted with those cellular numbers between May 1, 2022 - October 31, 2022. The compromised data also includes records from January 2, 2023, for a very small number of customers. The records identify the telephone numbers an AT&T or MVNO cellular number interacted with during these periods. For a subset of records, one or more cell site identification number(s) associated with the interactions are also included."


Subsequent reporting reveals that the DOJ ordered two ~month-long "delay periods" in disclosure:

> The Justice Department determined on May 9 and again on June 5 that a delay in providing public disclosure was warranted, so the company is now timely filing the report.

> The company [AT&T] is working with law enforcement and believes at least one person has been apprehended, according to the filing. It does not expect the event to have a material impact on its financials.

MarketWatch: [https://www.marketwatch.com/story/at-ts-stock-slides-2-9-aft...]


According to CNN:

“The company said the US Department of Justice determined in May and in June that a delay in public disclosure was warranted. It’s not clear why the US government requested that disclosure be delayed. CNN has reached out to the Justice Department for comment.”


May 16 Dow Jones Industrial Average surpasses 40,000 points for the first time, before closing at 39,869.

Public disclosure of a cataclysmic security breach in a darling of the stock market could have significant repercussions.


It definitely included SSNs for some of them.

Source: me. My data was included in the leak and it included my SSN. It’s been a cluster fuck of a cleanup.


My SIN number has been leaked no less than 4 times tied to basically every standard identifying question about me now, if that helps ease your worry.

I guess the new methodology is that a company cannot be sued if they just all leak data, that way nobody knows which one is responsible for your identity theft.


@dang Could I ask why this topic gets systematically penalized in the HN ranking? There have been 15 submissions so far, I assume partly because previous submissions are not shown on the main page so HN users keep re-submitting it. This topic is both newsworthy and high interest.

(I was going to link to the 14 other submissions but the list is too long and it'd just come across as obnoxious.)


The threads have probably tripped the flamewar detector. A certain number of comments plus some other metrics will hide a thread from the front page.


At the moment this is #1 on the frontpage.


The new HN voting mechanism is broken imo. Useless posts and articles of low value make it to the frontpage but valuable ones get shadowed.


There's a new voting mechanism?


And where do we go to find out about these things? Is there a discussion space or something?


Nah


> In a statement, AT&T said that the stolen data contains phone numbers of both cellular and landline customers, as well as AT&T records of calls and text messages — such as who contacted who by phone or text — during a six-month period between May 1, 2022 and October 31, 2022.

AT&T customer? Prepare for phone calls / text messages from your most frequent contacts saying "I got stranded / I'm Officer Blahblahman helping your friend get home... please send gift card / venmo"

It's only metadata...


I just realized this is going to fvck my call blocking strategy up: now creditors will have a bank of known good numbers to spoof into my whitelist with! :^O


I guess everyone is going to learn what Snowden was worried about the hard way now. I imagine there's going to be extortion attempts over calls to abortion clinics etc.


Among other things. The data's mostly from May-Oct 2022.


This is another consequence of the surveillance state. The same data that can be used to surveil us by the government can be stolen by who-knows-who. We’d all (mostly) be far better off, IMO, if companies didn’t retain such records.


My wet dream would be a dump of all SMS or Meta or iMessage messages for a multiyear period for nearly 90% of users. Only when Normie Norman's private chats to his mistress and other little relationship trust disrupting secrets become uncensorably hosted on the darknet and freely searchable, only then will Normie Norman get a clue and install SimpleX/Briar/Cwtch/any other owner-free decentralized p2p chat.


Not unrealistic. I used to have a tail of all SMS texts running 24/7 and was required to grep for specific terms for certain agencies until they eventually had their own access. This was SS7-based texts only, and was long before RCS existed. I could have saved it all to my workstation but knew better than to do that. Either way, SS7 and text messages are very insecure.
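The mechanics of that kind of monitoring are mundane. A minimal Python sketch of a follow-and-filter loop in the spirit of `tail -F | grep` (the path and keywords here are invented placeholders, not anything from an actual deployment):

```python
# Hypothetical sketch: follow a growing log file and surface only lines that
# match a set of keywords. Path and terms are made up for illustration.
import re
import time

TERMS = re.compile(r"term_a|term_b")  # placeholder agency keywords

def follow(path):
    """Yield lines appended to path, like `tail -F` (simplified: no rotation handling)."""
    with open(path) as f:
        f.seek(0, 2)  # start at end of file
        while True:
            line = f.readline()
            if not line:
                time.sleep(0.5)  # wait for new data
                continue
            yield line

def matches(lines):
    """Filter an iterable of lines down to those containing a watched term."""
    return (line for line in lines if TERMS.search(line))
```

In practice `matches(follow("/var/log/sms.log"))` would stream hits indefinitely; the filter is separated out so it can be tested on a plain list.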


While I share the sentiment, Normie Norman is not at fault. Meta and other BigCorps are the perpetrators and Norman the Victim.


I have to disagree. He is at fault. Ultimately, you are the only person who really should care about your own security. When you delegate that responsibility, you are still the one who made that choice.


I don’t think it’s fair to blame people for not understanding the subtleties of encrypted communication.

Everyone only has so much attention to give.


Having a mobile phone is necessary for securing employment, shelter and sustenance in many cases, yet somehow it's an individual's fault for choosing to have a phone account when a pair of multibillion-dollar companies breach that data through lax security practices?


True, but you have to admit once you really see Normie Norman you come to understand aristocracy.

At least I do anyway.


https://dwm.suckless.org/

> Because dwm is customized through editing its source code, it's pointless to make binary packages of it. This keeps its userbase small and elitist.


Not in the way of a narcissist trying to separate himself from the group, but to see that Norman is very much susceptible to cow-like behaviors you can leverage. That's what I mean by understanding aristocracy. Aristocrat : Rancher.


Yes, but have you ever asked a dev if they actually need the 8-year-old logs in some bucket?


Criminal charges need to be filed, along with a class action lawsuit for fraudulent services on behalf of all the customers duped into renewing monthly services, ignorant of the fact that the service is not secure, as federal law plainly requires it to be.


It's ok everyone! Protecting our data is one of AT&T's top priorities.

> Protecting your data is one of our top priorities. We have confirmed the affected access point has been secured.

> We hold ourselves to a high standard and commit to delivering the experience that you deserve. We constantly evaluate and enhance our security to address changing cybersecurity threats and work to create a secure environment for you. We invest in our network’s security using a broad array of resources including people, capital, and innovative technology advancements.

I hope there's an enormous fine for this kind of negligence


The “fine” will consist of a class action lawsuit that will eventually (3-4 years later) be bargained down to 1/2 the original claim. Lawyers take their 25% (or whatever cut was negotiated) fee. Then the impacted customers (assuming they submitted all of the claim paperwork) get paid out a few dollars.


Not their fault. Snowflake was breached. And the data was with Snowflake.


Your contractor being breached means you were breached.


Snowflake wasn't breached. A Snowflake database belonging to AT&T was breached.


You are right apparently.

> hundreds of Snowflake customer credentials ... of staffers who have access to their employer’s Snowflake environment ... credentials available online linked to Snowflake environments suggests an ongoing risk to customers who have not yet changed their passwords or enabled MFA.


Snowflake was "breached" by AT&T users using the same password in Snowflake and another system that was breached.

This is just trivial pivoting with some guesswork, done fairly well.
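The standard mitigation for exactly this kind of reuse-based pivoting (besides MFA) is screening passwords against known breach corpora. A hedged sketch of the k-anonymity scheme used by the Pwned Passwords range API: only the first five hex characters of the SHA-1 hash are ever sent to the server, which replies with all matching suffixes, so the full hash never leaves the client. The `range_response` string below is a stand-in for a real API response:

```python
# Defensive illustration: k-anonymity breached-password check, in the style of
# the Pwned Passwords "range" API. Not anything AT&T or Snowflake actually ran.
import hashlib

def sha1_prefix_suffix(password: str) -> tuple[str, str]:
    """Split the uppercase SHA-1 hex digest into a 5-char prefix and 35-char suffix."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def is_breached(password: str, range_response: str) -> bool:
    """range_response: newline-separated 'SUFFIX:COUNT' lines returned for our prefix."""
    _, suffix = sha1_prefix_suffix(password)
    return any(line.split(":")[0] == suffix for line in range_response.splitlines())
```

Rejecting any password that appears in such a corpus at signup (and requiring MFA) would have blunted this whole attack chain.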


That’s not how any shared responsibility model works


There may be no "good" telcos or big tech firms, but some are absolutely worse than others. AT&T is actively hostile in a way others aren't.


[flagged]


Public? Not within our lifetimes. Available to hostile governments? More likely.


A friendly government can turn hostile overnight.


[flagged]


Good advice whenever possible. I wish any of the companies I did business with would use Signal but they all use SMS only (like my doctors, my plumbers, the electric company, restaurants, etc etc)


[flagged]


If the news starts with "criminals stole..." and not with "poor security practices at XYZ resulted in..." then there's really no hope.


User: admin Password: password


Isn't this just a legally mandated API for all phone operators in the US?

Edward Snowden published several slide decks about it a few years ago, before he defected to Russia.


It doesn't appear to be, though it was speculated that it might be. Companies keep all that data in the hope of making money by mining it.


[flagged]


Why are you booing him? He's right!

> to forsake one cause, party, or nation for another often because of a change in ideology

I don't think he left because of a change in ideology.


He was not heading to Russia; he's just trapped there.


Of course. He accidentally tripped, fell, and landed in Sheremetyevo International Airport with a nice cushy job in the Russian government, with Russian citizenship and a nice estate worth tens of millions of dollars, and clearly just accidentally misspoke when he swore allegiance to Russia. All the NSA secrets he took with him were irrelevant to that story, typical of any asylum seeker arriving anywhere.

lol, oops, I forgot how triggered some people get for calling it defecting.


What do you think would have happened to him if he had stayed here?

The last whistleblower the US government got to was imprisoned for seven years and identifies as a woman now.

Snowden would be crazy to come anywhere near the US.


The parent wasn't arguing he should come back but saying that "defected" is not the correct word. The correct phrase is probably "took asylum."


I'm not sure why gender identity is relevant. I agree that the punishments he would have gotten for whistleblowing justify him not staying in the US.


He didn't disagree that Snowden needed to leave; he said it wasn't a defection.

Snowden hasn't defected from the US any more than the Dalai Lama has defected from Tibet.

I guess "went into exile" would be the proper naming.



