I wish people would stop calling this a hack. It's not a hack, it was cyberwarfare. I know someone who was working on the team trying to recover from this attack, and Sony Pictures is basically fucked. Their IT infrastructure has been utterly destroyed, meaning they can't even pay their employees, pay their vendors, or take orders from customers. They don't even want to use computers anymore; people call and text now to avoid any sort of central infrastructure that can be hacked. They had to switch to all-manual processes, and it will take months, more likely years, before their infrastructure is back to some semblance of normal.
But by then Sony Pictures as an entity may no longer exist. From all of the emails being released ridiculing their own talent, to their employees having their privacy destroyed and financial accounts hacked, who could work for this company again?
The thing this attack does is raise the bar on what to expect. The worst we had heard of until now was credit cards being stolen for quick gains, maybe some business secrets. But in my mind it started with LulzSec a couple of years ago, where the attitude was one of anarchy and cyberwar. If the new trend is for companies to get destroyed, then cybersecurity will go to the next level, where every company has to assume that if it gets hacked, it will be destroyed, making security arguably more critical than any other business process.
No, thanks. Hackers themselves, of all people, should know the right meaning of the word. Going with the flow? What was that about? This site has Hacker in the name, if you didn't notice ;)
But is it still the "right" meaning of a word if nearly everybody uses it differently?
Language doesn't work by giving some people the authority to define what a word means, against the super-majority of users. Language changes through usage.
Yes, I'm all for using "hacker" the old-school way, but it really doesn't help if the people I talk to misunderstand me.
Yeah, from talking to a couple of friends who work at or with Sony Pictures, morale is insanely low. Their personal information is all over the place and the worst of their company's dirty laundry has been exposed.
Having said that, SPE is too big a company to go away. They'll get through it, but badly damaged and for years to come.
That to me is what makes it more than a normal hack. It's not just some data released, it's a body blow to the company itself.
> to their employees having their privacy destroyed
I'm always shocked at how concerned Americans are about the big bad wolf that is "identity theft". Is it really as bad as everyone thinks?
In all honesty, when your info gets leaked, don't you just cancel every credit card and bank account, get a new SSN, change your driver's license number, change your phone number and a few other numbers, and move on with your life?
Is it honestly much different than losing your wallet full of cards? (which must happen to tens of thousands of people in the developed world daily)
Does anyone have direct experience on what "identity theft" is actually like?
(Honest question, I'm not American and I don't understand why it's such a big deal)
It is virtually impossible to get a new SSN. Even if you can prove your identity was stolen and your SSN is being used frequently, you still can't get a new one.
I'm a victim of identity theft, and it really sucks. It completely fucks up your credit, and the credit agencies really don't care about trying to correct it. It took me several years (admittedly a few extra years because I gave up until I bought a house) to get it corrected. And even now, the "identity verification" services that the three credit bureaus offer are fucking me, because they contain old data from my identity thief, so I can't get some credit even though my credit is outstanding.
So, no, it is not as simple as cancelling your credit cards.
Thanks for the post - I've never before gotten information from someone it actually happened to.
So, did you actually have to pay lots of money for the crap the thief bought on your credit? Or did you just wind up with a crappy credit rating but no debt?
And are you saying the extent of it is you wind up with bad credit for a really long time and that is what makes it so bad/scary/life altering?
In all honesty, why are people's lives tied so heavily to some credit score?
> In all honesty, when your info gets leaked, don't you just cancel every credit card and bank account, get a new SSN, change your driver's license number, change your phone number and a few other numbers, and move on with your life?
That is an insane pain in the ass. In this day and age, just changing a credit card number is a pain; changing your phone number is a whole new level of pain in the ass. Not only that, but it's hard to get a new SSN; I've known plenty of people who requested a new one after their SSN leaked, and unless they had proof of identity theft, they were simply declined.
Driver's license numbers you typically don't need to change, however. Almost all driver's license numbers are generated simply from your name (granted, this depends on the state), so most of the time you can generate someone's number, accurately, just by knowing their name and a few other details about them. They're not really used for much.
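For the curious: several states' license-number schemes are reportedly built on Soundex-style name encodings. Here's a sketch of the generic American Soundex algorithm in Python; the exact per-state formulas vary, so treat this as an illustration of the idea, not any state's actual scheme.

    # Generic American Soundex: a name-to-code encoding of the kind
    # some state license-number schemes are reportedly derived from.
    # This is the textbook algorithm, not any state's actual formula.
    def soundex(name: str) -> str:
        codes = {c: str(d)
                 for d, letters in enumerate(
                     ["BFPV", "CGJKQSXZ", "DT", "L", "MN", "R"], start=1)
                 for c in letters}
        name = "".join(c for c in name.upper() if c.isalpha())
        if not name:
            return ""
        result = name[0]
        prev = codes.get(name[0])  # first letter's code suppresses a duplicate
        for c in name[1:]:
            if c in "HW":
                continue           # H and W don't separate duplicate codes
            code = codes.get(c)
            if code is None:
                prev = None        # vowels reset the duplicate check
                continue
            if code != prev:
                result += code
                prev = code
            if len(result) == 4:
                break
        return (result + "000")[:4]  # pad/truncate to letter + 3 digits

    print(soundex("Robert"))    # R163
    print(soundex("Ashcraft"))  # A261

The point being: anything deterministically derived from your name is trivially reproducible, so it's more of a label than a secret.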
Identity theft can have far-reaching consequences and calling it a "pain in the ass," even an "insane pain in the ass," is barely scratching the surface of the experience.
Give me a break. LifeLock is in the business of selling you a product to protect against "identity theft".
By definition, they are in the business of scaring you into buying their product. They are fear mongers. Their advice is not worth $0.02.
> Identity theft can have far-reaching consequences
The one person that has replied so far that it has happened to said it resulted in bad credit for a number of years. That's it.
What other "far-reaching consequences" are you talking about?
I agree that the LifeLock link was not impressive, but in the second link (search for "My third topic") there are significant examples of identity theft with far-reaching consequences, such as:
| "Victims are often scarred emotionally. They feel violated and helpless -- and very angry. I've heard people use the word "rape" to describe how they feel. "
| "A search of his SSN showed he was wanted for a crime in the Bay Area. He was transported from San Diego to San Francisco and put in jail. It took him 10 days before one of the officers believed him, took his fingerprints as he had requested all along, and realized they had the wrong person."
| "when the imposter is working under the victim's name and SSN, and the earnings show on the victim's Social Security Administration record. We learned of one such a case that had been going on for 10 years. The imposter obtained the victim's birth certificate, a public record in California. And even when the victim acquired a new SSN, the impersonator was able to obtain it shortly thereafter. Victims of employment fraud often must deal with the Internal Revenue Service because IRS records show they are under-reporting their wages."
No horse in the identity theft question, hasn't happened to me, but I did have seriously bad credit for a short while due to a cockup by my bank, exacerbated by difficulty passing identity checks due to moving house a lot. Some of the potential consequences:
* I almost ended up in a situation where I couldn't get paid or pay bills, because no bank in the country would let me open an account
* I was at risk of becoming homeless because my lease was expiring and you can't rent a flat without passing a credit check
* Even signing up for a mobile phone or home internet requires passing a credit check
So: No job, internet, or house. Pretty far-reaching!
Eh, I immigrated to the US, then later to Canada, and I experienced exactly the same thing because having no credit rating is the same as having the worst possible credit rating.
I got over it, and figured out how to move on with life.
How did you get over that? I was only able to get past those obstacles by fixing my credit history, which fortunately was relatively easy as it was a mistake by my bank.
I spent years on a "bonded credit card" with a $1k limit (i.e. they held $1k of mine in case I ever defaulted on the card), kept a healthy amount in my savings account, and always paid off my card on time, until they slowly started to trust me.
I also had to put down deposits to get phone, apartment, electricity, etc. etc.
You can't get a new SSN, as far as I know. What you can do is get an SSN PIN, which will authenticate you in the event your SSN has been leaked.
The damage that is done to your credit may not be undone so easily unless you hire a lawyer who knows what he's doing. Unfortunately, a lot of people are financially ruined for many years by these kinds of attacks. Imagine not being able to buy a house, a car, or even rent an apartment because of this.
How will damage have been done to your credit? If they buy tons of stuff before you manage to cancel all your cards, or apply for tons of loans in your name before you manage to shut everything down?
Isn't that the same as if my credit card is stolen? These things are insured and all the charges get reversed and I move on with my life.... obviously a pain in the butt, though I don't think life altering.
No, you pretty much don't know what you're talking about.
When people's identities get stolen, they typically don't find out for years, as was the case with me. So someone opens a line of credit under your name, spends it all, and it sits in collections for years; then, when it comes time for you to apply for a mortgage because you're expecting a child, you can't, because all of a sudden you have terrible credit. Or you get denied jobs: you don't know about the debt, a prospective employer checks your credit, and they reject you because you appear to have tens or hundreds of thousands of dollars in unpaid debt.
So the problem doesn't sound so much like the actual theft; it's more that you might not find out until years after the fact, when your credit is destroyed.
Also, I realize that bad credit is a pain in the butt, but let's be honest, there are many, many worse things that could happen to you or your loved ones.
> then cybersecurity will go to the next level, where every company has to assume that if it gets hacked, it will be destroyed, making security arguably more critical than any other business process.
So then companies will finally start taking security seriously, as something that needs to be done right from day one, and not as a "hack" they add after they've already reached a million subscribers?
If only that would happen, because clearly the Sony executive in charge of security thought the opposite, saying it "didn't make sense" to invest too much in securing the company.
Maybe, just maybe, post-Snowden and post-Sony hack, people and companies will start taking online security seriously, while at the same time the main stakeholders of the Internet make a solid effort to push everyone to more modern security systems that aren't based on stuff invented 20-30 years ago that is now unnecessarily complex and easy to hack.
>Maybe, just maybe, post-Snowden and post-Sony hack, people and companies will start taking online security seriously, while at the same time the main stakeholders of the Internet make a solid effort to push everyone to more modern security systems that aren't based on stuff invented 20-30 years ago that is now unnecessarily complex and easy to hack.
It was only a matter of time. To be honest, I can easily understand why a 60-year-old exec wouldn't take IT security seriously. Even though they use it every day, the recent "hacks" have only been a minor inconvenience to the companies targeted. Only banks and "younger" tech companies seem to take security seriously, because they have money to lose. Target had $0 taken from them when they got hacked. However, now that it's clear that almost every facet of the company flows through technology, people will be more serious about security. Now that execs can't get paid and they lose money, it's an issue.
The SPE hack shows that if security isn't taken seriously, hackers will literally bomb you back to the Stone Age. I'm sure any competent security engineer is now seeing their paycheck rise and their opinions taken more seriously. I feel bad for those that worked at Sony (more so the employees; it's not impossible that a year from now their credit will be destroyed because some skiddie decided to use their SSNs to buy a Hummer after the fact).
Even basic stuff: there doesn't even seem to be a post-hack disaster plan here either. Who knows how long it will take SPE to recover.
The difference between a hack and cyber warfare is this: cyber warfare results in death or substantial loss of property. Here's a link with a good presentation about the differences to clear up any confusion:
Just because the damage to the company was huge doesn't mean it was anything other than an ordinary hack of a less-than-secure computer network. I can totally imagine hackers deleting data/disabling systems/leaking emails just for the lulz!
> But by then Sony Pictures as an entity may no longer exist.
Is that a bad thing? I'm asking honestly.
Why would you think that Sony Pictures ceasing to exist is a bad thing?
Many other companies that made bad decisions have ceased to exist before, thus freeing their employees to work at places that don't make as many bad decisions.
Because in this case the company isn't failing because it made some critical error, but because a group of criminals essentially broke in and vandalized the place? Is it a bad thing when a company that's shot up and robbed by thieves disbands? Clearly they should've had better protections in place.
I don't really like Sony, but the employees don't deserve to be out of a job just because their employer got breached.
Did it occur to you, for a moment, that scotty79 was truly asking an honest question, hadn't thought of it from the angle of the effect on employees, and was probably agreeing with you right up until you called him an asshole?
> oh shit, a lot of people are truly fucked right now if it gets any worse than this
Seems weird that in America people are so opposed to socialism (which helps individuals when a badly managed company screws up and dies, ensuring that nobody is ever "truly fucked"), but a government that gives money to corrupt managers to keep the corporation alive is fine (because "think of the individuals!")...
If you want corporations to be run on a survival-of-the-fittest basis (capitalism wants that, right?), surely letting the company die and having the government step in to avoid undue suffering for employees is the right thing to do?
(Though in this specific case, one could argue that the punishment of company death is disproportionate given the crime of allowing one security breach; or would capitalism say that that's an acceptable and necessary consequence to set an example for other companies in the same position?)
Dude, seriously? I think you've lost the plot… Sony is a company that does a million things, and nothing I'm aware of deserves the term 'atrocity', unless maybe you count some of their movie plots.
What company do you work for so that we can thoroughly vet that you are not a part of any global conspiracies? If we do find something that could be a problem, you will be tried for crimes against humanity as a lowly employee trying to make ends meet.
Everything you responded to me with, btw, has nothing to do with Sony. It's your strawman; burn it.
Sure, that sucks for the people involved, but isn't this how Capitalism is supposed to work? Bad companies fail and go bankrupt, and new better adapted ones rise up and take their place.
Security is the ultimate technical debt. Most companies get super deep into this kind of debt, it's just not visible until they "get unlucky". I think it would be a good thing if companies, the industry, and everyday people were more aware of this concept (even if not with this terminology).
I would guess that almost all companies have significant security deficiencies, which at least some of their engineers are aware of. It's just never a priority to work on unless they've just been hacked. Their priority is always product and marketing, and they have to compete with all the other companies not spending real engineering effort on security, after all. They're all just gambling on not being hacked too badly.
I'm doing some conjecturing here, but could hacking in the 21st century and cyber warfare be the new virtual/online 'rioting'?
For example, Sony aids the US Army through its affiliation with Rockwell Collins and Future Combat Systems, so it would make sense if 'socialists', leftists, progressives, and anarchists sought to destroy the conglomerates where they make a large chunk of their profit, cutting off funding to the US Army, since the majority government's ideology does not align with theirs. Whether these 'minorities' are citizens of the targeted country or not, this group of intelligent people seems to be growing increasingly disenchanted with how things are playing out in the world. We see it among hackers in North Korea and the Middle East. Increasingly, Japan's racist past has been brought up in the media lately, yet Kazuo Hirai allowed a racist movie to be funded for the sake of comedy. (For those not in the know, Japan is going through a period of negationism, with its politics controlled by ultra-conservatives.)
Post-Edward Snowden, it seems like there is very little hackers, developers, etc. can do that is not political, or that does not at least aid or detract from some socioeconomic issue. It's sad, but it seems we are all somehow weakening the System or helping it. Sure, I'm not saying we should all worry about it. These are just the observations of an amateur.
Jason Spaltro, then executive director of information security at Sony Pictures, called it a "valid business decision to accept the risk of a security breach" in a 2007 interview with CIO Magazine, adding he would not invest "$10 million to avoid a possible $1 million loss."
So basically their thinking was that getting hacked was just the cost of doing business. Of course, they are now discovering that the cost of a really serious hack is much higher than they thought it was.
Which makes me wonder if, at some point, we're going to have to have some kind of controls on who can legally hold personally identifiable information on their systems and who cannot. Right now pretty much anybody can, regardless of whether they're competent enough to protect it. And as a result there's a huge volume of critical information out there stored on systems that are either poorly secured or whose admins have decided, like Sony's, that the ROI on real security is too poor to justify having any, which creates a target-rich environment for hackers to take advantage of.
Attaching serious liability to holding data on systems that aren't secured, or requiring proof of competency/minimum-acceptable-effort in order to avoid such, might shift the ROI calculation on security enough to convince even idiots that it's worthwhile; or, at least, that it's better to outsource holding the data to someone who knows what they're doing (and is willing to back that up by accepting liability) than it is to keep everything in-house on a dusty Windows NT4 box under someone's desk and just cross their fingers.
We already sort of do this for financial information, via PCI; but the universe of "data that could do serious damage if it got loose" is much larger than what PCI covers, as this hack demonstrates. So I wonder how many of these giant hacks people will be willing to accept before they start calling for some kind of protection.
Although he miscalculated the cost here, the general point is somewhat valid I think. There's always going to be a trade-off between security and convenience / productivity / revenue / values / etc. And it's a good thing to think about that trade-off, because not thinking about the trade-off justifies things like mandatory strip searches of everyone boarding a plane (because terrorism!).
The bigger issue (which I think you're getting at) is whether this trade-off accurately accounts for externalities. Even if a security vulnerability only costs Sony $1 million, it could cost their customers, credit card companies, and any number of third parties a lot more than that.
An IT exec at a Fortune 100 company once described it as, "With $25K, someone can get into our network. We just need to make that number higher than most of the other companies out there. We can never be 100% sure as long as real people are involved."
It's a valid point. I think it assumes that there's a time component involved too. The company did seem fairly on its game at the time, and not one to court controversy. But most hacks never get made public either.
One problem with that is that the hackers that you really need to worry about are ones who are coming after you: competitors trying to steal IP, business rivals investigating your deals and so on. These high profile attacks like the Sony attack are embarrassing, but the real business damage is probably done in other attacks that no one ever hears about.
The civil legal system is supposed to be the correct way for an injured party to recover losses from the party causing the injury. Sadly, this doesn't always work as effectively as it should. Probably because lawyers.
> The civil legal system is supposed to be the correct way for an injured party to recover losses from the party causing the injury. Sadly, this doesn't always work as effectively as it should. Probably because lawyers.
Even if you assume away any problems caused by ill intent or by the simple lack of perfect omniscience on the part of lawyers, judges, and jurors (all of whom can cause failures in the civil legal system), the impossibility of that system being a complete mechanism for recompense lies, among other places, in the fact that the liable party may have caused harms to the injured party that the liable party is not capable of compensating. The harms a person is capable of doing are not limited by their capacity to pay; someone with no assets can do an injury causing millions of dollars of property damage.
It's kind of unfair to target lawyers in cases like this. The real problem is a combination of a patchwork legal system and powerful people's willingness to exploit it for personal gain.
Lawyers are just the _means_ by which those people exploit the system. They're also the means by which people who are wronged can get justice, so they're exploitation agnostic.
Right. Which, in the case of a massive data breach, would tend to cause them to line up behind the potential plaintiffs, not the defendant. In reality, in cases like this, both sides will have top notch representation.
(The problem, of course, is that there are cases where the stakes are non-monetary, or the plaintiff places a much greater value on a relatively small amount of money than a lawyer would...but I don't think that's really what we're talking about here.)
Also because companies try to remove all liability of their own when you sign the contract with them. Didn't both Sony and Microsoft update their EULAs so that you can't sue them, like, at all? Sure, stuff like this may not hold up in court. But it might. Or at least it makes it extremely difficult for someone to sue them.
From the article linked above, it seems like people at Sony were exploiting the security budget; they had eight managers for three analysts: “documents leaked after the recent attack show the company had just 11 people assigned to its information security team: ‘Three information security analysts are overseen by three managers, three directors, one executive director and one senior-vice president.’”
I think the only thing you can take from that is that Jason Spaltro is an idiot. When you have a billion dollar company, there's no such thing as a million dollar loss. Even the Sony Exchange address book is probably worth millions to the right people.
> Spaltro offers a hypothetical example of a company that relies on legacy systems to store and manage credit card transactions for its customers.
The full quote:
> Although Spaltro declines to talk about Sony’s security practices, he says that while Sony Online Entertainment is fully compliant, every company weighs the cost of protecting personal data with the cost of what it would take to notify customers if a breach occurred. Spaltro offers a hypothetical example of a company that relies on legacy systems to store and manage credit card transactions for its customers. The cost to harden the legacy database against a possible intrusion could come to $10 million, he says. The cost to notify customers in case of a breach might be $1 million. With those figures, says Spaltro, “it’s a valid business decision to accept the risk” of a security breach. “I will not invest $10 million to avoid a possible $1 million loss,” he suggests.
Not an idiot: declining to spend $10M on insurance to offset a $1M risk.
Idiot: Believing that the only cost of total loss of control over personal data is the cost of notifying people you lost control over their data.
With a bit more thinking, he would have realized that 'notifying people' is practically free, since there are a number of interested parties who will do it for you (news media, credit card agencies, federal agencies).
But what it really sounds like from that quote is that he constructed a mental model of what data they did and didn't control that was not very representative of what data they actually had and controlled.
And he's wrong. The loss isn't just $1 million, and calling a certainty a risk is just baffling. If we assume the legacy system actually needs $10 million in enhancements to be adequately protected, I think it's safe to say it's a train wreck security-wise. For that and other reasons (Sony assumes it will get owned), it's safe to assume the system is going to get compromised. So it's not a risk anymore; it's a certainty.
So with that said, if the cost to report is $1 million, and the system is vulnerable, and they expect to get compromised, then it's not going to cost them just $1 million. It's going to cost them at least $11 million, or more.
Further, the "don't dix, just accept the risk" costs also won't just be 1 million, it's going to be 1 million every time it's compromised until they fix it. Since he's only accounting for the cost to report the breach, and the box is STILL owned and STILL vulnerable, it's going to keep getting compromised. So its 1 million times each event, which is a potentially infinite since nothing's changed to stop the next event.
To stop that infinite cycle, he has to pay to fix it, or keep paying to report. Either way he's paying more than $1 million. And since Sony, I'm sure, isn't that stupid, let's assume someone with common sense eventually orders him to stop accepting the risk and the bleeding; then his real cost is at least $1 million to report plus the cost to fix once that decision is made (and fixing will probably cost more than the original $10 million, since the system is now compromised too).
I'm not sure if he realizes this, or if it was just a clumsy attempt to explain risk, but at least in the statements as reported, that's a critical flaw in his reasoning. And that's why what he said isn't just unconvincing; it's wrong.
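To put numbers on the "keeps getting compromised" point, here's a toy expected-cost comparison in Python. The $10M and $1M figures come from the CIO Magazine quote; the annual breach probability is purely my assumption for illustration, and externalities (reputation, customer losses) are deliberately ignored:

    # Toy comparison: one-time hardening vs. repeatedly paying to
    # notify after breaches. The $10M and $1M figures are from the
    # CIO Magazine quote; BREACH_PROB is an assumed illustration
    # value, and externalities are ignored.

    HARDENING_COST = 10_000_000  # one-time cost to fix the legacy system
    NOTIFY_COST = 1_000_000      # cost to notify customers per breach
    BREACH_PROB = 0.5            # assumed chance of a breach per year

    def expected_notify_spend(years: int) -> float:
        """Expected cumulative notification cost if nothing is fixed."""
        return years * BREACH_PROB * NOTIFY_COST

    for years in (1, 5, 10, 20, 30):
        spend = expected_notify_spend(years)
        cheaper = "fixing" if spend > HARDENING_COST else "accepting the risk"
        print(f"{years:>2} years: expected spend ${spend:>12,.0f} -> {cheaper} is cheaper")

Even with these made-up numbers, "accept the risk" only wins over a short horizon; over the life of the system the fix pays for itself, and that's before counting any cost beyond notification.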
If that's how Sony makes risk management decisions I have a Pinto to sell them.
> Jason Spaltro, then executive director of information security at Sony Pictures, called it a "valid business decision to accept the risk of a security breach" in a 2007 interview with CIO Magazine, adding he would not invest "$10 million to avoid a possible $1 million loss."
I have seen people promoted to Information Security Officer who did not know the first thing about IT or programming, let alone hacking or things like APTs, server security or the top ten list of exploits.
I believe he was promoted because they wanted him away from his previous job, managing a software development department.
I think what people are hinting at, but not explicitly saying, is that the $10M vs. $1M tradeoff between hardening a database and notifying users is a very narrow perspective. It doesn't weigh the possible monetary losses to their users or the possible damage to their reputation.
And that's one of the many things that makes security hard: effectively weighing impacts against remediation costs (even measuring impacts is really hard). It appears Sony did it badly. But I agree with the statement that it doesn't make sense to spend $10m mitigating against a possible $1m loss.
But that's not the question. No-one's suggesting that Sony considered this specific scenario and arrived at loss impact of $1m, are they?
Hindsight is a wonderful thing.
An interesting question is this: You're the CISO of a major Hollywood studio. You have a budget of $xx million. Smart, well-resourced people are determined to destroy your company. What do you do?
I wonder if he was thinking in terms of a DVD getting leaked as the worst possible thing that could happen. Sure, there's a loss of sales and movie viewership, but in the eyes of a studio a pirated product is a drop in the bucket these days.
You could construe that he would be instructing his employees to act illegally if he were a UK CIO.
I don't think you could construe that at all. While everyone is making great hay of his statement, the truth is that every organization makes the compromises he mentions. At the risk of duplicating what andrewfong said, if security is supreme:
-there is zero worker mobility. Want to work at home, or bring the laptop on the flight? Security threat.
-there is zero internet connectivity. Want to browse at your desk? No chance.
-email attachments? Filesharing? Heck no.
-users have absolutely no rights on their workstation beyond the most rudimentary.
-application sets are extremely restrictive and vetted.
-Want to work in the conference room? Call the security center to configure the IPSec, MAC and port filtering security to allow you to move your locked down desktop.
And on and on. While quite a few people on here seem to present the notion that they comply with all best practices, extraordinarily few do (even the NSA doesn't). And those best practices carry costs in productivity, worker enjoyment, and direct expenditures.
It is never, and has never been, as easy as security versus no security. Against a dedicated, resourced foe, just about anyone and everyone is vulnerable.
Sorry, no. He cannot under UK law "accept the risk of a security breach" unless a UK Judge deemed it unreasonable to spend "$10 million to avoid a possible $1 million loss."
A judge may well agree that it is unreasonable but he cannot use the financial loss to the company as the only metric to make that decision.
>So basically their thinking was that getting hacked was just the cost of doing business. Of course, they are now discovering that the cost of a really serious hack is much higher than they thought it was.
The HN hive mind victim-blaming at its best.
If you bother to read the actual statement, it's specifically about one 'hypothetical' database getting breached.
"The cost to harden the legacy database against a possible intrusion could come to $10 million, he says. The cost to notify customers in case of a breach might be $1 million. With those figures, says Spaltro, “it’s a valid business decision to accept the risk” of a security breach. “I will not invest $10 million to avoid a possible $1 million loss,” he suggests."
And it's a fair statement. You can spend every spare dollar on IT security and still not be 100% secure, just poor, with a crippling system that grinds productivity to a halt. You can see this with the auditor living in gaga land asking that employees have passwords with "random letters, numbers and symbols".
It is about risk analysis, as he had the courage to say out loud, and it is what everyone in real-world IT security knows.
His statement was about a single hypothetical database server that would cost more to secure than it was worth.
Budgets are finite. That's a fact of life. The trick is balancing the risks and hardening the important things so that, on the balance, you have the most security possible for that budget.
Not spending as much money hardening the (hypothetical) legacy DB with noncritical data of some sort, so that you have more budget to harden the financials database: that is the right decision.
Of course, it doesn't look like that balance was struck effectively very often at Sony Pictures, but the theory is sound :)
I think that potential damage to a company's reputation and possible harm to customers are worth counting. If it's something you'd have to notify customers about... it must be somewhat sensitive data, not public information, right?
where he basically advocates cutting corners and doing the bare minimum. It shouldn't be the security executive's job to save a buck and brag about it, yet this is probably the precise mentality that allowed him to move up the ladder.
I don't know how it works in other countries, but in Spain you're liable for any leak of third parties' personal information you hold: national ID, driving license, name, phone number, address, etc.: https://es.wikipedia.org/wiki/Ley_Orgánica_de_Protección_de_... and it is actually enforced by the Spanish Agency of Data Protection.
Computer security in the business world consists of security groups in AD, a password policy based on 8 characters, and asking staff nicely not to share their passwords. It's a huge joke, and it will continue to be, because the technology isn't there and senior managers don't go to jail when these hacks happen and it comes out that they were incompetent and only thought they knew what they were doing because they were nominated for a CISSP.
I'm about to walk away from the corporate IT world because I'm done beating my head against the desk while people who literally cannot turn a computer on without a one-hour demonstration tell me they know better.
>Attaching serious liability to holding data on systems that aren't secured
That was the intent of the Massachusetts PII law, 201 CMR 17, which stipulates a $5,000 fine per incident. I've always heard this interpreted as "per user record exposed", though apparently that's in dispute. $5k per exposed user could add up, but so far I don't think the law has had much impact. (BTW, it applies to anyone holding MA residents' data, wherever the company is located or the data is stored.) California has a similar (earlier) law, I believe. Perhaps we need this to be adopted at the federal level.
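Just to show how fast "per record" scales, a trivial illustration (hypothetical breach sizes, and assuming the disputed per-exposed-record reading of the law):

    # How a per-record fine scales. The $5,000 figure is from
    # 201 CMR 17 under the (disputed) per-exposed-record reading;
    # the breach sizes are hypothetical.

    FINE_PER_RECORD = 5_000

    for records in (1_000, 50_000, 1_000_000):
        print(f"{records:>9,} records exposed -> ${records * FINE_PER_RECORD:,} in fines")

At even a modest breach size, the fine dwarfs typical security budgets, which is exactly the ROI-shifting effect the law was presumably after.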
I would love to see such regulation appear. The recent chip-and-PIN liability shift also states that the party liable is the one with the least security. I think that's a brilliant way to do it, too. Don't want to be liable? Then you'd better have the more modern security system.
I also like the idea of outsourcing the security/storing of data. Then we'd have companies that become real experts at that, and all the "idiots" would default to using those companies, instead of doing it themselves, poorly.
Why do you have a problem with the comment? Do you spend $100 on a lock to protect a $10 object? Of course it's just a business decision. The problem is that the cost turned out to be much, much greater than they thought.
>executive director of information security at Sony Pictures
>So basically their thinking was that getting hacked was just the cost of doing business
This isn't their thinking. The industry-wide belief is that the CIO's role is minimizing financial liability, NOT maximizing security.
Where professors are >50-year-old bumbling fools proudly proclaiming they know f'all about technology, and the whole thing revolves around staying compliant (on paper) with federal law (SOX etc.) and industry standards (PCI).
Those "people", they call themselves teachers/professionals in the field, will tell you that Information Security is ALL about covering your ass and shifting blame to a third party. All three courses are about getting a slip of paper that says you delegated responsibility clearing your company from any lawful obligations in case of a breach. Not a single peep about technical aspects or real security of data/infrastructure. Its all about getting that "we paid company X to do it for us" waiver.
There was a huge thread (now deleted) on the Coursera forum titled "Disappointment" covering this topic and demonstrating a total lack of professionalism or any real-world knowledge. The whole UW program is a government-funds-skimming scheme. Just look at this garbage:
"Center of Information Assurance and Cybersecurity at the University of Washington, designated by the NSA/DHS as a Center for Academic Excellence in Information Assurance Education and Research"
I highly encourage you to check out those courses (I think there is a preview available for old lectures) if you are in need of a laugh, or a reality check concerning the state of CIO education.
Do you really think airing employees' private info was legitimized by Sony's bad karma? For something most of the employees likely had nothing to do with?
It is relatively simple to take the argument about responsibility to Nuremberg at the level of fidelity an internet argument requires.
Let's say that after Nuremberg, we are not willing to completely absolve low-level players, because we realize after the Holocaust that widespread complicity is the only way such an atrocity can happen.
All the same, we aren't assigning the same level of responsibility to the prison guard that we do to Goebbels, etc. We will apply it proportionately.
If you take the Sony crimes and relate them to the Holocaust -- for nihilistic internet argument purposes -- you end up with some small fraction.
Following the Nuremberg methodology, you take that small fraction and apply it 100% to decisionmakers and proportionately to low-level players. The resulting blame is small enough relative to their breach of privacy that the cyber attackers are in the wrong.
Therefore, taking this argument back to Nuremberg does not serve the purpose of justifying the middle managers/analysts breaches of privacy.
No, at Nuremberg the war was over. The enemy had been defeated. It was time to move on. But people were being prosecuted (and still are) as a spectator sport.
Sony, on the other hand, is a fully functioning company that is far from defeated. Each employee needs to take full responsibility for the company they work for.
Sony is a pretty terrible company. That doesn't mean that their rank-and-file employees deserve what happened, but you don't have to search very far to come to the realization that the Sony conglomerate is one of the most anti-consumer organizations in the business world.
That's... wow. Especially the one about the preinstalled rootkit. My present batch of gizmos (laptop, smartphone, e-reader...) are all Sony, because each seemed the best I could buy at the moment of purchase. I will think twice before buying anything at all from them next time.
What should the liability for the hackers be? Do people still believe that copying some files is a victimless crime? Do people still believe that "unauthorized access" is impossible because accessing a computer by definition implies authorization otherwise it wouldn't happen? What's the appropriate punishment? A public apology? Restitution? 100 hours community service teaching kids to program?
I have no sympathy for Sony, but I feel deeply for the honest people who are just trying to earn a living working there. More so after reading about how some of them were openly complaining internally about things they saw as problems - many of these people were trying to make the company better from within.
I'm torn. On one hand, I want to see Sony the company suffer, but it still feels unfair to attack and expose the people who work there - the folks who are just trying to pay their bills.
I wonder at what point it becomes necessary for a person to judge the risk that joining a company with a questionable history poses to their privacy and personal security. The sad fact, though, is that in all likelihood Sony didn't have especially poor security; they likely had moderate to good security but failed when facing a persistent threat. I get the impression many other companies (large and small) would have failed a lot sooner.
Is there anything employees can do to protect themselves from these kinds of breaches? Is the answer to sue our employers who fail to protect this info?
> I have no sympathy for Sony, but I feel deeply for the honest people who are just trying to earn a living working there.
I don't understand this statement. Sony is not a person; it's a company comprised of the people who are earning a living working there. This is true from the CEO down to the temp workers. Did you mean just the executive team? The employee roster of any company is constantly shifting as people leave and join. There has been a lot of change in who works at Sony since 2005. Do you have no sympathy for the people who did not leave after 2005? Do you have no sympathy for new executives? What are you even saying? Maybe you just dislike all executives and managers.
I suspect what he is saying is that the people who probably had no visibility into the state of Sony's security, and certainly had no ability to influence it, are unfortunate victims here. While security is difficult to measure, and therefore difficult to manage and improve, it remains the responsibility of executives to allocate resources against that problem, and it is they who ultimately bear the majority of the blame when the security posture falls short.
IMO the higher up the chain you are, the more responsibility you have to secure the systems you are responsible for.
I feel like it's a perceived lack of accountability (from the perspective of the hackers) of the executive team that leads to these kinds of leaks. When they feel they aren't seeing justice - as defined by them - then I think they're more motivated to do something about it themselves.
What do you mean the people? Do you mean the brain cell that was responsible for their decision to work there? Or the one that was responsible for their decision to work at all? Or the one that...
Of course you can have sympathy for a company. What is a person if not a bunch of semi-autonomous systems working together to make decisions and perform actions? What is a company if not the same thing?
There are the people who work at Sony. There is Sony as an organization. They are not the same thing, but they can each be described independently. The decisions and actions of a company are a product not of individual humans but of a group of humans working in conjunction to do more than any one of them could do alone. You can't blame the person. That's not fair. You have to blame the organization.
However, taking the blame is exactly the job of the executive team, since a company can't be fired. You blame Sony. You fire their CxO.
> The decisions and actions of a company are a product not of individual humans but of a group of humans working in conjunction to do more than any one of them could do alone. You can't blame the person. That's not fair. You have to blame the organization.
So we can never again blame a CEO (or other executive) for the actions of the company? If the organization is a separable entity from the people it must be so?
Strongly equating a company and its employees only really makes sense if the employees also control the company, and therefore in some significant sense the company's decisions and interests can be said to be the aggregate of its employees' decisions and interests. Except maybe in a workers' cooperative, that's not usually the case.
I think it's unfair to punish someone for a decision they had no part in. How many of the people who had their info leaked had any part in choosing the security precautions, or were even aware of them?
The higher up the chain of command you are the more accountable you should be, but harming the poor folks at the bottom isn't doing anyone any favours.
> I'm torn. On one hand, I want to see Sony the company suffer, but it still feels unfair to attack and expose the people who work there - the folks who are just trying to pay their bills.
Companies can and should die. That's their secret superpower: they can die without destroying or killing all they consisted of.
That's why the economy works. Companies that make bad decisions die, and their resources (human and otherwise) are absorbed by other entities. Sure, it might hurt, but keeping alive a company that makes bad decisions causes much more hurt to the world. Pissing off lots of competent people to the point that some of them did what they did is not a good decision for a company to make in the 21st century.
> That we live in the world where we aren't sure if any given cyberattack is the work of a foreign government or a couple of guys should be scary to us all.
I am more impressed with another of his oracles: the 1945 essay “You and the Atomic Bomb,” in which Orwell more or less anticipates the geopolitical shape of the world for the next half-century. “Ages in which the dominant weapon is expensive or difficult to make,” he explains, “will tend to be ages of despotism, whereas when the dominant weapon is cheap and simple, the common people have a chance ... A complex weapon makes the strong stronger, while a simple weapon — so long as there is no answer to it — gives claws to the weak.”
Hacking is a cheap weapon with no answer to it, as has been proven over recent years, and it is therefore a democratizing force for good against the tyranny of the powerful.
Hacking does have answers (things like DDoS are harder to defend against though).
However, those answers tend to require significant resource investment (formal verification, use of software engineering processes that are cost-prohibitive, etc.). The evil despots and states are much more likely to be able to bring these resources to bear if need be than the common people.
After all, think back to when cheap weapons were available to all about equally and there weren't much better weapons available even to the rich... it was awful. You couldn't even go from one city to the next without being preyed on by "highwaymen".
Seeing the same thing happen on the Internet (where skilled hackers and not "common people" are really in charge) doesn't seem to be as uplifting to me as it seems to be for you.
> They just showed up. They sent the same banal workplace emails you send every day
Yeah, no, I have a bit of a problem with this. There is a lot of stuff coming out of this hack that should never, ever have been on corporate IT systems in the first place. Stuff that doesn't come out of regular HR data, where people should have a reasonable expectation of privacy and security, but stuff people put there themselves.
And I don't think this should ever be considered normal behavior, to use corporate IT systems to store such private data.
Yes, these people are victims, but I think it sends the wrong message to say that their own role in this was completely normal behavior. It should be possible to be critical of this without drifting into blaming the victim territory, and I'm kind of missing that from the whole Sony hack discussion.
I had an internship at a company where all email was scanned, labeled, and encrypted based on the type of content in the email. Access to email was restricted when not using the corporate network. The information policy was strictly enforced; no personal emails on the network, except when work is affected (ex: "my relative is getting married, so I'll miss the meeting"). The net effect of the company's measures would have protected employees from the situation Sony Pictures employees find themselves in; unfortunately most companies are not so stringent, so they are vulnerable like Sony Pictures.
I think that, yes, people should not expect privacy on a corporate network, and yes, people should distinguish corporate from private email. However, the employee's attitude is impacted by the corporate attitude, and many companies are not nearly as strict as they should be with information policy.
Also, am I blind, or can ZDNet honestly not tell the difference between their headline saying that Amazon denied AWS was being used by Sony in a DDoS, and Amazon's actual statement saying that such an attack was "not currently happening"?
I am not currently asleep. That does not mean that I was not sleeping earlier, or that I have never, ever slept.
Thanks for the links; I too couldn't find that in the cited article. I read the point 3 sentence and was just gob-smacked, and wanted to see if my model of reality was broken.
I'm pretty sure Sony Pictures had nothing to do with running Linux on your PS3, but regardless, many people being hurt here are just regular employees, not executives who made the decisions you/we don't like.
I think you misunderstand the organization of Sony. Sony Pictures has nothing to do with video games, be it the hardware or the games themselves. It is just an American subsidiary producing movies.
PS: the majority of the income comes from the games, because developers must pay Sony to publish a game on their platform.
Sony also develops and sells its own games and services.
Sony Pictures: part of the same org.
I am just assuming that because content is where the money is, they call the shots.
This is my peeve with the entertainment aspect of Sony:
Sony Pictures had nothing to do with any of that. Even if this kind of retribution were justified, this is like attacking the US Postal Service because you're angry at something the army did. The only thing they share is the people at the very top. Everybody that will actually be affected is innocent.
What's most surprising to me about this attack is the journalists who are digging through this illegally obtained material and then creating stories and spreading the details even further.
It's interesting that this is treated differently than the celebrity photo leak earlier this year. In both cases personal, private information was leaked (photos, emails), but for some reason it's OK to post the emails on legitimate news websites.
I would argue that this is a very bad analogy. One is only a photo of yourself (okay, in a setting you might not like, but still just a picture), while the other is something someone can use to fuck you up: put you in financial trouble, ruin your credit and your name (see other posts in this thread). This is in no way like leaked photos.
I agree both might be troublesome, but this leak is incomparably worse.
Your expectations of journalists must be very high to be surprised by this sort of thing...
Look at the reporting of any breaking news, especially emergency situations: the talking heads will repeat any rumor or scrap of news like it's truth, just to keep eyeballs glued to their station/site/Twitter/etc. And afterwards there is no accountability whatsoever.
I agree with you that it's not the right thing to do.
The messages from GOP seem more like desperate attempts not to sound like English-first kids. It's awkward in places where a translation service might use the wrong word, but with sentence structure that would not come from such a tool.
Someone wouldn't structure sentences the way they do AND make the word mistakes they do. The structure is too "good" for the terminology mistakes. It's like a bad Chinese accent in a cartoon.
I don't see how this sort of thing can be realistically prevented with today's software. I just don't. Yes, the damage can be limited through better IT practices, but at some point everyone needs to have access to their relatively recent email and access to their files germane to their projects, and that need won't go away. And as long as our laughably porous networks and operating systems and application software can be penetrated at will by a targeted attack, those emails and files will be vulnerable.
We need to start over. From the kernel on up. Until then, all we can hope for is band-aids like Qubes. And after we get there, we'll mostly have the hardware to fear. Not sure how addressable that part is, but it would be better than where we are now.
We need to start over. From the language that the kernel is written in, for starters.
Doing it all in C sounds like a rather stupid idea, because we tried that already.
I have high hopes that Rust, and the languages it will inspire, will be a big help here.
(Yes, I know, you probably can't write the whole kernel in a memory-safe language, but I'd guess at least 95% of the current Linux kernel could be. Probably 99% or more.)
There need to be wholesale architectural changes as well. If there's a convincing argument that a monolithic kernel can be made secure, I haven't seen it. Device drivers are simply too dangerous to be running in kernel space, for example, along with much of the rest of what we usually get in the kernel (filesystems, TCP and UDP stacks, etc).
No binary should ever be run without the kernel imposing an appropriately restrictive set of policies on what it can do.
And there needs to be more sophistication in how applications work. For example, a good argument can be made that binaries shouldn't be allowed to open their own file handles under most conditions. Your word processor shouldn't be the one putting up an open dialog and opening anything you point to in your directory tree, because that's how it can be talked into dumping your entire home directory to a foreign server. It should have to ask the OS to put that dialog up, and then accept the file handle(s) the OS hands it (see the sketch at the end of this comment).
And then there are protocols that are broken by design, which we haven't even touched on.
Everything, from top to bottom, needs to be rewritten with security in mind. Not sure how that could realistically happen, unfortunately.
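On the file-handle point above, here's a minimal sketch of that pattern in Python. All class and function names are hypothetical; in a real system the broker role would be played by the OS or a sandbox framework, not by another userland object.

    # Sketch of capability-style file access: the app never calls
    # open() itself. A trusted broker (standing in for the OS) runs
    # the file picker and hands back an already-open handle. All
    # names here are hypothetical.

    class FileBroker:
        """Trusted component: the only code allowed to open files."""

        def pick_and_open(self, mode: str = "r"):
            # In a real OS this would be a trusted file-picker dialog;
            # prompting on stdin is just for illustration.
            path = input("File to open: ")
            return open(path, mode)  # the capability: an open handle

    class WordProcessor:
        """Untrusted app: can use handles it is given, nothing more."""

        def __init__(self, broker: FileBroker):
            self._broker = broker  # no ambient authority to open paths

        def load_document(self) -> str:
            with self._broker.pick_and_open() as handle:
                return handle.read()

    if __name__ == "__main__":
        app = WordProcessor(FileBroker())
        print(app.load_document())

The payoff: even a fully compromised WordProcessor can only read files the user explicitly picked, one at a time, instead of walking the entire home directory on its own. Of course, Python itself can't stop WordProcessor from calling open() directly; the enforcement has to come from the kernel, which is exactly the parent comment's point.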
Properly airgapped systems are really tedious to work with. You can only install software from physical media. In a world of digital distribution that's a huge inconvenience; what do you do when you want an update to Adobe Premiere?
And if you don't airgap all the editing systems you might as well not bother.
An unreleased film is valuable property, but it's not a matter of life or death. Ease of distribution probably outweighs the small possibility of a hack if you're doing security right.
And if you're not, well, you're also not going to consider measures like an air gap.
There are a lot of other companies that need access to those films. For example, the people doing subtitles, or printing the film, or doing digital distribution. I doubt the systems that stored these movies were hacked directly; more likely, once the password files leaked out from the main attack, the movies were simply downloaded from their distribution servers. That's why there haven't been any leaked work-in-progress films: those are worked on from airgapped networks. It's only when a film is close to release that it's put on a system connected to the internet so it can be distributed to partner companies.
I wonder. Dailies get passed around to execs, so you'd think those would have been stolen as well. Maybe Sony has a system for that that's airgapped, but I kinda doubt it.
No one should have any expectation of privacy for anything they put on the Internet. Privacy is something you have to work for, and unless you have those controls (PGP, encryption at rest) in place, privacy doesn't exist. Even if you have those things in place, your privacy is only as strong as the privacy of the people you are communicating with.
We should work towards strong privacy as a default, but the service delivering targeted ads is not incentivized to protect your privacy.
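On "encryption at rest": the mechanics are the cheap part; key management is the hard part. Here's a minimal sketch using the Python cryptography package's Fernet recipe (assuming that package is installed; keeping the key in a local file, as done here for illustration, would largely defeat the purpose in production, where it belongs in an HSM or KMS):

    # Minimal encryption-at-rest sketch with the `cryptography`
    # package (pip install cryptography). The key handling below is
    # for illustration only: a key sitting next to the data protects
    # against little.

    from pathlib import Path
    from cryptography.fernet import Fernet

    KEY_FILE = Path("storage.key")  # hypothetical location

    def load_or_create_key() -> bytes:
        if KEY_FILE.exists():
            return KEY_FILE.read_bytes()
        key = Fernet.generate_key()
        KEY_FILE.write_bytes(key)
        return key

    def encrypt_to_disk(plaintext: bytes, dest: Path) -> None:
        dest.write_bytes(Fernet(load_or_create_key()).encrypt(plaintext))

    def decrypt_from_disk(src: Path) -> bytes:
        return Fernet(load_or_create_key()).decrypt(src.read_bytes())

    if __name__ == "__main__":
        encrypt_to_disk(b"beneficiary: Jane Doe", Path("hr_record.bin"))
        print(decrypt_from_disk(Path("hr_record.bin")))

Which illustrates the parent's point: the crypto is a few lines, but your privacy is only as strong as wherever that key lives.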
Some of this stuff, sure, it's not necessary at work so you are taking a risk of it becoming public for no reason. But some of it is necessary in a corporate office. I have emailed HR questions about medical insurance or 401ks that I wouldn't want to post on my Twitter. Identifying the beneficiary of my life insurance policy doesn't seem like something I want many people to know, yet HR has to. What else are you supposed to do than use the internal systems? You have to trust IT/Security with that stuff.
> No one should have any expectation of privacy for anything they put on the Internet.
Right, except Joanna Public does have this expectation and continues to do so because of manifestly poor public education about the risks of the internet. We should literally have lessons about this stuff in schools.
And as another commenter stated, these things were not "online", they were "on a computer". If I can have no expectation of privacy for things I put on a computer then that is going to lead to no expectation of privacy under any circumstances. This stuff is going to have profound social consequences and something has to change.
I did receive education on the risks of the internet, starting in elementary school (around 4th grade). In grade school we were given school-district emails, which we used to communicate with teachers, get information from the school district, and could use to talk to other students.
I still remember when we were first given these, because our teacher gave a very long warning about using the internet. I remember the exact phrase: "Whatever you put on the internet will be there forever, and you can never take it down. Think about that before doing anything."
This was about a decade ago, but I grew up in Redmond, Washington, so my education was probably very different from other people's (at least regarding computers).
Nah, that's the same education we got in 1995 in high school, too. What happened in the interim is that there were a couple of dot-com bubbles in which companies encouraged you to share recklessly, because they built business models on it. Ignore all that safety stuff. So when I saw all these services pop up asking you to do things we were always taught not to do, it created intense feelings of unease in me, but everybody did it anyway. And then nude pics get hacked and shared, people get harassed or fired over a public Facebook post or tweet saying something stupid, and everybody acts like it's a big fucking surprise when all they had to do was follow the rules. But the rules conflicted with the merger of narcissism and corporate profits.
And a lot of this wasn't even "online", where online is defined as "internet-facing". Once someone is in your network, it doesn't matter what is internet-facing anymore.
If we're talking about the general public here, I think we're going to want to come up with more nuanced guidance than "Don't store private data on any internet-connected computer, ever."
Maybe we're talking past each other here-- there's a significant qualitative difference between the risks of uploading data to a globally-routable server somewhere, and storing that same data on a firewalled, password-protected system on an internal network which could theoretically be owned from outside.
The former is obviously unsolvable, and "don't transmit anything you don't want shared" is good advice. But surely we want to arrive at a world where typical end users can feel secure about the data on their own computer, even if it is plugged into an ethernet jack.
Aren't those things logically inconsistent? If I feel secure about my data, it means I have a strong expectation that it won't be stolen.
But again, we're talking about different stuff. Obviously the people reading this, who know what "zero-day" means, won't be shocked by any individual attack. That doesn't make it a good idea to encourage end users to believe that all computers are inherently untrustworthy. Normatively speaking, we should set an expectation of high security in general, such that normal people can have faith that if they use recommended software and follow the rules, they're safe, at least from non-state attacks.
After all-- if computers are inherently untrustworthy, why should users bother to follow the rules? (From the sounds of it, that might be exactly what happened to Sony.)
Put on a networked machine, maybe. But despite the buzzword-lathered Cisco releases, there is such a thing as "internal networks" and heightened security zones. Your bank account being on a server in a datacenter doesn't make it "on the Internet", and there is absolutely a higher expectation of privacy when sending messages on a protected email exchange vs. IRC on Freenode.
The incident proves that Sony is incompetent and didn't prioritize cybersecurity the way an information-based company should.
The network is a lot more porous these days, but if your bold assertion were as serious as you think, then every bank in the world would be popped. Sony was the slowest deer.
So, don't give your employer your social security number or any other sensitive data? And still expect to have a job and get paid? That doesn't seem like a tenable plan.
Thank you for saying this. Events like this make it clear how much everyone exercises extreme wishful thinking in these situations. Nothing about these systems is secure, and no one should have that expectation.
There has to be a range of options. Strong privacy is simply expensive to do well and not everyone cares to do it well. We should expect strong privacy from institutions that need it (any government, including municipal, and organizations such as AT&T) and give out pseudonyms to others.
Your employer should be competent to keep your personal data secured as well, and they likely will have to do that by selecting a reputable provider for HR operations services.
Yes, we should "work towards" strong privacy by default, but since we'll never get there we need to accept alternative models.
You're basically saying that nothing on a networked computer should be considered private, including all the authentication details, which kind of makes banking (and bitcoin!) meaningless.