This article makes the point that law enforcement agencies take the stance that paying a ransom further encourages this behavior from hackers.
In the case of state or public institutions like this, would it be advisable for legislatures to make it illegal for state entities to pay ransoms, and then very publicly announce these laws? I.e. can/should we make credible, public commitments in advance to not pay ransom, or to remove that choice from the organization-level administrators? Would this make these organizations less appealing targets?
"Sorry, we are not authorized to pay you any ransom due to SB-XYZ. If you can get several hundred thousand signatures from CA residents to petition for a referendum to overturn this law, we may be able to pay you a ransom after ... well not the upcoming election but maybe the one after that."
Interestingly, this is pretty much the split we've seen regarding terrorist hostage-taking in North Africa. While European governments have generally paid ransoms for the return of their citizens, the US Government steadfastly refuses to pay.
In the early years this generally led to better outcomes for European citizens, but over time it has reached the point where the terrorists actively avoid kidnapping Americans and prefer Europeans. Assuming these types of hacks are explicitly targeted, I imagine we'd see a similar dynamic play out.
I’ve traveled a lot in the Sahara and an old expression amongst the expats was “The French send troops, the Germans send money and the British send regrets”.
Pretty sure I heard on a NYT podcast that proxies are used for US citizens who are kidnapped. Specifically, a high-profile US citizen kidnapped by ISIS was returned after payment through a proxy.
Wealthy private citizens will pay, but the government says it will not. In the case of a private citizen paying a ransom for another you run into money laundering laws and trade restrictions.
This doesn't work in practice: companies that aren't allowed to pay those ransoms usually use proxies (some other company that isn't bound by those restrictions) to pay the hackers.
There are ways to prevent the "bags of money" from happening. The Foreign Corrupt Practices Act (FCPA) comprehensively prohibits even using the most obscure arrangements to pay bribes. Large institutions hire expensive lawyers to ensure their ongoing compliance with FCPA, because the penalties for failing to prevent your organization from paying bribes are extensive. You can't completely eliminate a practice through law, but you can come close, and FCPA has done more for this problem globally than nearly any other measure enacted by a government.
> The Foreign Corrupt Practices Act (FCPA) comprehensively prohibits even using the most obscure arrangements to pay bribes
You can have all the laws you want in words on paper, but if they're not enforced, for all practical purposes, they don't exist.
The people who enforce the FCPA must be understaffed or undermotivated or underfunded because I've worked for several companies that regularly paid bribes as part of doing business.
One example: I worked for a large media company that would send TV crews to cover stories in Mexico on a fairly regular basis. Almost every time the crews tried to return to the United States, the Mexican border personnel would seize their very expensive gear. The only way to get it back was to pay a bribe.
This was so common that everyone was told to just mark it down on their expense reports as "Airport tax." I only found out about it when I started asking why I kept seeing "Airport tax" on expense reports for trips I knew were done in cars.
Your example would be a _very_ far stretch for FCPA.
The law is about bribes for "obtaining or retaining business". It's one thing if you were paying a bribe to say, a local minister to get exclusive access to some sort of scene...
But low-level crooks pretty much sticking you up and you try to buy your stuff back from them under the guise of "government business" is not the kind of thing FCPA is about. It's for concerted attempts to pay off foreign officials to strengthen your business.
Which surely still happen, but not in the manner you're describing. FCPA violations wouldn't be the sort of thing that "everyone" is told about.
IIRC the kind of phrasing used is “external security consultants”.
“We didn’t hand duffel bags of money to the perpetrator group’s courier, we hired a professional external individual security consultant to handle the situation”
News from a few months ago: you just had your servers hacked into and all your database are belong to them. The black hats demand X bitcoins as ransom, but you cannot pay because it would violate certain laws. So you hire an intermediary who pays for you, thereby avoiding the legal problem.
By very loose analogy, either when playing chicken, or when you and a person walking towards you both repeatedly veer in the same direction to avoid collision, one tactic is to very conspicuously cover your eyes. The other person can then see that you will not re-correct based on their behavior. Though I know this option exists, I have never successfully used it. It's always difficult to truly intentionally commit to limit your options to respond to future circumstance.
I heard of this as a kid, something along the lines of 'when walking down a street make an effort to look forward, through people (and not at them)'.
Same concept applies, and in my experience it seems to work. Though this was before the era of phones (and people not looking where they're going regardless)
Pass a law to 1) forbid public entities from paying ransoms; 2) require stringent, timely (within 24 hours) public reporting of incidents; 3) require stringent public reporting of root-cause analysis, resolution, and future remediation.
If it is a legal requirement of my job to do the right thing, I'm gonna do the right thing.
Money laundering has the benefit of Federal law working to help the State laws. I think in an environment where there are 50 different legal regimes it's inevitable people will develop workarounds. You see legal arbitrage in every instance where legal differences exist between states. From corporate law to family law. I don't know why this would be any different.
If you want to stop the hackers, make it a Federal crime to pay anyone. In that environment, there would be no circumventing the restriction at all.
But it would remove public institutions from the target list. Also, in the case of private institutions, if it were a criminal offense to use such a proxy, an investigator could discover this. The threat of prison for any officer of a corporation who arranged such a payment would be a powerful deterrent.
Not that it undermines your overall point, but it might prove to disincentivize attacking smaller companies that aren't in as strong a position to use proxies -- which I would still count as a win.
But at least it's difficult and illegal. It makes them less of a target for hackers since they're less likely to pay and it places liability on anyone who tries to work around the law.
Banning ransom payments won't magically fix the underlying vulnerabilities that allow these gangs to deploy their ransomware.
If ransoms weren't being paid, criminals would find other ways to monetize the data. "Honest" ransomware is actually good for the public in the sense that should the ransom be paid, the data is indeed destroyed by the gang. Make ransoms impossible and they will start selling the data or monetizing it in other ways (identity theft, card fraud, etc), at the expense of the public.
Given that we can't eradicate this kind of crime entirely by improving security, I think ransomware is the least bad option in the sense that it punishes the offending company while minimizing the risk of the data being leaked which would hurt the data subjects themselves (the public).
> Given that we can't eradicate this kind of crime entirely by improving security, I think ransomware is the least bad option in the sense that it punishes the offending company while minimizing the risk of the data being leaked which would hurt the data subjects themselves (the public).
There is nothing to guarantee that attackers will destroy the data and not further exploit it even if you pay them. Improved security isn't going to fix the problem, but we can make it less profitable and make that profit more difficult. If our policy is to pay, we're just making it highly profitable with very little effort on the part of the attackers. If we refuse to pay, they will have to pore over our data looking for what may or may not be valuable to anyone, spend time searching for those people who might pay them for it, and then spend time convincing them to pay enough to justify their time and effort.
We should be refusing to pay and making sure we've got backups of our own stuff so that we'll never have to.
> There is nothing to guarantee that attackers will destroy the data and not further exploit it even if you pay them
Their business model relies on them being honest. If they don't follow through on their promise of destroying the data they'll kill the ransomware market entirely. So far, I haven't heard of major instances where ransomware gangs didn't fulfil their part of the bargain.
> Their business model relies on them being honest.
Truthful at least, "honest" isn't a word I'd use for these types.
> So far, I haven't heard of major instances where ransomware gangs didn't fulfil their part of the bargain.
The point is that you wouldn't. They can't publish the data or publicize its sale, but (if they were willing to invest the time) they could still sell it privately, or use it themselves to further attack/exploit you without you ever being able to trace anything back to them directly. They could wait months or years if they wanted and still find value in it (bait for use in spear-phishing for example).
And say they do make it illegal for state entities to pay ransoms... then what? What happens when a ransomware attack does occur? They contact the FBI... great... now what? How do they get their data back? What obligation does the FBI have to track down the gang and recover the data? What's the timeline?
See, the issue I have with making it illegal for state entities to pay ransoms is that it ties the hands of the victim without any guarantee that law enforcement will help, and help in a timely manner. I see this as a lose-lose situation.
The point is that there's no incentive for hackers to target state entities.
Hackers can target state entities for other reasons, but no rational hacker would do it for the ransom, since there won't be any ransom paid.
The FBI can simply say "We'll never catch the hackers, but if you pay them you'll go to jail". It accomplishes the same goal of reducing the reward for hacking to zero.
It seems this law is intended to benefit those with the most resources to implement the best security, leaving smaller businesses to pretty much pound sand.
We have arrived at why "a pretty basic backup" is no longer feasible for... any business. A hard sell for a four-person business with no dedicated IT team.
Sure, but to a general at HQ, 1 dead soldier is better than 10. The policy is devastating to that 1 soldier (and family), but that's not enough reason to adopt an opposing policy that would save the 1 but kill the 10.
Similarly, I can appreciate the logic in making American companies less likely to be targeted by ransom hackers, even if it means some companies are hit harder in the short term.
You've made the implicit assumption that it is acceptable and desirable for the government to sacrifice some companies to save some others. I'm not so sure that's the government's business, and it sounds a lot like a taking to me. Perhaps it is acceptable in the era of Kelo.
OK, fair, although even with the example public goods listed in that Wikipedia page their provision in reality still does end up supporting certain companies and harming others - e.g. if I'm in the business of selling air purifiers, government efforts to reduce air pollution are going to negatively impact my sales.
I totally agree that government policy can shape the market, and my issue is not at all with that happening as a by-product of public goods, but only when it is a direct and deliberate action.
Got it. I think where I lost you was in your use of "picking companies out" - I didn't realize that you meant only intentionally as opposed to incidentally.
Right, today the logic is: If the risk-adjusted cost of a ransom is less than the cost of implementing proper backups, then it makes sense to just not do backups. If paying ransom was illegal, maybe they'd actually invest in those backups.
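To make that tradeoff concrete, here is a toy expected-cost comparison in Python. Every number (attack probability, ransom size, backup budget) is a made-up assumption for illustration, not a figure from the article:

```python
# Toy model of the "skip the backups" calculus described above.
# All numbers below are hypothetical assumptions.
p_attack = 0.05          # assumed annual probability of a ransomware hit
ransom = 1_140_000       # assumed ransom demanded if hit (USD)
backup_cost = 100_000    # assumed annual cost of a proper backup program

risk_adjusted_ransom = p_attack * ransom
print(risk_adjusted_ransom)                  # 57000.0

# Under these numbers, not doing backups looks "rational":
print(risk_adjusted_ransom < backup_cost)    # True
```

If paying were illegal, the expected cost of a hit would instead be the full cost of permanently losing the data (plus legal exposure for anyone who pays anyway), and under most plausible numbers the inequality flips in favor of backups.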
>because they care about their data and/or the privacy of their data.
>Making it illegal for them to pay just means that they can't look after that interest. Why would that be a good thing to do?
You only have the criminal's word to stand on when they claim to delete data. It's far too easy to simply hang on to the troves of collected data and wait for a rainy day.
The point is that ransomware is only written because hackers can get high payouts. If the penalty for paying a ransom is higher than the cost of not paying the ransom (and losing the associated data), then no one pays the ransom, and if no one pays the ransom, no one makes ransomware (or at least no one targets institutions that can't pay ransoms).
I believe this is discussed in Schelling's book "Strategy of Conflict", which I've never read but has been much discussed online[1]. Indeed the article I've linked specifically mentions this case.
It's a public institution. It's not "their" data. It's their shareholders' data: the public.
Whether or not trusting the judgement of administrators over the judgement of law enforcement is the best way to handle these situations is an open question.
I'm not sure I trust public university administrators to do much beyond stimulate the local construction economy and wider investment banking industry.
The problem with that scenario is that it's probably the same public legislatures that have failed to fund adequate information security for these public institutions. If such a law was paired with appropriate funding then sure, go ahead. If not then what you'll get is more public institutions getting hacked and officially prevented from paying the ransom to get files back.
Consider fake ransomware that doesn't decrypt even after payment is made.
Would it be moral/societally good to write and distribute this software? If it became prevalent enough, it would damage the ransomware model as people would be much less likely to pay if they thought there was a significant chance of payment not fixing their issue.
"Sorry, we are not authorized to pay you any ransom." I think that's a much harder pill for the victim to swallow than for the hacker, especially if they otherwise would pay because they need to get the data back.
One thing that might work is if white hat hackers outnumber the black hat hackers and create ransomware that doesn't have a decrypt option. At a certain point, people will stop paying the ransom.
Another option is: forbid bitcoin and other cryptocurrencies.
And then the “committee” meets and makes a majority decision to pay by secret ballot, and another committee makes the actual payment (again by majority).
Who do you prosecute?
Would you close the University, causing huge harm to the students and researchers?
What you're describing is conspiracy to commit embezzlement; everyone who participated in that conspiracy gets a tenured position at Folsom. And why would you close the university? Are you seriously claiming that everyone in management is determined to do everything possible to hand money to criminals?
Really? In a secret vote? Who is to blame? Are you sure people are really at their best when their positions are at stake and they can hide behind a committee and the solution is “grey”?
What if you had a secret vote on whether to murder someone? Who would you blame?
You would charge the people who recorded the outcome of the vote and did the killing with murder, and you would charge everyone who participated in the vote while knowing one of the outcomes was illegal with conspiracy and failure to report.
Email your congressman/woman: paying extortion fees to cybercriminals should be illegal - and severely so. With the stroke of a pen, a law making the practice illegal would immediately allow every institution and corporation in America to say, "We cannot pay your fee no matter how hard you press us, as we would face jail time if we did so."
Would gangs still try to extort people? Of course. But large institutions would no longer be a target, because their internal controls would prevent the payment of extortion fees. Small organizations might still pay fees, but the potential take for gangs would be reduced remarkably.
I think this is an interesting direction, but I wonder is there a successful precedent for something like this? Perhaps some government somewhere in the world has already tried this? And if not for hacking, data theft/encryption, maybe there are analogues like (and this is a stretch) large organizations that have managed to continue operating in regions where kidnapping for ransom is common?
There's quite a lot of literature on the US and UK's no-concessions policies on kidnapping. Here's one example [1]. A few quotes:
> Despite the U.S. no-concessions policy, U.S. citizens continue to top the list of nationalities kidnapped by terrorists. This may be explained by the prominent role and perceived influence of the United States and the ubiquity of U.S. citizens around the world. Nationals of the United Kingdom, which also has a no-concessions policy, are second on the list.
> While a no-concessions policy may not deter kidnappings, it may affect the treatment of hostages in captivity and determine their ultimate fate. According to a 2015 study published by West Point, Americans held hostage by jihadist groups are nearly four times as likely to be murdered as other Western hostages (Loertscher and Milton, 2015). The no-concessions policy may be only part of the reason. Another factor would be the jihadists’ intense hostility toward the United States.
> While the U.S. no-concessions policy has not deterred kidnappings, there is some evidence that political concessions and ransom payments appear to encourage further kidnappings and escalating demands.
> And although it did not produce any demonstrable decline in kidnappings of U.S. citizens, a 2016 study published in the European Journal of Political Economy argues that, without the no-concessions policy, there would have been even more kidnappings of U.S. nationals (Brandt, George, and Sandler, 2016).
My take: Arguably, part of the reason the policy has not been successful in preventing kidnappings is that most of Europe does pay ransoms, and Europeans and Americans are not always easily distinguishable. Even if the policy hasn't directly stopped kidnappings, it probably has stopped them indirectly, by avoiding funding kidnapping organizations. Europe has spent hundreds of millions of dollars in ransoms to terrorist organizations, and Qatar allegedly paid close to a billion dollars in ransom. This has to fund further efforts.
Additional literature on the topic[0]. The finding is that any payment at all is sufficient for the operation to continue. This makes sense for ransomware as well, since the marginal cost of hacking additional targets is effectively zero.
The major ransomware operations are targeted and the hackers do research the victims. They use spear phishing, so they need to know their victim. Unless the ban is universal and consistent so that hackers can modify their behaviour before they hack a target, there is no point in doing it. The US treasury announcement about not paying ransoms is just such a pointless terrible idea.
The problem is the transition period... maybe set a time window that caps the permitted ransom amount and then lower the cap gradually. That way the momentum from existing malware doesn't fall on one specific university or group.
Costs are zero. The ransomware platform runs as a server somewhere, and whether they get one ransom payment or 1000 their costs are the same. The gangs that do the hacking operate on commission, they attempt to phish multiple targets (hundreds, more?) at a time. Again, the cost of one payment is sufficient to justify the entire endeavour.
As long as even one victim will pay, then there is no incentive to stop hacking.
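A back-of-the-envelope sketch of why, with near-zero marginal costs, even a tiny conversion rate sustains the whole operation. All figures here are invented for illustration:

```python
# Hypothetical ransomware-campaign economics; every number is assumed.
targets = 1000            # phishing attempts in one campaign
conversion = 0.002        # assumed fraction of victims who pay
avg_ransom = 300_000      # assumed average payout (USD)
fixed_costs = 20_000      # assumed server/commission costs per campaign

expected_revenue = targets * conversion * avg_ransom
print(expected_revenue)                  # 600000.0

# A couple of payouts cover the entire campaign:
print(expected_revenue > fixed_costs)    # True
```

Because `fixed_costs` barely changes whether one victim pays or a thousand do, the campaign stays profitable until the conversion rate is driven essentially to zero.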
Costs are low but not zero. If they were zero then all exploitable ransomware opportunities would already have been exploited. When vulnerabilities are discovered the researchers have the option to publish for reputation, disclose for bounty, or market the vulnerability to criminals or intelligence organizations. If vulnerabilities are sufficiently hard to monetize for criminals then they'll be more likely to get routed somewhere else.
“We do not negotiate with terrorists” sounds like a great line but isn’t always practical in reality. If many people’s lives were on the line because of this hack, it would be difficult to justify not paying the fairly small ransom.
> And an anonymous tip-off enabled BBC News to follow the ransom negotiations in a live chat on the dark web.
So, that "anonymous tip-off" was obviously from the hackers, right? I guess the other option is a "whistleblower" at UCSF (would anyone else know about it?), but the hackers have a lot to benefit from everyone knowing about it, so next victim thinks "Gee, respected institutions like UCSF are willing to pay the ransom and didn't have the capability to recover otherwise, we should probably just pay the ransom too".
the whistleblower option is the most probable one, isn't it? Universities have to operate in a transparent manner and have no incentive to obscure facts here.
It doesn't sound like the university publicly and transparently revealed this -- the BBC wouldn't have to be cagey about how they listened in on the chat if that were true, they could just say "according to UCSF". But it wasn't that, it was an "anonymous tip-off".
So we already know the university was not being transparent and open about it. When I say "whistleblower", I mean someone who secretly gave the BBC the info and remains secret because they weren't supposed to and would be disciplined at work for it.
The university has PLENTY of incentive to obscure facts here, because the official line is that it's immoral to pay hackers like this (it encourages future hacks, law enforcement says not to do it), and because it reveals them as having made IT mistakes that led to a ransomware takeover where they decided their best/cheapest recovery option was to pay up (instead of restoring from backups etc). It does not make them look good to have paid up, that's plenty of incentive to not want the BBC to report it.
Also, having spent many years working for universities, I think it's kind of cute that you think they "have to operate in a transparent manner." Would that it were true.
They got hacked, it makes them look incompetent. People might call for some of the staff to be fired for not having security or backups.
Prospective students, research participants, etc. might hesitate to go to UCSF if their data's going to be exposed.
Also, they paid the ransom. Funding sources from alumni to state legislatures might hesitate to give more dollars, if the university's using its money to pay off extortionists as opposed to improving education or lowering tuition.
The university has lots of reasons to hide what happened.
Sounds like the BBC literally eavesdropped on the chat: they were able to log in to it. I doubt the public forum announcements give the public the information needed to log in to a chat where the hackers are negotiating with the target!
That would still require someone to tell the BBC. I doubt that the BBC routinely enumerates hidden services and then looks to enumerate all potential chat rooms and randomly stumbled upon a negotiation.
So the bad guys used a public ledger (Bitcoin) to get paid? Why aren't the hackers asking for privacy-focused cryptocurrencies such as Zcash (which uses zero-knowledge proofs) or Monero? Bitcoin? What's their plan next?
I'm not saying you can't get away with this (there are coin "mixers" and decentralized exchanges) but still, this leaves lots of traces left and right.
For example, we saw a lot of people getting busted recently who thought they were smart using cryptocurrencies, including a money-laundering ring... And they were using mixers, decentralized exchanges, people located across several countries/continents, and whatnot, if I recall correctly. Yet: all busted.
For all we know, in six months the headline could be: "Hackers who extorted 1.14M USD from UCSF arrested by Interpol"
Besides that: what happened to offline backups? How exactly are hackers getting at cloned, unplugged HDDs/SSDs stored on shelves or in bank safes? (I know several companies doing just that for offline backups.)
I hope this serves as a wake up call to companies/institutions either not doing backup properly or outsourcing to incompetent companies not doing backups properly (the latter being not really excusable).
> So the bad guys used a public ledger (Bitcoin) to get paid? Why aren't the hackers asking for privacy-focused cryptocurrencies such as Zcash (which uses zero-knowledge proofs) or Monero? Bitcoin? What's their plan next?
I imagine they chose Bitcoin because it's the most liquid - they can just tell their victims to go to Coinbase or any cryptocurrency exchange to acquire it. If they ask for Zcash, the options are a lot more slim.
Indeed, I imagine a service that accepts bitcoin and provides you monero would instantly anonymize you. To outside observers it would look like you sent the BTC to an exchange and they'd continue following them to innocent wallets while you've hopped off the network entirely.
> Besides that: what happened to offline backups? How exactly are hackers coming for cloned, unplugged, HDDs/SSDs stored on shelves / bank safes? (I know several companies doing just that as offline backups)
I've heard, though I have no way to verify this, that some ransomware gets installed and just writes copies of itself for a while before really activating. The copies get backed up, and if you restore from backup you restore them and they get activated.
Filesystems that make this possible are the real crime against sanity. Most of the data would be stored on network shares, and the ransomware pulls the files, encrypts them, stores them back to the network share overwriting the original copy. Madness. Yes disk space isn't cheap, until you see the alternative.
We have a basic network filestore at Fastmail, it's not even a key part of our offering, but it stores up to 30 old copies and if you keep overwriting it does exponential backoff so you have the oldest copy in the past 2 weeks, plus one from a week ago, plus one from 3 days ago, etc up until a bunch of very recent copies. Ransomware would have to be running for 2 weeks to wipe out all the original files - and during that time the massive increase in disk usage would alert operations to something going on!
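The retention scheme described above can be sketched as follows. This is a hypothetical re-creation of the idea (keep the newest copy in each exponentially widening age bucket), not Fastmail's actual code; the function name and parameters are invented:

```python
from datetime import datetime, timedelta

def snapshots_to_keep(snapshot_times, now, max_age_days=14):
    """Pick which old copies to retain: the newest snapshot in each
    exponentially widening age bucket (0-1 days, 1-3 days, 3-7 days,
    7-15 days, ...), so constant overwriting still leaves one copy
    from ~2 weeks ago, ~1 week ago, ~3 days ago, and so on."""
    keep = set()
    width = timedelta(days=1)
    edge = now
    while now - edge <= timedelta(days=max_age_days):
        lower = edge - width
        in_bucket = [t for t in snapshot_times if lower < t <= edge]
        if in_bucket:
            keep.add(max(in_bucket))  # newest snapshot in this bucket
        edge = lower
        width *= 2  # exponential backoff: buckets double in width
    return keep

# Hourly snapshots over 15 days collapse to just a handful of copies:
now = datetime(2024, 1, 16)
snaps = [now - timedelta(hours=h) for h in range(15 * 24)]
print(len(snapshots_to_keep(snaps, now)))  # 4
```

The key property for ransomware defense is the one the comment notes: wiping out every retained copy requires the malware to keep overwriting for the full retention horizon, during which the ballooning disk usage is itself an alarm signal.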
Likewise our email server software does integrity checks during replication between machines and won't perma-delete anything for a week after it gets expunged - and message content is immutable after writing, so changing anything is creating a new record and expunging the old one.
It costs extra space - but being safe against a client virus like this encrypting all the data on network shares isn't rocket science, and the network filesystem vendors who don't default to data safety are as much to blame as anybody for this still being a problem in $CURRENT_YEAR.
It really bugs me when I hear of institutions paying these ransoms.
Regardless of the damage, I'd just take the bullet, fix my security, and not pay. Be consistent in this, and keep it up for a while. Long term: no more extortion for anyone.
It's irrational to "take the bullet" if the damage is significantly greater than the ransom.
It's not irrational. It's called doing the right thing.
Sure, it's better in the long term, but not for the person/organization being ransomed.
That's called being selfish. One would expect an institution like UCSF to act for the benefit of all of society and not like a six-year-old grabbing all the Easter eggs at the hunt and saying, "I got mine!"
We can't survive in a society where min/maxing benefit is the sole form by which we determine whether something is the correct action or not.
I am sure you would agree that gender-based abortion, deforestation, and infinite copyright periods could be seen as "rational" to people in certain societies and certain economic situations. It doesn't mean that we should let such actions go without comment.
I think this assumes an organization has proper data backup strategies in place. If you have daily snapshots and full weekly backups, then ransomware should just be a nuisance and cause some people to work weekends / after hours.
Enterprises generally go too far in the direction of restricting data access and copies of data, making themselves more fragile. They should outsource custodianship of the data, or do what AWS etc. do: proper backups and fault tolerance.
Hi, I'm not sure if my comment will be read since there are a lot of them already, but what would the best move be for a medium-sized company in this situation?
Hypothetically, the fees won't be as astronomical as in UCSF's case, but the importance of the data being held for ransom will still be the same. Should they take the risk of having their financial/healthcare/IT data published if they don't pay the fee?
And what about backups? Is it cheaper to pay the ransom than to reinstall and copy the data back? And then cut off the f* internet until some security is in place??
If that ransomware uses something like flash storage for persistence, why not have courts or regulators force hardware manufacturers to stop enabling worse and worse viruses? Floppies, CD autoplay, USB, FireWire, Thunderbolt, 5G networking: everything exploitable right from the factory.
Attacks like this make me think there's a real ($1+ billion) opportunity for a tech-first insurance company covering security incidents.
Write insurance policies for major companies, but as a pre-condition for being underwritten you have to submit to periodic security reviews by legitimate security pros. Failure to adhere to the security recommendations means your policy gets dropped.
That insurance already exists. Periodic review by security pros is pretty much worthless, however. Nearly every company that has been hit had review by security pros, and many had compliance certifications.
What exactly are these security "pros" doing? I just don't understand how the ransomware guys could destroy the snapshot backup copies of the database in my desk drawer. Why don't companies with this much data to protect have air-gapped offsite backups?
How good are these reviews really? I'm thinking of top tier security pros on the calibre of Project Zero/Google. Doubt most hacks would have gotten past a thorough audit by those folks.
No material difference from a defense perspective. Offensive prowess does not result in defensive prowess. Just look at the Android bug bounty program [1], only $250K for remote kernel arbitrary code execution, or Apple's [2], $250K for a one-click kernel arbitrary code execution. To be fair though, that is the high end of security; you could probably totally compromise any Fortune 500 company for less than that. And no, I am not joking or exaggerating; that is a serious statement.
Really, all these audits do is validate your security. If they find something at a given price point, then you are probably vulnerable at that price point. Think of it like a live-fire test of a bulletproof vest against a gun: if a bullet goes through, then you probably cannot protect against that; if it does not go through, you still cannot be certain the vest provides comprehensive defense against that gun and bullet, but it is at least not totally ineffective.

In the current state of the industry, any competent audit will find multiple critical vulnerabilities at these price points. It is like shooting an airsoft pellet at a "bulletproof vest" and seeing it pierce through. The flaws are so fundamental that testing against a real gun (a better offensive specialist) is kind of meaningless, since to actually solve the problems you already need to completely redesign everything.

Unfortunately, most companies who get such audits done think the takeaway is that the places the airsoft pellet went through must be the only problems, and that if they just patch those up everything else must be fine because nothing pierced the other pieces, instead of realizing that observable quality defects in one place probably mean there are many unobserved quality defects elsewhere.
This is the point. Any review process that's sufficiently mechanical to be duplicated at scale becomes a box-ticking exercise with little actual value. E.g. you can verify that backups are made and can be restored, but how do you find mission-critical data that isn't subject to backup?
I think those already exist. I’ve heard claims of insurance companies refusing to pay out for ransomware because they should’ve had backups (they’ll pay for recovery from backups, but not ransom).
It's a great idea, iff you can accurately price security risks.
Security is an org problem as much as a tech problem. Trying to estimate likely security risks caused by orgs is... complicated. You would blow your margin in assessment costs alone.
Besides, can you imagine a board asking the CEO why they're buying insurance with the infosec budget instead of, y'know, ensuring infosec?
Insurance puts skin in the game for the insurance/security company. If a security company audits my system and gives the greenlight, they should be willing to put money on the line to defend their work.
These are acts of war by foreign entities against US citizens, hospitals, and governments. The lack of military response is dumbfounding and unacceptable.
The NSA is recording every byte of data crossing our borders, and also much internal traffic, and they are unable or unwilling to track down these perpetrators?
A more capitalist solution to the ransomware problem could be ransomware insurance, ideally mandated. You get hit and the insurer pays, but your premium rises from then on, permanently. Premium incentives could reward audits and keeping software updated.
Lower premiums show up on the balance sheet as profit, so there is an immediate incentive to act on security issues. The insurance company also has enough incentive to track the victim that some action might get taken.
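A toy illustration of the incentive (all rates invented for the sake of arithmetic): each past claim permanently loads the premium, while an audited posture earns a discount, so hardening shows up directly as savings on the balance sheet.

```python
# Toy premium model; the base rate, claim loading, and audit discount
# are invented numbers purely for illustration.
def annual_premium(base: float, claims: int, audited: bool) -> float:
    loading = 1.5 ** claims             # every past claim raises the rate, forever
    discount = 0.8 if audited else 1.0  # incentive to get audited
    return base * loading * discount

print(annual_premium(100_000, claims=0, audited=True))   # hardened, clean record → 80000.0
print(annual_premium(100_000, claims=1, audited=False))  # one hit, no audit → 150000.0
```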
Pretty remarkable that this data was worth at least a million dollars to UCSF, but it apparently wasn't worth paying for backups, or hiring IT staff who aren't idiots.
I hate comments like this. It seems quite prevalent in the software dev field to constantly shit on other developers while having 0 information about what the source of the issue was.
They outsourced some IT staff. Reports are that this attack hit the epidemiology department, which lists a 10-person IT staff: https://epibiostat.ucsf.edu/our-team
I don't think we can assume that the IT outsourcing directly affected their vulnerability to this attack.
Probably should've been directed at the managers/department head (not devs), but not having backups is most definitely not professional.
EDIT: The CIO or whatever the title is makes $460K per year so they should for sure know to have and be responsible for proper backup/restore functionality.
Does append only work if you have access to the raw disk bytes? Sure, the file system could enforce creation and appending only, but I can easily counter that with:
dd if=/dev/random of=/dev/sda
(Using /dev/random to give the illusion of encryption)
Yes. You would have backup agents that authenticate to a backup server. The server would only allow a specific method of sending data and the backup server would have policies about backup anti-tampering and retention. All workstations and live servers should be considered ephemeral and disposable.
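A rough sketch of that server-side enforcement (class and keys are hypothetical; a real deployment would be an authenticated backup server or object storage with retention locks): the store accepts new objects but refuses overwrites and deletes, so a compromised client can push garbage but cannot destroy history.

```python
# Minimal sketch of a server-side append-only backup policy.
# Hypothetical in-memory store; the point is where the policy lives:
# on the server, out of reach of a compromised backup client.
class AppendOnlyStore:
    def __init__(self):
        self._objects = {}

    def put(self, key: str, data: bytes) -> None:
        # New keys only: a compromised client cannot overwrite history.
        if key in self._objects:
            raise PermissionError(f"object {key!r} already exists; overwrite denied")
        self._objects[key] = bytes(data)

    def get(self, key: str) -> bytes:
        return self._objects[key]

    def delete(self, key: str) -> None:
        # Never allowed from the client path; retention/expiry would be
        # a separate, MFA-gated administrative process.
        raise PermissionError("delete is not permitted on this store")

store = AppendOnlyStore()
store.put("db-snapshot-2020-06-01", b"...backup bytes...")
try:
    store.put("db-snapshot-2020-06-01", b"attacker garbage")
except PermissionError as e:
    print(e)
```

With this split, ransomware that owns every workstation and live server can still only add noise to the backup store; the snapshots taken before the compromise remain intact.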
Specifically for institutions like medical facilities or financial institutions, there are hardened appliances, sometimes referred to as vaulting appliances, that enforce anti-tampering to the point that even the system administrators can't delete data. You set a policy that requires multiple specific people to authenticate with MFA and authorize the deletion transaction. These are not cheap, but they're a lot cheaper than paying out a ransom, the downtime of rebuilding everything, and the loss of reputation and trust from board members and investors. These appliances have the bonus of enforcing many of your audit requirements around data retention and destruction.
To your example though: yes, it's not fun to manage fleet-wide, but you can boot both Windows and Linux into RAM and have network filesystem overlays that patient data gets written to. The SAN/NAS/Ceph clusters can then do backups locally and have anti-tampering in place. This is non-trivial to set up correctly and much more work up front, but it is more resilient than depending on backups alone. For Windows, look into Windows 10 LTSC [1]. It can operate in a kiosk mode and boot into memory, or run with hardened security options to minimize the attack surface. Most Linux distributions can do this as well. Ceph can now do both transport and filesystem encryption. I will leave out the Linux examples as I doubt this is where these institutions are getting into trouble.
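On the Linux side, the overlay idea can be sketched with a plain overlayfs mount (all paths and devices here are hypothetical; this needs root): a read-only base image with a tmpfs upper layer, so the running system is disposable and nothing written to the machine itself survives a reboot.

```shell
# Hypothetical devices/paths; run as root. The base image is mounted
# read-only and all local writes land in a tmpfs that vanishes on reboot.
mkdir -p /mnt/base /mnt/rw /mnt/root
mount -o ro /dev/sdb1 /mnt/base          # read-only base system image
mount -t tmpfs tmpfs /mnt/rw             # ephemeral scratch space in RAM
mkdir -p /mnt/rw/upper /mnt/rw/work      # upperdir/workdir must share a filesystem
mount -t overlay overlay \
    -o lowerdir=/mnt/base,upperdir=/mnt/rw/upper,workdir=/mnt/rw/work \
    /mnt/root
# Patient data is written only to the network filesystem mounted on top,
# where the SAN/NAS/Ceph cluster handles backups and anti-tampering.
```

Ransomware that encrypts such a host destroys only the tmpfs layer; rebooting restores the pristine base, and the data lives on storage with its own retention policy.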
In my experience EDU IT tends to be an extremely small staff with a very heavy amount of duties and overtime. It frankly surprises me this doesn't happen more often.
It's also often quite distributed, with many small IT groups of 1-5 staff members that may or may not coordinate with each other or with the central IT group(s).
I did a contract programming gig for UCSF Med in 2018-2019 and they had a highly competent, mercilessly detail-oriented internal IT team trying to find HIPAA/security holes in my app on the reg. I actually left the project because this level of security wasn't in the original scope of work. Surprised if they weren't applying similar standards to their internal data and backup strategy.
It's hard to empathize with a corrupt entity that makes billions a year from swindling students and patients, all while maintaining a non-profit status. Par for the course with universities and large hospitals. Profit focused corruption leads them to not paying a good IT team.