It's funny how these articles are always so quick to reassure "no payment details have been compromised", as if that is a significant concern. How did we get to the point where the private data that can be rendered obsolete with one phone call is treated much more securely than private data that will follow us for the rest of our lives?
We need a lobbying group that will strong-arm lawmakers into crafting regulation so that our personal information will be treated with at least as much care as ephemeral credit card numbers.
Seems like the payment data is just handled by the credit card companies, so it's a matter of targeting random businesses vs Visa....
So, seems like you're ultimately advocating for all data to be handled by credit card companies, or similarly hardened targets. In the end this may not gain much; a company with compromised api access is still a good target. Which helps demonstrate another difference: Companies need ongoing access to their data, where payments are 'fire and forget' and thus are easier to protect.
What always amazes me is that credit card data is almost always safe since VISA/MasterCard and others have very stringent security requirements. (PCI DSS)
There are some regulations regarding medical data (Eg, HIPAA) but security seems like an afterthought in most hospitals at best.
Securing payments is much simpler than securing medical data in many ways because payment processors are centralized entities with established protocols for data transmission, where communication is largely many (vendors) to one or few (the processors), and where only one type of data is being moved. Health care organizations are HIGHLY decentralized entities where authentication is extremely difficult; where orgs employ many different protocols and software stacks; where many different types of data need to move freely between many orgs, with various levels of sophistication, in many different directions (patient to provider, provider to patient, provider to provider, provider to payer, payer to provider, patient to payer, payer to payer, provider to regulator, provider to researcher, provider to vendor, etc), with few established standards for how that is done (paper, phone, email, web application, fax, API, snail mail, CD, hard drive, USB, etc), with many people having access; and where organizations need to be porous, with high turnover by design. It should also be realized that a failure to access payment data or process a payment results in lost business and headaches. A failure to access medical data may kill someone, so tradeoffs between confidentiality and availability are much more nuanced.
In the medical world you have standards (HL7, DICOM, XDS) which are all about throwing large amounts of data around hospital networks (and in the case of XDS - outside). It's a castle with moat model of security - everything within the network is trusted and they focus on keeping the bad guys out.
Obviously that's a horrible strategy, and it delivers the expected results.
Also your standards aren't entirely useful if you lack the inter-connectivity to employ them, the UIDs to be able to properly specify the data you are requesting, or restrictions on what data you are allowed to put within fields of the standardized data structures to make it easy to interpret by a program (believe it or not, with some standards this can also be a problem).
Good comment, especially WRT trade off between confidentiality and availability. Nonetheless, I do feel that many of these items (few standards, little interchange, often old tech, data decentralization) are primarily problems because the vendors and hospitals don’t really have strong incentive to solve them. I do appreciate that the problem is non-trivial, but I don’t think that the problem would be unsolvable should the appropriate incentives be put into place.
I've worked as a security consultant for healthcare companies for years. The HIPAA Security Rule is a joke. Its requirements are extremely basic things like "users must have their own login username rather than sharing an account" or "data should be encrypted where appropriate" (and it's left up to the company to decide where they think is "appropriate"). There are also zero requirements around the type of encryption or its implementation; you could use a Caesar cipher and probably pass a HIPAA audit.
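To make the Caesar-cipher jab concrete, here's a minimal sketch (the plaintext is invented) showing that such a "cipher" falls to simply trying all 26 keys:

```python
# Hypothetical illustration: a Caesar cipher is "encryption" only in name.
# All 26 possible keys can be tried instantly, so it offers no real protection.

def caesar_encrypt(plaintext: str, shift: int) -> str:
    """Shift each letter by `shift` positions (classic Caesar cipher)."""
    out = []
    for ch in plaintext:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return ''.join(out)

def caesar_break(ciphertext: str) -> list[str]:
    """Recover every candidate plaintext by trying all 26 shifts."""
    return [caesar_encrypt(ciphertext, -k) for k in range(26)]

ciphertext = caesar_encrypt("Patient: John Doe, DOB 1970-01-01", 13)
candidates = caesar_break(ciphertext)
# The true plaintext is guaranteed to be among the 26 candidates.
assert "Patient: John Doe, DOB 1970-01-01" in candidates
```

An attacker needn't even try all 26 by hand; letter-frequency analysis picks the right one automatically, which is why no standard worth the name should accept this as "encryption where appropriate".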
Yes, as the other commenter mentioned, hospitals do "take it seriously" in the sense that they put a lot of importance on passing HIPAA audits... but passing a HIPAA security audit is a checkbox exercise for security controls that are a decade+ outdated. It means absolutely nothing about an organization's actual security maturity.
Can confirm. Even getting the ISO 27001 certification is mostly about checking boxes. In many cases an ISO 27001 item can be satisfied by picking one of the several ways the standard gives you to claim it's not relevant.
You don’t even have to check boxes for ISO 27001 these days. All you need to do is pay “consultants” in certain foreign countries about $5k and you magically receive your certification.
Can't speak directly to this, but just to add to the sentiment here - no breach was ever prevented through the application of pen to paper in the shape of a check mark.
If an organisation has the money to spend on doing ISO or something else, it should put the money towards someone who actually has some good skills and knowledge in security and can advise them.
An organisation that recognises the business value in being secure (less risk of fines, reputational damage, more ability to win work with lucrative large organizations) is already in a good place as they've crossed the first hurdle!
The issue with certificates like ISO (and indeed any other kind of kitemark for security) in my view is that it presents the opinion of one (probably inexperienced and cheap) junior person as to whether what you presented them with on the day sounded to comply with a rule. No focus on whether the mitigation is effective. No focus on whether it's relevant or appropriate. No focus on whether it's adequate, or how it sits in relation to the capabilities of a motivated adversary.
A decent understanding of your threat model, your exposure, and how you plan to invest to improve would be far more valuable. Second to this is avoiding snake-oil vendor security products that aren't effective: many of the big organisations breached through SolarWinds sell incredibly expensive "AI"-based cyber tools, as those are in vogue. Yet all were compromised by a silly supply-chain breach via a proprietary DLL a third-party vendor was shipping into organisations, which was blindly trusted.
Getting a basic understanding of the old fashioned principles of security and having someone help you take technical measures will be a load more effective than producing paperwork to keep a junior auditor happy.
> If an organisation has the money to spend on doing ISO or something else, it should put the money towards someone who actually has some good skills and knowledge in security and can advise them.
These certifications are all about shifting blame and minimizing liability. They come up with these stupid standards that do nothing, certify their own compliance and then when they get owned they say "don't look at us we followed established best practice".
They don't actually care about actual security. To anyone who actually cares, the right way to do things will be painfully obvious. Instead we get people who scrutinize the standards in order to find the easiest, cheapest way to fulfill the requirements. Backup? Just copy the MySQL directory! Hashes? MD5 will do. Why encrypt data at all if it's only being transmitted on a local network? And so on...
There are legitimate concerns about the usability of secure medical software but I don't think that excuses some of the absurdities I've seen...
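To put the "Hashes? MD5 will do" line in perspective, here's a stdlib-only sketch (password and wordlist invented) of why a fast unsalted hash falls to a dictionary attack while a slow, salted KDF makes the same attack vastly more expensive:

```python
import hashlib
import secrets

# Unsalted MD5: so fast to compute that an attacker can brute-force common
# passwords at billions of guesses per second on commodity GPUs.
weak = hashlib.md5(b"hunter2").hexdigest()

# A tiny dictionary attack already recovers it.
cracked = None
for guess in [b"password", b"letmein", b"hunter2"]:
    if hashlib.md5(guess).hexdigest() == weak:
        cracked = guess
assert cracked == b"hunter2"

# A deliberately slow, salted KDF (PBKDF2 from the stdlib) raises the cost
# per guess by orders of magnitude and defeats precomputed rainbow tables.
salt = secrets.token_bytes(16)
strong = hashlib.pbkdf2_hmac("sha256", b"hunter2", salt, 600_000)

# Verification recomputes with the stored salt and compares in constant time.
check = hashlib.pbkdf2_hmac("sha256", b"hunter2", salt, 600_000)
assert secrets.compare_digest(strong, check)
```

The point isn't that MD5 is "broken math" here; it's that any fast unsalted hash is the wrong tool for passwords, which is exactly the kind of nuance a checkbox standard never captures.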
It’s even worse than that. Many of the compliance standards do require best practices, in most circumstances. For example, CSF, RMF, NIST 800-53 or 171 are all relatively sound from a technical perspective (e.g., data at rest must be encrypted, encryption must be FIPS-validated encryption, etc.).
The problem with ISO 27001, NIST 800-171, etc. is that people treat having written-down policies as evidence that proper controls are implemented. If you have a policy that says you use role-based access control, you have to actually do it. If you have a procedure that says you back up sensitive data to X alternate location and perform failover tests annually, you have to actually do it.
It’s sad, but 85% of compliance assessors I have worked with essentially look for “do you have a policy? Does that policy say the things the standard says it should? You’re compliant!”.
I blame the companies in part, but I also blame the people who are trusted to objectively and competently evaluate a system's level of compliance. The standards, and assessing compliance against them, are great in theory, but in practice, people are... people.
The few areas I will say have succeeded are NIST 800-53/FISMA and FedRAMP. They are not perfect (see: SolarWinds), but the bar for obtaining an ATO and/or FedRAMP accreditation is relatively high.
I agree with largely everything that has been said, but I have fallen back on "welp, having even a shitty standard is better than nothing at all, because at least then people have antivirus". I would also prefer to see an org with application whitelisting and ASR-enforced Office over AV (if they had to be mutually exclusive), but alas, we as an industry tell people to waste their time on things that don't matter just to pass some checkbox security test and at least obtain some baseline. Some of this is probably greed/scams in the case of self-appointed standards (CREST, for instance), while others are at least legitimately trying to solve the problem.
How do we solve these issues without upskilling a bunch of people who don't know/care about security? Is there even a solution, or are we just bound to hit some Mr. Robot-esque post-apocalyptic scenario before people get their shit together?
Hard to say. In some industries I think it is unnecessary and if they face a breach, sucks to be them, but not a whole lot is likely to be lost/damaged.
In industries where it is a necessity (e.g., government, payment processors, healthcare organizations, etc.), I think there are several things that could encourage adoption.
However, fear of distant future possible outcomes is probably one of the weakest human motivators.
If I could advocate for an approach, it would be through tax incentives, government underwritten insurance that requires adherence and practice of security controls, etc.
My thought is, we can very likely encourage Cybersecurity practices using the same tools we use to say, stimulate the economy (e.g., providing liquidity to housing markets, tax rebates for first time homebuyers, etc.) or adoption of lower emission energy technologies (e.g., tax rebates on purchases of electric vehicles).
Unfortunately, people and government have not seemed to want to make the investment necessary to implement methods I’ve suggested above. Which is bizarre, because some of what we have lost and continue to lose is priceless (e.g., OPM government employee records, IP related to military technologies, etc.).
There are companies in certain developing countries who will skirt the rules on providing consulting and auditing/monitoring to the same company.
They do it for just about any ISO cert, not just ISO 27001.
It is a bit of a dirty secret with companies who work in heavily regulated industries. The companies in question will go through the motions, but make no mistake, you pay for the cert.
Sorry for being vague, just not looking to publicly out anyone. If you Google around, I’m sure you can find more about what I am talking about/much of the controversy around many of the ISO governing bodies.
> There are companies in certain developing countries who will skirt the rules on providing consulting and auditing/monitoring to the same company.
Welcome to my life :) We 'hired' these experts since they came up with the lowest-price offer for our certification. I have been through many certifications in the past; this one was the most... shameful.
Pathetic grasp of English, of IT in general, and of security controls specifically. We passed in absolutely zero time, if you exclude the time spent having lunch and 'discussions' about the interpretation of the requirements.
This was PCI BTW.
Next was the local healthcare certification, done by an international auditing firm. Possibly even worse. A total paper-tiger exercise. A total lack of understanding of current security standards. Nice ties and suits, though, and even better lunches to discuss (you guessed it) the interpretation of the requirements.
I get why these guys get the jobs: they know the right people and look the part. But boy, would it not be nice if experts could do these jobs.
It's all a joke: we tell the auditors what they want to hear, we provide documents to prove the processes are implemented as they should be, but in practice nothing is followed, and they don't get to know that.
My favorite anecdote from working in health care was a VP asking whether we could trust OpenSSL, because it wasn't version 1.0 (it was 0.9.8), and whether we should instead license an encryption product.
Was it a VP of a tech department? If not, it's a very valid question. Either way, it sounds like a good approach to ask the question to someone else who's closer to practical implementation.
He was our CIO. This was ~2011. Effectively, he was getting steak dinners from vendors, and using that to make decisions. These questions were either from ignorance or worse, malice, to force us to vendors.
This (public) medical company went out of business roughly a year after.
Especially considering some programs only get a minor version increment about once a year (FreeCAD 0.18), while projects like Firefox or openSUSE skipped tens of major version numbers within a short period.
I've done work in the medical industry, both for hospitals and private software companies developing medical software. In my experience; security, stability and compliance with HIPAA and other regulations are taken very seriously.
In my experience HIPAA is taken very seriously in the sense that people are willing to have meetings about HIPAA, with furrowed brows and serious expressions and a lot of signatures. Are the actual end-products more secure? No probably not. Of course this probably varies drastically from place to place.
Like you said, it may vary place to place, but you are definitely more secure when complying with HIPAA than without. The very act of discussing security within an organization in a structured way is a good start.
On the parent comment: I am not saying that hospitals aren't HIPAA compliant, but rather that the security expectations for credit card data are higher than for medical data.
That's the UK; no HIPAA per se. Funnily enough, the infamous GDPR applies, and data leaks are quite punishable.
The Hospital Group is in quite a bad position: 1) the blackmail (by no definition is that "ransom"); 2) the data leak has to be reported, and they will potentially be fined by the state.
As for taking regulation seriously, I guess it does depend on the industry. Where I work GDPR and regulatory breaches are treated more seriously than downtime.
>There are some regulations regarding medical data (Eg, HIPAA) but security seems like an afterthought in most hospitals at best.
I worked as a developer on healthcare/hospital websites, and the company I worked for took it far more seriously than the hospitals did. We had to babysit them constantly on potential and actual violations. The average hospital commits at least a dozen different HIPAA violations each and every day, because in the end convenience almost always trumps security when a person is stressed and busy. And those were just the violations I was privy to as their web developer; undoubtedly there were more that I couldn't have been aware of.
When we dropped support for IE8-10 it was a major issue at nearly every hospital we worked with and we couldn't convince them to finally upgrade until lawsuits started happening.
Visa/MC have several advantages over a federal regulator:
- they have a more regular feedback mechanism (if someone's db gets leaked, they get more fraudulent charges from that company's customers, so they can figure out whose security was bad, and this happens often)
- they have a more credible enforcement mechanism: they can and will turn off your ability to process CC transactions, while a federal regulator in many cases faces a lot of political pushback if they try to actually shut down a medical facility
- the intended (and actual) purpose of PCI was to reduce the number of operators actually storing CC data; if a federal regulator tried to make things onerous enough to force most medical facilities to not store medical data, there would be enormous pushback and they would be forced by Congress to back down
Let's not go letting credit card processors off the hook, this was barely a month ago. Part of our security team is essentially full-time on dealing with the consequences of actors using stolen credit cards.
> security researchers from Website Planet found that Cloud Hospitality stored information from more than 10 million travelers on an unsecured database with no password protection.
That will be treated by the credit card companies as gross negligence and breach of contract (they include PCI DSS compliance in all contracts, plus a requirement to impose the same on anyone who processes credit card data for them), on top of anyone going the legal route (and indeed there are reports of a class action that mention PCI DSS compliance explicitly).
My original comment was more in regard to the care and security that is expected.
PCI is far from a good set of requirements. Some of its controls make sense, some made sense for corps in the '90s, and some are the complete opposite of what you should do. It's good that it at least forces the company to think about the requirements and dedicate some time to them. But I really wouldn't hold up PCI DSS as a good example, or an "almost always safe" example.
Not to disagree with you, as I agree with the sentiment, but as a counter data point, I've seen a number of quite serious breaches in the past where PCI DSS clearly wasn't being followed, as CVV2 was compromised along with payment details. Often the merchants were too naive to even understand what was going on.
Clearly having straightforward gateways to handle payments can help retailers and raise the bar, but I never cease to be amazed at how many sites run third party scripts on pages processing sensitive information! Bonus marks for using third parties that let other third parties place code on the page!
I think we have 2 orthogonal aspects here - the presence or absence of a straightforward commodity solution, versus the presence of clear security guidelines. The former seems to be what drives better practices, whereas the latter is more guidance people ignore, due to lack of personnel and skills.
There should be strict laws around collecting the data in the first place. Only minimal, absolutely needed data should be collected.
I remember shopping at a well-known, large electronics store in NY. The cashier insisted on taking down my phone number and email. When I asked why, he said "in case you decide to return the items". I told him I would produce the receipt, but this turned into an argument. I didn't want to waste other people's time over a $30 purchase, so I just left without buying. This is just a small example of the abuse that we put up with every day.
One would think that if anyone had seen enough breasts, it would have been a plastic surgeon. Maybe they preserved these images for use in malpractice suits, but that's not a reason to keep the images online.
Yes, insurance, malpractice. I talked to a friend who is CTO in a private hospital that has a clinic for plastic surgeries (but mostly for non-vanity purposes aka burns, acid attacks, etc.). One main reason for the pre-during-post photos is for medical studies, progress monitoring, training, etc.
They keep these photos on a server on a separate network (and photos are not accesible to the typical workstation, but on locked-down ipads).
For the famous people, it's a 'nice' extortion, for some 'glossy magazine babe' who claims she is all natural, to come up with evidence of implants, etc. would be devastating to her number of followers, and thus sponsors/payments.
If the data is sensitive, keeping it on a locked down iPad is nowhere near enough to protect it. I can understand if they're using the images for machine learning or something, but if it's truly just to prevent future malpractice, data like this should only be kept on air-gapped systems or offline media.
That said, let's hope we're beyond the point of anyone caring about pictures of the human body or whether someone's had plastic surgery.
If I were threatened with something like this, which could be pretty embarrassing since I lost so much weight and have loose skin, I'd tell the hackers that I'll release even more if they leak mine and then attempt to find dirt on them.
Post-SolarWinds, hopefully people will wake up to your point: any online device built from components and software with complex supply chains isn't good enough. Heck, Intel, Microsoft and Cisco were breached through that, and that covers a very significant portion of the supply chain of the devices and software people use today (though admittedly not for the one example of an iPad).
Even if they want to use them for ML, this shouldn't be reason to reduce the perceived sensitivity of the data to let them sit on an online device, as the harm hasn't reduced. Hopefully we'll see more threat models based on the impact of harm, not on the convenience to business.
In most decent frameworks (NIST, COBIT, PCI DSS), changing default passwords, removing (unnecessary) default accounts, and similar controls are a MUST. The network admin who doesn't do that the minute they add a new device to their network should lose their job. The companies' IT sec staff and IT auditors who don't check for this should also lose their jobs (or they should all get educated and keep their jobs).
This is basic stuff; a newbie in IT should know these things.
I will also assume that (large) organizations test the updates and have an action plan in place (i.e., apply fix/patch/update XYZ, study what it does, read the documentation, create the future-state config, deploy that config, validate the config). I know, simple words; we 'all' (in the profession) know this, but when you need to patch x1000 and the boss is barking...
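As a sketch of the "remove default accounts" control those frameworks mandate, here is a toy audit check. The default account names and the set-based comparison are invented for illustration, not taken from any real vendor list:

```python
# Hypothetical audit helper: flag configured accounts that match a list of
# well-known vendor defaults. A real tool would pull the account list from
# the device (SSH/SNMP/API) and use a per-vendor default database.

KNOWN_DEFAULTS = {"admin", "cisco", "root", "guest", "support"}

def audit_accounts(configured_accounts: set[str]) -> list[str]:
    """Return any configured accounts that match known vendor defaults."""
    return sorted(configured_accounts & KNOWN_DEFAULTS)

findings = audit_accounts({"jsmith", "admin", "netops", "guest"})
assert findings == ["admin", "guest"]
```

As the replies below note, this kind of check only catches *documented* defaults; accounts hidden in firmware never show up in the account list an auditor (or script) can enumerate.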
The specific problem with network equipment (e.g., Cisco) is actually that these "default" accounts are really backdoors, since they are not exposed in any list of accounts in the UI or shell interface.
Therefore auditors will look and find nothing, but the accounts are buried there within the system if you know about them (e.g., by exploring a firmware dump, finding the password hash, and reversing it).
If they are undocumented accounts (backdoors in the devious sense) then yes, we cannot do anything about it, just try to pentest the shit out of the equip, fuzz it, and pray to our god(s) of choice and pray we get lucky in these futile experiments.
If these are documented (e.g. IBM has these notorious RedBooks of 500-700-1000 pages) then one should spend the time to study before implementing, securing, auditing, and-other-verbs.
Again, the only 'excuse' I can accept (not really) is that "management" knows that the staff is not enough and they cut corners.. in which case you crucify the COO in your report, not the poor admin(s).
>some 'glossy magazine babe' who claims she is all natural, to come up with evidence of implants, etc. would be devastating to her number of followers, and thus sponsors/payments.
Being caught in a lie about your main product being fake is a win for the public in my book.
I mean they post page after page of the before and afters on their websites, which I've always found kind of disturbing. I'm sure the women agreed to it but still kind of odd.
"It's understood that many before and after pictures will not include the patients' faces."
What kind of pointless statement is this? What is "many"? And does that imply that "many," "most," or "only a few" pictures will include the patients' faces?
Photos of facial surgery are more likely to be identifiable (nose, cheeks, chin, lips, eyes/eyebrows), while photos of bodily surgery (breasts, arms, stomach, etc.) won't include the patient's face. It's probably down to the doctor's photographic preference which facial photographs are identifiable and how close the zoom is when they take the picture.
Why does everybody keep data hanging around forever? It's easier. You don't have to think about it. Just keep kicking the files onto new media every few years / at a new server refresh.
I did some IT work for a plastic surgery practice in the US many years ago. I was adding some storage to an existing server. I was shocked to see that the practice was keeping all their before / after photos online going back years. Not encrypted. Hanging out in Windows file shares with lax permissions.
It certainly gave me pause.
Maybe some software providers in this space will think about handling this better.
It seems that Windows SMB file sharing is still the go-to universal storage system in a lot of organisations. I imagine because it's built in, easy to configure, and works out the box with the ability to map drives so it's transparent to users.
In contrast, anything more secure tends to add inherent friction (since blindly granting access based on some potentially replayed hash of a user's likely weak password isn't exactly appropriate in a secure setup). And if it adds complexity, people still go for the old solution.
I hope they had backups. I've often caught those same Windows SMB-using orgs out when checking their backups, discovering that both of their weekly-cycled drives were empty and devoid of backups, despite having been diligently swapped according to the schedule!
Why does a software engineer keep old git-repo branches around, including their history? The engineer can compare the before-and-after especially as they relate to experiments, successful, and failed approaches.
A plastic surgeon might want to look at before-and-after for a few of their "branches" (specific plastic surgeries or repeated applications of a technique). "When I did celebrity-A I notice they sag too much in location-X, whereas for celebrity-B where I changed the procedure location-X looks much better." "Celebrity-P has the same odd nose Celebrity-K had ... let me consult my notes and the before/after for Celebrity-K."
I didn’t let my plastic surgeon take before and after photos for this exact reason. I asked him whether it was necessary for the procedure and what they were used for and he couldn’t really give me an answer beyond it’s nice to be able to compare the finished product. So I told him when I came back in for my post-op I’d be more than happy to pull up a before picture on my phone for him to use to admire his work. I even let him take the “before” photo on my phone. I’m sure he thought I was a paranoid tinfoil hat type but he really didn’t seem to mind.
This is one of the rare pieces of advice few people get. You can tell the professional in the room "Your idea is stupid and I as a paying customer do not want that." It's amazing how many people concede to the request.
The only place I can't get away with it is a dentist. They love giving x-rays...because apparently that helps with scaling.
If it's in the US, it's probably that once you have the machine and the tech, the cost of time and materials is less than $5, but they can bill the insurance $50-100. So they do it as often as they can, which under most insurance is covered once or twice a year. I could imagine it helping plan treatment for cavities, but for scaling? I doubt it...
They can feel cavities with their scaling tools; that's like half of what they train for in school. They don't need a damn x-ray to do that. And they all say "We will not continue unless we do an x-ray," which just floors me. How is shooting x-rays into my skull even remotely healthy for my brain? I get that we can tolerate x-rays, but jesus, every time I switch dental practices, and a minimum of once per year?
It's like when I went in for a broken tooth... The dentist insisted on a panoramic x-ray of my head/jaw... Then he came back and said: yep, you have a broken/chipped tooth... And the funny thing is that he protected my chest from the x-ray with a lead cover, but not my brain... And the tooth broke because it had been repaired after a previous cavity.
Because they are lazy, incompetent and indifferent. But they might be against a very powerful and public group of people who can sue them out of existence, so maybe that will scare other health providers into better security practices.
You hinted at it but didn't mention it explicitly: greedy. It simply costs more to have somewhat better security practices, and they don't want to pay unless they have to.
Lazy indifference probably explains it more than greed I think. If they cared, a doctor could add "burn a CD and put it in the filing cabinet with the other patient records" to the job duties of their secretary without increasing their compensation. It would only take a few more minutes, and would only slightly detract from the time they spend idly chatting with each other.
More accurately, they are NOT tech professionals; the type of people who do IT for small private practices are not that good either, and for the most part they really just don't know. You can't expect these people to understand the full consequences of things like encryption, offline vs. online media, and so on. To them, if it has a username and password, that's safe, right? Use the HIPAA lockbox software and it should be good, right?
In the past, before computers, they would put these in folders on large shelving units with colored folder tabs behind a counter, and the only real security was a receptionist who would stop you if you tried to interact with them, plus locking the office door when they left. If someone broke into the office back then, your medical records would have been stolen and unencrypted (beyond the illegibility of most doctors' handwriting), and as a society we were OK with that level of security.
You're probably right that ignorance is the root of their apathy. Hopefully, with this event making the news, doctors at least in the same specialty will hear about it and do something. Unencrypted offline records physically secured in the office building seem more than adequate in all but the most exceptional scenarios, though. Maybe it wouldn't be good enough for doctors of high-value targets (celebrities, politicians, etc.), but burglars targeting medical records seem uncommon.
Harsh fines are probably the best way to make doctors care though. If they know they risk financial ruin for not securing their records, they'll have a strong personal incentive to remediate their ignorance.
I'd think it specifically of doctors who specialize in human bodies, not computer stuff. SolarWinds on the other hand could not possibly be excused for ignorance.
One of my first jobs out of college was working at a medical school. Doctors in general think computers are magic and that compared to their actual medical expertise programming is easy. I neither expect nor, to be honest, want them worrying about computer stuff. I won't try to tell them how to cure sick people.
I don't want them to be tech professionals. I want them to use the best in class tools they can get, which it turns out are also the easiest to use and often the cheapest. If this surgery practice had just kept their photos on Google Drive with GSuite admin policy enforcing 2FA, they would have been most of the way to gold standard infosec and also would have dramatically better real-world durability and availability. Any consultant could have set them up that way in an hour.
That doesn't protect against the kind of attack that compromises the endpoint (wait for a logged-in 2FA session, drive the browser in the background with that exact same session in headless mode, and download), and you don't know whether GSuite, 2FA, and HIPAA / UK-equivalent agreements were even available back when they set up their systems.
For all you know, they could have had that system too, the article does not say what it was.
There's a one-time purchase of bigger/more disks. Figure 1 GB (fifty 20 MB pictures) per customer. Just add another 2 TB, then 4 TB, now 8 TB or bigger drive. That's about $250 or $300 each time. Double that for a synced drive somewhere in the office.
Now they should be doing 3-2-1 backups. With S3 they'd be paying $160/month (for storage, not counting other costs) for 8TB or $40/month for BackBlaze B2. That's 8,000 customers.
They're in England so some variance in pricing. But it would be relatively inexpensive to buy big drives, sync them to a set in the office, and back them up online. Where the doctors or whoever is running the clinics can SEE the data is still there whenever they want.
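The arithmetic behind those figures can be sanity-checked with a quick sketch. The inputs here are the comment's own assumptions, not authoritative pricing: roughly 1 GB per customer (fifty 20 MB photos), S3 at about $20/TB/month, and Backblaze B2 at about $5/TB/month.

```python
# Back-of-the-envelope check of the storage figures above.
# Assumed rates (taken from the comment, not current price lists):
GB_PER_CUSTOMER = 1        # ~50 photos at ~20 MB each
S3_PER_TB_MONTH = 20.0     # rough S3 standard-tier rate, $/TB/month
B2_PER_TB_MONTH = 5.0      # rough Backblaze B2 rate, $/TB/month

def monthly_cost(customers: int, per_tb: float) -> float:
    """Monthly storage bill in dollars for a given customer count."""
    tb = customers * GB_PER_CUSTOMER / 1000  # decimal TB
    return tb * per_tb

print(monthly_cost(8000, S3_PER_TB_MONTH))  # 8 TB on S3 -> 160.0
print(monthly_cost(8000, B2_PER_TB_MONTH))  # 8 TB on B2 -> 40.0
```

So at 8,000 customers the comment's $160/month (S3) and $40/month (B2) figures line up, and the whole archive still fits on one consumer hard drive.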
I agree that there should be increasing worry about keeping information that you don't need, whether it's intimate pictures of your surgical clients or people who bought from you 5 years ago and not since. But it seems like keeping things handy will be an impulse that's hard to overcome.
TBH, DVDs / Blu-rays are too low density, expensive, and labor intensive, and tape drives start at $1,000 and most non-tech professionals don't even know they exist. 2.5 TB as 25 writable 100 GB BDXL discs costs about $250; a 4 TB drive costs $80, and a computer to put 3.5" HDDs in is pretty cheap too.
Maybe. Sounds like their incentive will primarily be to keep _some_ records safer. E.g., I'm skeptical that this would propagate to poorer patients, without legislation at least.
(Which isn't to say that they'd purposefully choose two different implementations. Rather, just that if I'm using a cheap doctor, I'm unsure they'd rise to the new "standard" of security practices.)
The patient can come for a checkup or a related thing and they want to be able to easily retrieve these if they want to check something (or in case there's an issue of sorts). Having it all in a single system is the easiest way to do that.
They normally ask in the millions. Generally anyone can pay for it. You could buy it instead if you want. (There is an Auction section, but it's not open for TheHospitalGroup atm)
I find the amounts they ask, even from small websites, way too high. Perhaps their website asks high so that when they go lower, they get paid.
If you don't pay and no one else wants to pay, it goes public.
REvil .onion - "At the beginning of next week we will post the first batch of files, namely: Pacient Personal - 20 GB, TMG OFFICIAL Documents - 50 GB"
> why was this even a target?
I assume, like almost all hacks, even at the nation-state level, it's opportunity more than targeting. This website had an exploitable hole, and then they found real data behind it.
Yes, otherwise people would stop paying them. However, I wouldn't be surprised if once they make enough money, they do a type of exit scam: sell anything they can, then leave the business. It happens often in dark net markets.
Most stolen data is very hard to sell for meaningful amounts. Such an “exit scam” would be a waste of time, you’d make more money by just ransoming one more company.
When you’re earning (tens of) millions by extorting companies you aren’t going to be very interested in selling their data for tens or hundreds of thousands.
True, it's probably not worth the time unless they've stolen some very valuable data. Obviously things like plastic surgery pics wouldn't be worth much of anything.
Depends on the clients. I remember a case where a family that hid their daughter's cosmetic surgeries had the marriage annulled when it was discovered by the groom's much wealthier family.
So a lucrative target might be someone who traveled from outside the US to have work done to hide it, especially if they were relatively young.
It’s always possible to come up with an extremely unlikely scenario where the data would be extraordinarily valuable, but nobody is going to bet hundreds of thousands (or millions, to actually make it worth it for the ransomware gang) to buy the data.
Maybe you're right, but it still feels to me that leaking the pictures will not benefit the scammers much. They might have connections to other people that are still in the business and they'd harm them indirectly by leaking after getting paid. Why make more enemies? Also, why put more attention onto themselves after they already succeeded? Some people have very strange reasons they do things, but I still don't think it's likely. I think these things are organized with the top priority of minimizing the risk of getting caught.
Even if they change both the identity and the target, it's less likely they are going to succeed next time if the photos were leaked. People can read about the story in the news and it can affect the perception of this entire type of extortion in general.
Until these breaches result in lawsuits and maybe even criminal charges that result in complete dissolution of the corporation to pay out, these events will never stop happening.
> "None of our patients' payment card details have been compromised but at this stage, we understand that some of our patients' personal data may have been accessed."
Reminds me of a statement put out by White Star Lines in 1912:
"None of our passengers' payment card details have been compromised but at this stage, we understand that some of our passengers' personal lives may have been affected."
> The Hospital Group, which has a long list of celebrity endorsements, has confirmed the ransomware attack.
This isn't a ransomware attack, they're not encrypting the company's drives and demanding a ransom to unencrypt them. Not every "I hacked you now pay me or bad things happen" situation is ransomware.
Timpy :P, your understanding of ransomware is different from Wikipedia's:
> Ransomware is a type of malware from cryptovirology that threatens to publish the victim's data or perpetually block access to it unless a ransom is paid.
If this is the definition of ransomware then I was indeed incorrect. I understood ransomware to be "threatens to perpetually block access to data" only.
REvil is ransomware that locks you out but first exfiltrates your data. The attackers then have two points of leverage: the lockout, which you may be able to circumvent with a safe backup process, but that won't protect you from the release of your data. This gives the attacker two bites at the cherry when trying to convince you to pay.
Both "hacking" and stealing are illegal in most countries, but they're still completely different actions: one is taking a physical object from someone, the other is sending and receiving electrical pulses through a wire.
You wouldn't call stealing and killing by the same word, either, even though both are illegal.
Since we're discussing word choices and definitions, I'd argue that it's not stealing either if the Hospital retained possession of the data. It might be better said that they "obtained without authorization" or "illegally obtained".
What makes "stealing" particularly bad is that the rightful owner no longer has possession of their property. That's not necessarily the case with data.
This sort of thing is why people need to stop thinking that the digital world is analogous to our analog one.
In digital, information wants to be free and many kinds of resources are effectively unlimited. There is no material scarcity. Therefore, theft, in the digital world, can't be the same as it is in our analog world.
To be fair, this also applies to copyright and people's foolish notion that they can protect data without going to great lengths to prevent otherwise normal "physiological" processes. (Ironically, rather than having a wake-up moment where people realize their folly, we've institutionalized these resource-scarcity regimes into resource-abundant versions in the digital world.)
To summarize, info wants to be free, and since theft requires extra effort to deprive someone of what you stole, does that definition of theft really apply here? Or does it need to change given the context? And, as a secondary point, people like to think they can protect data but their brains are stuck in our analog, resource-scarce world
When companies started restoring from their (new and existing!) backups when hit by ransomware, the ransomware authors looked at what would impact their "clients" the most -- if preventing them getting access to their data wasn't enough to make them pay up, then exposing their data and turning it into a breach that results in regulatory action helps them commercialise their "access".
I think in a way, ransomware authors are following the "free market" approach, trying to best monetise their unauthorised access to other people's IT systems. Perhaps the prevalence of ransomware will eventually help businesses to properly cost in the risk of security to their business, and get their security in order, as there's a tangible cost threat?
If somebody breaks into a psychiatrist's office and threatens the release of embarrassing or sensitive data unless there's payment, isn't that just classic blackmail?
This thread is someone questioning whether it was a ransomware attack; it was one. Being a ransomware attack doesn't preclude it from being blackmail, and I don't think anyone you replied to has questioned the morality of it...
What you are talking about are cryptolockers and they are a subset of ransomware. Not all ransomware are cryptolockers. In this case, ransomware exfilled the data without a need for cryptolockers. They are still asking for a ransom.
Ransom usually means, "I have some(one|thing) of yours, and if you want it back, you need to pay me."
Calling this "ransomware" subtly blurs the line between copying and stealing. The attackers here didn't remove access to the data (which would clearly be stealing); they made a copy (clearly a crime, but something other than stealing, at least in my view).
They're not using cryptography, but aren't they demanding ransom? Is the use of cryptography an essential part of what it means for something to be ransomware, or is it merely a common implementation detail?
> They're not using cryptography, but aren't they demanding ransom?
No, a ransom is a fee paid for the release of something you value. Cryptography is one way to take a user's data, and release it back to them on payment.
This is blackmail. They want payment to not release something.
To me, ransomware attacks are specifically "the malware got in and turned all my data to mush; the attacker doesn't care about my data, just that I'll pay to un-mush it."
This is "the malware got in and sent copies back home; now home base is threatening release and expecting payment to prevent it." To me, this is blackmail done via hacking, not ransomware.
Fwiw, many actors doing the former are also doing the latter. If someone paid you once to unencrypt, presumably they'll pay you again to not disclose the data. The line between those two business models is pretty blurry.
They are demanding a ransom, but Ransomware has a commonly accepted definition which requires encrypting files and demanding payment to decrypt them. [0]
They are not demanding ransom. Ransom is (per Merriam Webster): "a consideration paid or demanded for the release of someone or something from captivity".
They copied the data, and they want money otherwise they will release it. It's ordinary blackmail.
The very first sentence of that link would include this under "ransomware"
> Ransomware is a type of malware from cryptovirology that threatens to publish the victim's data or perpetually block access to it unless a ransom is paid.
Against whom? Where is the profit mechanism? Are the hackers really prepared to track down every patient and try to blackmail them? It’s like the emails you get some times from hackers that have an old password of yours and threaten to release that video of you pleasuring yourself. Seriously?
Are these photos really of interest to anyone? I think for most people you can tell if they’ve had work done. I guess the elephant in the room is breast augmentation, but I think it’s pretty easy to tell the difference between natural and bolt-on.
But is it enough that the company should worry? It’s not the Fappening. I just think so what, it’s tragic and Blackeye on the company, but it’s like stealing something with no value.
Should the celebs worry? Probably not. Should the company worry? Yes, they’ll have a name for that surgeon who leaks your medical documents and doesn’t really care enough to pay to keep your privacy. There are other good surgeons out there, probably right next door. Customers will be more likely to choose someone else.
Pretty sure the company makes money from people going to get plastic surgery. I'm not going to buy from a company where my private pictures got leaked. Reputation is valuable. The pictures might not be valuable to you, but a lot of people pay for "leaked" celebrity photos, of which the company has a lot.
This sort of thing just shouldn't even be a viable threat. The response should be "go ahead and publish it, who cares?"
If you heard tomorrow that there were a bunch of plastic surgery before and after photos online, would you even go look? What is the threat here - that people will search the data for people they know and...make fun of them? Really?