Pretty remarkable that this data was worth at least a million dollars to UCSF, but it apparently wasn't worth paying for backups, or hiring IT staff who aren't idiots.
I hate comments like this. It seems quite prevalent in the software dev field to constantly shit on other developers while having 0 information about what the source of the issue was.
They outsourced some IT staff. Reports are that this attack hit the epidemiology department, which lists a 10-person IT staff: https://epibiostat.ucsf.edu/our-team
I don't think we can assume that the IT outsourcing directly affected their vulnerability to this attack.
Probably should've been directed at the managers/department head (not devs), but not having backups is most definitely not professional.
EDIT: The CIO, or whatever the title is, makes $460K per year, so they should absolutely know to have (and be responsible for) proper backup/restore functionality.
Does append-only work if you have access to the raw disk bytes? Sure, the file system could enforce create/append-only semantics, but I can easily counter that with:
  dd if=/dev/random of=/dev/sda
(Using /dev/random to give the illusion of encryption)
Yes. You would have backup agents that authenticate to a backup server. The server would only allow a specific method of sending data, and the backup server itself would enforce anti-tampering and retention policies. All workstations and live servers should be considered ephemeral and disposable.
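To make that concrete, here's one common way to do it (a sketch only; the parent doesn't name a tool, and the paths, hostnames, and key here are made up): BorgBackup over SSH with a forced command, so a compromised client can push new archives but can never prune or delete old ones.

  # ~backup/.ssh/authorized_keys on the backup server -- pin this
  # client's key to an append-only, path-restricted repo serve:
  command="borg serve --append-only --restrict-to-path /srv/backups/ward1",restrict ssh-ed25519 AAAA... ward1

  # On the client, the agent can only add a new archive:
  borg create ssh://backup@vault.example/srv/backups/ward1::{hostname}-{now} /data

Even if ransomware on the client runs `borg delete` or `borg prune`, the server-side `--append-only` policy refuses the deletion.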
Specifically for institutions like medical facilities or financial institutions, there are hardened appliances, sometimes referred to as vaulting appliances, that enforce anti-tampering to the point that even the system administrators can't delete data. You set a policy that requires multiple specific people to authenticate with MFA and authorize the deletion transaction. These are not cheap, but they're a lot cheaper than paying out a ransom, the downtime of rebuilding everything, and the loss of reputation and trust from board members and investors. These appliances have the bonus of enforcing many of your audit requirements around data retention and destruction.
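The parent means dedicated appliances, but the same "not even admins can delete it" policy model exists in commodity cloud storage, if you want a feel for it without buying hardware: S3 Object Lock in COMPLIANCE mode can't be shortened or bypassed by anyone, including the account root user, until retention expires. A rough sketch (bucket name is hypothetical):

  # Object Lock can only be enabled at bucket creation:
  aws s3api create-bucket --bucket records-vault --object-lock-enabled-for-bucket

  # Default retention in COMPLIANCE mode (unlike GOVERNANCE mode,
  # it cannot be overridden or removed, even by administrators):
  aws s3api put-object-lock-configuration --bucket records-vault \
    --object-lock-configuration '{"ObjectLockEnabled":"Enabled","Rule":{"DefaultRetention":{"Mode":"COMPLIANCE","Days":90}}}'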
To your example, though: yes, it's not fun to manage fleet-wide, but you can boot both Windows and Linux into RAM and have network filesystem overlays that patient data gets written to. The SAN/NAS/Ceph clusters can then do backups locally and have anti-tampering in place. This is non-trivial to set up correctly; it would be more resilient than depending on backups, but is much more work up front. For Windows, look into Windows 10 LTSC [1]. It can operate in a kiosk mode and boot into memory, or run with hardened security options to minimize attack surface. Most Linux distributions can do this as well. Ceph can do both transport and filesystem encryption now. I will leave out the Linux boot examples, as I doubt this is where these institutions are getting into trouble.
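The Ceph claim is easy to make concrete, though: transport encryption is the messenger v2 "secure" mode, and at-rest encryption is dm-crypt under the OSDs. A minimal sketch, assuming a recent release (Nautilus or later for msgr2):

  # In-transit: require encrypted msgr2 connections between daemons and clients:
  ceph config set global ms_cluster_mode secure
  ceph config set global ms_service_mode secure
  ceph config set global ms_client_mode secure

  # At-rest: create OSDs on dm-crypt encrypted volumes:
  ceph-volume lvm create --dmcrypt --data /dev/sdb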
In my experience, EDU IT tends to be an extremely small staff carrying a very heavy load of duties and overtime. Frankly, it surprises me this doesn't happen more often.
It's also often quite distributed, with many small IT groups of 1-5 staff members that may or may not coordinate with each other or with the central IT group(s).
I did a contract programming gig for UCSF Med in 2018-2019, and they had a highly competent, mercilessly detail-oriented internal IT team trying to find HIPAA/security holes in my app on the reg. I actually left the project because this level of security wasn't in the original scope of work. I'd be surprised if they weren't applying similar standards to their internal data and backup strategy.