"Medical Device Hacking" is a large part of how my wife and I slept at night for the last few years, based on the open source Nightscout [1] project which interfaced my kids' Dexcom CGMs to our smartphones. Continuous Glucose Monitors allow T1Ds to track their blood sugar without finger sticks and are particularly useful at night for catching lows.
The Nightscout project was a strong motivator for Dexcom to push harder on developing and improving their remote monitoring solution. A solution which could easily have languished in R&D and FDA approval hell for another 5-10 years got to market much more quickly once users were out there DIY'ing their own remote monitors.
Additionally, there are a lot of "mis-features" coded into T1D products, presumably foisted on companies by the FDA to avoid liability, but which ultimately are a mild form of torture on end-users day in and day out. Having the option to turn to open source alternatives when incessant alarms that can't be disabled are driving you mad is better than throwing out the device altogether. For those who haven't used (or don't know anyone who uses) a pump, pod, or CGM -- the problem is that the devices are subject to dozens of variables which impact their performance, reliability, or accuracy, and alarms are often "lacking context" (to put it mildly) or outright wrong.
E.g. when you've treated a low and double-checked with a finger stick that blood sugar is rising, a siren going off every 5 minutes from the CGM at 4am is enough to make you want to remove the CGM and smash it with a hammer.
Having open source alternatives is, I believe, a large part of forcing Dexcom and even the FDA "to the table" to reconsider hard-coded, patient-hostile "features". It's easier to appease the lawyers and go into CYA mode when there isn't a strong open source competitor with 28,000 Facebook followers and a Github repo with 21,000 forks [2].
"Medical Device Hacking" is a large part of how my wife and I slept at night for the last few years
I spend a lot of time in places with little to no cellular coverage. One of the medical devices I use downloads prescription changes from the doctor's office via a cellular connection. So I had to hack my device so that I can make the prescription changes myself.
People need to remember that if/when a GH repo is nuked from orbit, it also nukes ALL forks done in GH.
If you download, and then re-upload into your account, then it's not a "fork" per se, and won't disappear if/when the banhammer comes down.
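For anyone who wants to do that for Nightscout, a minimal sketch of the idea (this just wraps ordinary git commands; the MY_MIRROR URL is a placeholder for an empty repo you'd create in your own account first):

```python
# Sketch: "detach" a GitHub fork so it survives the upstream repo being nuked.
# A bare clone plus a push of all branches and tags copies everything into a
# repo you own, which GitHub does NOT link to the original as a fork.
import subprocess

UPSTREAM = "https://github.com/nightscout/cgm-remote-monitor.git"
MY_MIRROR = "https://github.com/YOUR_USER/cgm-remote-monitor.git"  # placeholder

subprocess.run(["git", "clone", "--bare", UPSTREAM, "cgm-remote-monitor.git"], check=True)
subprocess.run(["git", "-C", "cgm-remote-monitor.git", "push", "--all", MY_MIRROR], check=True)
subprocess.run(["git", "-C", "cgm-remote-monitor.git", "push", "--tags", MY_MIRROR], check=True)
```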
Then again, this is the problem of centralizing a decentralized protocol! You are putting people who don't have your interests at heart in control.
A large percentage of Dexcom developers have had their lives impacted by diabetes in some way. This, and plain old competition, are the biggest drivers of their features and usability in my experience. You have to keep in mind that, like most businesses, they have a long feature pipeline in development that is not public knowledge. If you release a feature and see it mirrored by someone else shortly after, it's as likely that you both thought of it and you just beat them to it as it is that they were inspired by you. Especially when one of you has a much shorter time to market thanks to no FDA oversight.
I’m sure the Dexcom developers are good people. And just look at their stock price — things are going well for them since they released the G6, which, well, actually works most of the time!
Their software is still doing trivial things badly and they deserve to feel ashamed and embarrassed that Nightscout could relay an integer value over the Internet to a smartphone years sooner than they could.
I’m very thankful for Dexcom. They’ve done great things for T1D management. They could do much, much better. They are only scratching the surface of what a CGM system should do, never mind a closed-loop system of which they will be an integral part, and, dare I say, should be in the driver’s seat, with a dumb pump following their orders.
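To make the "relay an integer value over the Internet" point above concrete, here's a toy sketch of that core loop. This is not Nightscout's actual API, just an illustration of how small the essential problem is: an uploader POSTs the latest reading and the phone app polls for it.

```python
# Toy relay server (stdlib only; no auth, storage, or history).
# An uploader POSTs the latest sensor glucose value; a phone GETs it.
from http.server import BaseHTTPRequestHandler, HTTPServer

latest_sgv = b"0"  # most recent sensor glucose value, mg/dL

class Relay(BaseHTTPRequestHandler):
    def do_POST(self):  # uploader pushes a new reading
        global latest_sgv
        length = int(self.headers["Content-Length"])
        latest_sgv = self.rfile.read(length)
        self.send_response(204)
        self.end_headers()

    def do_GET(self):  # phone polls the latest reading
        self.send_response(200)
        self.end_headers()
        self.wfile.write(latest_sgv)

if __name__ == "__main__":
    HTTPServer(("", 8080), Relay).serve_forever()
```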
> Especially when one of you has a much shorter time to market thanks to no FDA oversight.
They're fundamentally different markets, though. The device manufacturer is out to turn a profit on hardware, the open source programmer is out to improve their own individual experience.
Why should the second individual be subject to FDA oversight? I mean, I'm glad the FDA exists, but their function is to regulate the overall market -- not make it harder for me to make my _own_ healthcare decisions.
> Why should the second individual be subject to FDA oversight? I mean, I'm glad the FDA exists, but their function is to regulate the overall market -- not make it harder for me to make my _own_ healthcare decisions.
Regulatory capture from the ADA.
Yes, the FDA does make it harder/impossible to treat yourself. You are an idiot as far as they are concerned, the doctor is the holy grail of decision-making, and even how the doctor comes to a decision is 'holy' knowledge.
I should be able to go down to a drug store and buy most drugs (ideally all, but another story) and administer them to myself. I should be able to treat myself. But all that's locked away behind one of the biggest paywalls we have in this country.
> Yes, the FDA does make it harder/impossible to treat yourself. You are an idiot as far as they are concerned.
FWIW, the FDA doesn't even think about the end-consumer. They're the government's labelling-standards body. They just care that:
1. if you sell a product labelled 'X' (e.g. "milk", or "ibuprofen"), then it should contain only the ingredients—and the concentrations of such—listed in The Big FDA Book of What Products Labelled 'X' Contain. (This covers the "no salmonella" cases, the "you can't call Cheez Whiz 'cheese'" cases, and the "beef withdrawn from the market for containing more iodine than beef usually contains" cases.)
2. if you make up a new product 'X'—really, a new product label 'X'—then "FDA approval" just means convincing them to add a page to their Big Book. You have to write that page: you must claim all the expected effects of consuming an 'X', and exhaustively list all potential side-effects of consuming an 'X'. Then, you must submit evidence that proves to their satisfaction that your reference-product for the label 'X' has all of those effects you listed; and that it has no other side-effects than the ones you listed.
Food manufacturers usually just have to deal with #1. Drug manufacturers have to deal with #2 and then #1. (Or just #1 if they're making existing drugs.)
The FDA was, for most of its life, just about #1: enforcing product integrity. They also did #2, but #2 wasn't a big deal—getting approval from the FDA for a new drug wasn't supposed to be hard or even expensive, as long as your drug really did something. It could even have disastrous side-effects. If you thought it was still marketable despite those, then you could just tell the FDA about them and they'd approve it. (See: all chemo drugs.) Just give the FDA a proven-accurate page for their Big Book, and they're happy.
But then the pages of the FDA's Big Book began to get taken as truth by various other standards bodies, that regulate what can or cannot be sold (sold at all, or sold over-the-counter, etc.)
And, because of that, manufacturers wanted their page in the Big Book to list great effects and few side-effects. Because then the barriers between them and their market are lower.
And what this means is that manufacturers started lying to the FDA, submitting a page describing what they wish the product was like, rather than what it is like. That's the only reason the FDA ever "does not approve" (note: not "rejects", just "does not approve") of a new label—the manufacturer can't prove their claims. I.e., the label isn't true.
Thus began the adversarial and expensive relationship between modern pharma companies and the FDA: the pharma companies want to make everything OTC and want to make a million claims about what each drug does; and the FDA just stands there, shakes its head, and tells them to come back with numbers proving their claims. And then the pharma companies burn through millions/billions of dollars trying to "prove" things that they know aren't true. Until they either eventually fail and just register a realistic monograph; or, rarely, they succeed (due to p-hacking) and end up with a drug that now is being taken by all the wrong people for all the wrong reasons.
Hate the ADA, or the DEA, or any number of other groups that use the FDA's Big Book, but don't hate the FDA themselves. All they're doing is taking a set of words (like "milk" or "ibuprofen"), defining them precisely, and then requiring that companies which use those words in their marketing stick to the definitions those words have in the Big Book.
Really appreciate your comment. I want to learn more about this:
> ...presumably foisted on companies by the FDA to avoid liability...
because I had thought the foisting ran in the other direction. I had been under the impression that the companies want to avoid liability so they nudge the FDA into creating a regulation that supports what they wanted to do anyway. not sure.
As a medical device SW developer: when doing your risk analysis, if you notice that the user missing something important would create a risk for the patient, the easiest (and cheap and lazy) solution is often to add a blaring alarm or scary pop-up asking the user to confirm/check.
This way you can say that you have a mitigation, so the FDA is happy, and you are happy because you can sell your product.
The downside is that it creates a horrible user experience most of the time.
This is unfortunate, but it is a result of the incentives of the different actors.
Another slightly related issue is that you'd rather restrict what your user can do, because then you're sure it will at least be safe. So doctors can often be frustrated because the system is not permissive enough. Most of the time, as a vendor, you'd rather sell a clumsy system than face the risk of a recall.
> to add a blaring alarm or scary pop-up to ask the user to confirm/check ... The downside is that it creates a horrible user experience most of the time
Yes. At a medical data conference I once heard a doctor say that 70-80% of the alarms in the ICU where she worked were routinely ignored.
Agreed. I had the G4 but eventually quit because it was sounding false alarms frequently. The later versions are more accurate but I question whether I would participate if their programs get more closed off. Also don't want to use a closed source mobile app without knowing what personal data might be getting sent out.
As someone working in medical devices: the FDA has really stepped up their game. They have issued a draft guidance on cybersecurity expectations [1] and are now rejecting submissions that don't adequately address security for internet-connected devices. There is still a long way to go, but I think change is happening in the field.
Unfortunately, many large and traditional medical device companies did not incorporate security practices in their design and development phases, nor did they stay vigilant in the post-market phase with regard to the latest and greatest security updates. It is only in recent years, due to many embarrassing news articles reporting on vulnerabilities in pacemakers and infusion pumps, that these companies were forced to take more action. But even then they are still trailing behind other industries, especially the consumer and financial sectors.
It's also worth mentioning that the FDA's focus is different from HHS's or other agencies'. While other agencies care about consumer data integrity and patient health information (PHI), as HHS does with HIPAA regulations, the FDA is focused solely on patient safety and clinical efficacy. The FDA holds medical device companies accountable only if vulnerabilities lead to patient safety issues within the risk management framework. If medical device companies show in their hazard analysis and risk management file that it's unlikely for a vulnerability to be exploited in a way that compromises patient safety, given the clinical controls and intended use environment, then they don't have to act on it.
You can see that in the FDA's postmarket guidance on cybersecurity, where they show a chart of controlled vs. uncontrolled vulnerabilities. So it's not uncommon to see a scenario where a vulnerability has a high CVSS score, but for a medical device intended for use in a hospital, the manufacturer claims that according to their risk management file the same vulnerability is controlled, and thus it's OK not to take any additional measures.
Your point about the FDA's mission is paramount, but in this case it extends to HHS (its parent) as well. It'd be foolish to expect large (calcified) government agencies to, of their own accord, take on tasks so far outside of their core competency. The FDA primarily deals with substances that we put in our mouths (although of course they do have responsibility for medical devices). Security engineering just isn't at the core of their mission; this is the kind of thing that ideally could be outsourced to a better-equipped agency.
I was nodding in agreement for a few seconds. Then I realized I'm not perfect. If I hack my own device I'm going to make a mistake and kill myself. Since that isn't a desired goal I don't want to be tempted.
I want the device to be perfect so I don't have to think about it. Some devices should just work and not be thought of. (I include the refrigerator in this group)
> I want the device to be perfect so I don't have to think about it. Some devices should just work and not be thought of. (I include the refrigerator in this group)
Yes, but you are speaking in ideals. In the real world, that is incredibly rarely the case. So the choice is between a locked-down, heavily imperfect device or an open one that comes with that risk, caveat emptor.
It depends on the device. My company produces pacemakers and neuromodulation devices. I wouldn't want to hack the pacemaker but if I had a neuromodulation device I would totally hack it.
When an article such as this mentions hacking, they don't mean your version of hacking. They mean the spooky news definition of hacking where there's a stock photo of a guy wearing a hoodie (and a balaclava for some reason??) typing on a laptop with a glowing green matrix waterfall on the screen. They mean there's no precaution taken whatsoever against malicious access, which is certainly a bad thing. Even the most radical FOSS guy who built his own phone out of spare parts and has a bumper sticker which reads "my other computer is your computer" is going to put a password on it.
There are pacemakers out there using NFC or some similar tech to talk to an external processor and battery pack. I bet the security on those is utterly tragic. The relevant regulatory bodies are absolutely not going to do a damn thing about it until there's a body count--of that much I am 100% certain.
Murder via medical device malware sounds like the plot of a rejected Gibsonian short story, but stuff like that will happen in our world. Somebody is eventually going to think that might be just the way to off grandpa so they can hurry up and get that inheritance.
So many years ago I started my career at a biomedical manufacturer that sold EEG, EMG, polysomnography and transcranial doppler devices. Most of these were built on some variant of UNIX (SunOS, Unixware, QNX) and were locked down to be pretty secure once networking within hospitals became more common.
The new CEO listened to what medical departments wanted, and that was a PC that could perform the clinical functions but also run MS office. So the entire engineering department spent a couple years porting everything to NT.
Then you had viruses taking out diagnostic devices. So virus scanners were installed. Now you had latency issues between the amplifier and ADC conversion and display or other issues (I don't remember all of them to be fair).
Keep in mind that some of these devices were used in the OR. Now, I'm not saying that Windows was 100% the problem, certainly the idea of multi-use contributed, but we never had any of these issues when we rolled out operating systems that locked down what you could do on the device (technicians couldn't even access the OS unless we told them how) and that we could easily secure.
Has there ever been a case of a medical device being hacked to do real-world physical harm? Considering the billions that will need to be spent to secure future devices, are there not perhaps other areas of healthcare where the same money could save more lives?
My guess is that yes, absolutely, but very few people know about it / a Doctor or nurse was blamed.
Medical system security does not seem very good. When I was operating in the area a while back, one comment I kept seeing was similar to yours. "Yes, the security is bad, but the good these devices do outweighs the bad."
I agree with that, but my follow-up has always been, why can't these devices continue to help patients, but in a secure way? The manufacturers really don't want to spend the money to try and have some form of a security posture?
Rhetorical question. At the end of the day, my pessimistic view is that nothing will happen until some firm finally proves that there has been a high profile attack, there is an ensuing media firestorm, and the regulation process starts happening.
The problem with them is essentially the same as with DRM: you have to keep the device accessible to everyone who needs it and inaccessible to everyone else at the exact same time. The key management is a nightmare.
I believe the best compromise would require forward-thinking leadership and design. Make medical devices that must (or would be) served by communication and control short-range by design. Ideally the link could be turned off by the patient, but be close-range enough that a doctor can access it while the patient is in no position to assist. The added danger is minimal, given that anyone that close who wanted the patient dead could just murder them in other ways.
Have a centralized registry of valid public keys - there are debates about who should run one, but that is a whole other topic. The point is nonrepudiation: an audit trail will be left, which means that in cases of malfeasance the corresponding entity is the one responsible - either directly or by letting their key get compromised. The practical pain is the logistics, of course.
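A minimal sketch of that registry-plus-audit-trail idea, with hypothetical names throughout (in-memory registry, made-up command format) and Ed25519 signatures standing in for whatever scheme a real device would use:

```python
# Sketch: signed device commands checked against a registry of public keys,
# with an append-only audit trail for nonrepudiation. All names hypothetical.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

doctor_key = Ed25519PrivateKey.generate()
registry = {"dr-alice": doctor_key.public_key()}  # centralized registry of valid public keys

audit_log = []  # append-only trail: who issued which command

def apply_command(sender_id: str, command: bytes, signature: bytes) -> bool:
    """Accept a command only if it verifies against the sender's registered key."""
    pub = registry.get(sender_id)
    if pub is None:
        return False
    try:
        pub.verify(signature, command)
    except InvalidSignature:
        return False
    # Nonrepudiation: the signed command is recorded; only the key holder
    # (or someone who compromised their key) could have produced it.
    audit_log.append((sender_id, command, signature))
    return True

cmd = b"set_rate:72bpm"  # made-up command format
assert apply_command("dr-alice", cmd, doctor_key.sign(cmd))
assert not apply_command("dr-alice", b"tampered", doctor_key.sign(cmd))
```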
Medical device security was pretty decent in 1997 at the company I worked for. Until we ported everything to NT from a variety of *NIX OS's. Then it became non-existent.
> I agree with that, but my follow-up has always been, why can't these devices continue to help patients, but in a secure way? The manufacturers really don't want to spend the money to try and have some form of a security posture?
I speculate the truth is benign; security isn't a top feature for many development shops.
> why can't these devices continue to help patients, but in a secure way
My guess is that they are subject to the usual security vs. convenience trade-off: more security means more time and money spent gaining access to the devices for legitimate actors (e.g. nurses), which means some additional patient lives will be damaged or even lost.
Of course, that may well be justified depending on how much dangerous or malicious hacking there is.
It could be possible hackers simply haven't turned their attention to exploiting these devices yet.
There is always a time before a vulnerable system gets hacked, and there is always a person doing the hacking. Maybe this article on HN, some tweet, or some other comment on the internet will prompt someone to break into an IoT medical device and cause harm to someone. Who knows when it will happen.
That said, I wouldn't understand the rationale behind someone who knowingly hacks a medical system for malicious intent. What honor is there in turning off someone's pacemaker? That just seems like murder but with a technological twist.
All nations are interested in the ability to remotely control the pacemakers of the leaders of all other countries. We should assume that Russia, China, and the US are all able to pull it off at will, if it is possible at all.
Would they ever use this? It is an act of war, and they risk the replacement leader being "worse". However, it is something they would like to have. Of course, once someone starts using this, all medical device makers will have to take security seriously, which starts an arms race.
The only protection most people have is that we are not important enough to target. However, if you are a world leader of some sort, you are always at risk that some other country will decide the best way to deal with you is to kill you. Vulnerabilities in your medical devices are just one possible choice such nations have.
Then again, there are lots of other choices for murder. Poison, which has been used going back thousands of years, has the useful property that you can make it look like someone else did it, thus avoiding war.
"That said, I wouldn't understand the rationale behind someone who knowingly hacks a medical system for malicious intent. What honor is there in turning off someone's pacemaker? That just seems like murder but with a technological twist.
"
Somebody may do this, but with current devices it would be difficult to do on a large scale. Usually you need some level of proximity.
So far, bugs in medical devices have been documented that have led to death or physical damage: bad radiation doses, retina-burning laser powers, etc. Given that most of those devices run on MS Windows, I'm still more worried about bugs than attacks, but indeed it would be trivial for bad guys to target you if you need to be hooked up to such devices...
Imagine, for instance, manipulating medical images to make a prominent politician (and their doctors) believe they have a dangerous tumor in their amygdala, making them undergo radiotherapy that fries half their amygdala, turning that politician into a liberal.
Moral implications aside, I can see a reason to explore that idea ;-)
"Individuals with a large amygdala are more sensitive to fear...which, taken together with our findings, might suggest the testable hypothesis that individuals with larger amygdala are more inclined to integrate conservative views into their belief system."
Fascinating. I'm going to avoid going off-topic any further, though.
Security should be the default. I don't think it will cost much more to make them secure relative to what medical hardware already costs; adding encryption to the microprocessor in a state-of-the-art implant isn't going to be a hefty line item on that particular BOM.
Think of it this way--any life-critical machine should have at least some consideration paid to security during design. Even non-life-critical machines need security, so is it acceptable to entirely ignore it in the whole field of medicine? If we leave it that way, then yes, be utterly certain that it will be exploited somewhere by somebody. All information security paranoia is proven correct on a long enough timeline.
Ask any infosec professional--does _____ system really need security? The answer is going to be yes, regardless of what's in the blank. We've learned this many times with many devices which we used to think nobody would ever bother to tamper with, or that couldn't be made to do anyone harm. Every time, our hubris is brightly illustrated, and we finally realize that yes, every networked device has the potential to be crucial to somebody, and it all needs at least some consideration toward security.
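To back the "encryption isn't a hefty line item" claim above with a concrete (hypothetical) sketch: authenticated encryption of a command channel is a handful of lines at the software level. The command and device ID here are made up; this uses the Python cryptography package's ChaCha20-Poly1305 AEAD, which both encrypts and authenticates.

```python
# Sketch: an authenticated-encryption command channel for an implant.
import os
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

key = ChaCha20Poly1305.generate_key()  # provisioned into the device at manufacture
aead = ChaCha20Poly1305(key)

nonce = os.urandom(12)            # must never repeat for a given key
command = b"set_basal_rate:0.85"  # hypothetical pump command
ciphertext = aead.encrypt(nonce, command, b"device-id-1234")  # AAD binds the device ID

# The device rejects anything that doesn't decrypt AND authenticate:
assert aead.decrypt(nonce, ciphertext, b"device-id-1234") == command
```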
We have thought about this a lot at my company, but it seems there is no money in hacking devices, and it usually doesn't scale well. Some person may try to inflict damage on people just for the heck of it, but those will probably be isolated cases. It's potentially much more profitable to hack patient databases, but even there, there are probably better targets in other industries.
There are still medical devices exposed on the internet. I don't think the FDA can regulate that away; we need active efforts to locate and secure holes in critical systems.
[1] - http://www.nightscout.info/
[2] - https://github.com/nightscout/cgm-remote-monitor/