really appreciate your comment. want to learn more about this:
> ...presumably foisted on companies by the FDA to avoid liability...
because I had thought the foisting ran in the other direction: I had been under the impression that companies want to avoid liability, so they nudge the FDA into creating regulations that support what they wanted to do anyway. not sure.
As a medical device software developer, if during your risk analysis you notice that the user missing something important would create a risk for the patient, the easiest (cheap and lazy) solution is often to add a blaring alarm or scary pop-up asking the user to confirm/check.
This way you can say that you have a mitigation, so the FDA is happy, and you are happy because you can sell your product.
The downside is that it creates a horrible user experience most of the time.
This is unfortunate, but it is a result of the incentives of the different actors.
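To make the pattern concrete, here is a toy sketch of what that kind of "mitigation" often boils down to in code. Everything in it (the infusion-rate scenario, the 500 mL/h threshold, the function names) is made up for illustration, not taken from any real device or FDA guidance:

    def confirm_dialog(message: str) -> bool:
        """Blocking 'are you sure?' prompt: the cheap, lazy mitigation."""
        answer = input(f"WARNING: {message} Continue anyway? [y/N] ")
        return answer.strip().lower() == "y"

    def set_infusion_rate(rate_ml_per_h: float, max_safe_rate: float = 500.0) -> None:
        # Hazard identified in the risk analysis: user enters an implausibly high rate.
        # Instead of redesigning the workflow, the "risk control" is a pop-up
        # that pushes responsibility back onto the user.
        if rate_ml_per_h > max_safe_rate:
            if not confirm_dialog(f"{rate_ml_per_h} mL/h exceeds the usual {max_safe_rate} mL/h."):
                return  # user backed out, hazard "mitigated" on paper
        print(f"Rate set to {rate_ml_per_h} mL/h")

    if __name__ == "__main__":
        set_infusion_rate(750.0)  # triggers the confirmation prompt

On paper that pop-up closes the line item in the risk file; in day-to-day use it is just one more prompt the user learns to click through.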
Another slightly related issue is that you'd rather restrict what your users can do, because then you're sure it will at least be safe. So doctors are often frustrated because the system is not permissive enough. Most of the time, as a vendor, you prefer to sell a clumsy system rather than face the risk of a recall.
> to add a blaring alarm or scary pop-up to ask the user to confirm/check ... The downside is that it creates a horrible user experience most of the time
yes. at a medical data conference I once heard a doctor say that 70-80% of the alarms in the ICU where she worked were routinely ignored