The developer here did a good job putting this together so quickly, and it's hopefully beneficial to the hospital staff and the patients themselves. I hope they keep working to improve it, and consider open sourcing it.
The circumstances are pretty exceptional, but two things shocked me a bit: 1. The manufacturer already produces remote monitoring software for the device but it's _too expensive_ 2. Software like this is usually classed as a medical device, which comes with regulations (e.g. https://www.gov.uk/government/publications/medical-devices-s... in the UK)
This is (possibly depending on use) a biomedical device. These are regulated such that we pull separate cat5/6 and put up chain-link fencing in network closets. The Cisco switch on one side of that fence is just vanilla corporate IT with email, EHR, Netflix, etc.; the same switch on the other side, and all the stuff connected to it - from the bedside monitor out through the central alarm station - is a medical device.
That isolation is the presumption referenced in the recent GE vulnerability [0], and mixing the biomed chocolate into the corporate (EHR) peanut butter presents significant challenges [1].
I just came here to say: Major credit to standards like HL7 (and other projects for ontologies like LOINC and SNOMED)
HL7 and FHIR are a major step in interoperability for health, and they should not be taken for granted.
I've worked on integration projects with EHR (electronic health record) systems, and a lot of hospitals still have proprietary formats for exchanging data, with limited documentation.
If you ever work on any health related system, consider starting from those standards!
As someone who was involved in the evolution of HL7 from the policy/medical side, I can tell you:
EMR systems lock us (healthcare systems) into proprietary formats to make migration to new clients difficult; likewise, we don’t (didn’t) push for interoperability because exporting data to other healthcare systems cost us more than it benefited us (we didn’t fight it, we just weren’t gonna spend our money to make it happen).
HL7 and other interoperability tech only emerged because CMS reps made their way to -a lot- of conferences and, with varying levels of bluntness, said “find a way to improve interoperability on your own, or you’ll see it show up in federal regs 12 months from now.”
This is a change that came from active regulators doing their jobs correctly, in spite of the active efforts of the tech industry and the indifference of the hospital industry.
Would a close-up CCTV display of the bedside monitor have worked for this kind of remote monitoring?
I write enterprise medical software for a living and if a hospital had told me they were desperate for a solution in 3 days, I would exhaust other options before writing new critical software.
That'd be a lot more data to push, and you wouldn't be able to record specific data or alert on anything. I don't write enterprise medical software for a living; I just think hundreds (thousands?) of video streams aren't necessarily more reliable, or more elegant in their simplicity, than rushed-out new software. There's always a trade-off (or many).
Interesting idea. I'm not sure how scalable this would be given the fragile nature of the tools.
I want to believe that with enough indications about the monitoring itself (is the screen being refreshed? is the network connected? when was it last checked?) this is a viable option, and it would unlock a lot of possibilities.
But I can't shake off the feeling of uneasiness (what if it's not working and we don't know it's not working?).
PS: I'm not bashing what was done, just questioning the general idea.
I really like this idea of monitoring people like we monitor servers.
I wonder what useful information you could extract from this. Like a very slight downward trend in blood pressure over a day. Could this be used to detect something like a slight internal bleed after a surgery?
The body is very good at compensating for stuff like that. A falling blood pressure is a very late signal. The body has several tools at its disposal to maintain pressure (constricting the blood vessels, slowing the output of the kidneys, increasing the heart rate, etc.). Two of those (urine output and heart rate) are already trended throughout the day for hospitalized patients.
Our hospital actually records the patient monitor data of ICU patients and analyzes it for scientific purposes. And I agree, there is a tremendous amount of information in it, although bleeding is usually detected faster by other means (e.g. blood gas analysis).
My thoughts too, but I'd not be surprised if they didn't know these tools existed. From my experience, Prometheus and Grafana (and monitoring practices and tools in general) are absolutely unknown to most developers.
As others have stated, this kind of solution would not be accepted in Europe or North America, but as the blog post stated, this work was requested by local health authorities. If a place cannot afford a solution developed by companies in the field, I think this can be better than nothing. At least it is something that can be improved. It's a shame that solutions exist but there is not enough money to buy them.
It's as though no one commenting negatively actually read this article.
> We created this software on a request from healthcare staffs. It is not a commercial application. Even with this system, we strongly suggest doctors to visit their patients, take real measurements. As this software was developed fast due to prevailing pandemic situation, we released it with the most urgent feature monitoring. We tested this for long run, with multiple devices as well. So far it worked out well. It does not indicate this is perfect, we are working on improvements and fixing bugs until its very stable. Thus we have adviced doctors to use this with CAUTION
It's almost as if the options were either "Do nothing" or "Do something", and they chose to "Do something." This does not sound at all like someone just decided to build their own version of a software they already had access to. It sounds very much like they could either make this software or try to get along without it.
I think people just don't understand the budget difference between countries - Sri Lanka has two orders of magnitude less spending on healthcare per person than the US (Sri Lanka: $160 per person, US: $10,200 per person, 2017 [1]).
Two orders of magnitude less means there is a lot of making do with what you have.
I understand the desire to "do something", but the certification and testing process for something like this is how you find out if this actually improves patient outcomes or not. Sometimes "doing nothing" in a certain area gives a better outcome.
I'm glad they're advising doctors to use caution.
I have some architectural concerns about how it's built (a browser's JavaScript engine is not reliable enough to be used as a safety-critical alerting engine!), but I can see how it's attractive.
I expect there are a lot of viable and useful shades of grey in between official standards certified commercial products and from-scratch one-off Go/JavaScript.
Even if they're going to roll their own rapid uncertified solution, they could have still done so with more reliable and proven technology than what they appear to have chosen to assemble this system.
Many components for making safety-critical systems are commonly available in non-certified variants. These could be used to assemble a viable system, but would of course lack the process, practice, and evidence for certification that would normally be required. But, at least the technology choices would have, at a minimum, "proven in use" benefits if one is going to otherwise punt on the procedure & practice rigor that is typically applied.
Given that most of the components to build a safety-critical system, or their non-qualified kin, are commonly available, it doesn't make much sense to me why, if you were tasked with putting this together quickly and cheaply but were still taking the safety-critical nature of it seriously, you would cobble it together with the stack chosen here.
Even if the choice is "Do something" or "Do nothing", there's more than one way to "Do something", and this seems like a particularly odd way to have gone about it all things considered.
Sure. You can get application boards with ARM Cortex-R or Infineon AURIX MCUs, there are numerous free and/or open source RTOSes (some which have commercial variants where you pay for the certification evidence), there are certified compiler toolchains that are forked from specific versions of GCC, there are pre-existing safety-critical systems which already incorporated various open source SOUP (software of unknown provenance) that you could use knowing someone-somewhere made a successful certification claim for it (proven in use, etc.), there are approved commonly available languages (C, C++, Java, Ada, etc.) that are spelled out for use at different SILs in IEC 61508, and so on.
That's not to say that the use of any of these things would necessarily result in a qualifiable system (it wouldn't), but you'd at least be in the same general vicinity of what is usually used/required to build such a system.
But, let's consider even the simplest aspect specific to the system this article is about. There are already multiple open source HL7 parsers that have been in-use, tested, and maintained for quite some time. What was the point of reinventing that wheel here, and if one was going to reinvent that wheel, then why not in a way that improves assurances rather than reduces them?
I guess I side with the OP a lot more than you because I suspect none of the things you listed would've been obtainable/actionable in even close to as much time. Resources (knowledge, experience, money) are not infinite, nor are they irrelevant.
I'm not sure I understand what you mean. Which things I listed are difficult to come by? With the exception of the SBCs/application-boards (which may need to be ordered like any other piece of hardware), the rest of it is just as downloadable and available as anything else used in the OP.
But, even if one concedes that the technology stack they chose was somehow the only one available to them, then what level of rigor should be applied to adapting a stack that would typically be considered unfit for this task under normal circumstances? I couldn't find any mention of their testing methodologies.
Knowledge/experience/expertise are not readily available. And I would find it hard to believe that a person would be able to churn out a piece of technology worth using, first try, with tools they've never used before. So in terms of your "hard to come by" question, I would say "a depth of experience at a moment's notice" as a response.
"Crisis" seems to be a scenario that your methods might not work well in. Groups of people were at risk of death. Would implementing a solution increase their chances of death? If you're continuing to monitor patients as you normally would without this tech, isn't worst case just "doesn't improve"? And isn't best case "We caught more problems than we would by normal monitoring" ?
I can't really tell what you're advocating here. Using both unproven/unfit technologies and foregoing safety processes is worthwhile risk because the people doing it also lack the expertise to do otherwise?
The "worst case" that can come out of false positives & false negatives in safety-critical conditions has a bunch of dimensions depending on an enormous number of environmental and contextual conditions. There's no PFMEA nor DFMEA supplied in the OP to indicate how one should consider their choices.
This is really fantastic to see. I've been involved in contracts using HL7 protocol, and it's always a big and expensive beast with proprietary connectors. The fact that you figured it out so quickly is awesome.
Hats off to these students. Great job under time pressure and in the realities of an environment in which one can't just spend one's way to a solution. If only more people spent time building useful things like this instead of yet another cat monitor, new ways to waste more money without leaving the house, and other non-life-enhancing trivia. The collective skills out there are almost unlimited... especially the incredible people who post and comment on HN.
Please, everyone, reconsider the uses to which you put your amazing abilities. Code could change the world if we built more useful things, fewer blogs, fewer shopping carts.
Closing point: once a project is dealing with hardware it's 20x harder. But the outcome is better. In code as in life, when faced with an easy choice or a hard one, always take the hard one. It's how we grow as people and extend our skills.
Are there any similar monitoring (hardware) devices that you can buy just for yourself? (aka you don't have to be a hospital or talk to a sales representative to be able to buy one)
I don't care that much if it has a display, just being able to access the information similarly to what's described in the article would be enough.
Withings (was bought by Nokia but now Withings again) have a set of off-the-shelf tools you can buy. Scales, thermometer, blood pressure monitor, sleep tracker.
If anyone needs to view or record data from multiple medical devices, e.g. for research purposes, Vital Recorder is a quite well documented solution. It is free of cost, but unfortunately it is not open source.
They also have a solution for an ICU like monitoring system with several beds, but I have not used this functionality.
I assumed they used the words "real time" in a patient monitoring sense to mean Doctors being able to monitor patient data without having to go between rooms themselves or wait for reports.
Kind of how you'd say you have a dashboard for your business to see metrics in real time (as opposed to some kind of daily report).
I think that usage of real time is very common outside of a pure engineering sense, so you're being a bit of a stickler by being annoyed at this specific instance ;)
> Kind of how you'd say you have a dashboard for your business to see metrics in real time (as opposed to some kind of daily report).
I think calling it a "streaming" or "live streaming" interface is a better descriptor than "real-time".
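To make that concrete, here's a minimal sketch of what a "live streaming" vitals endpoint could look like with server-sent events and nothing but Go's standard library; the payload shape and the one-second ticker are made up for illustration:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"time"
)

// streamVitals pushes server-sent events to the browser as readings arrive.
// The browser consumes them with `new EventSource("/stream")`.
func streamVitals(w http.ResponseWriter, r *http.Request) {
	flusher, ok := w.(http.Flusher)
	if !ok {
		http.Error(w, "streaming unsupported", http.StatusInternalServerError)
		return
	}
	w.Header().Set("Content-Type", "text/event-stream")
	w.Header().Set("Cache-Control", "no-cache")

	ticker := time.NewTicker(1 * time.Second) // stand-in for real device readings
	defer ticker.Stop()
	for {
		select {
		case <-r.Context().Done():
			return
		case t := <-ticker.C:
			// Hypothetical payload; a real system would send parsed vitals.
			fmt.Fprintf(w, "data: {\"bed\": 1, \"hr\": 72, \"ts\": %q}\n\n", t.Format(time.RFC3339))
			flusher.Flush()
		}
	}
}

func main() {
	http.HandleFunc("/stream", streamVitals)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Delivery here is best-effort (the browser simply reconnects on failure), so calling it "live" or "streaming" seems more honest than "real-time".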
> I think that usage of real time is very common outside of a pure engineering sense
Maybe, but we're talking about the software engineering world here aren't we? I think we should seek to be precise with our terminology and try to avoid unnecessary ambiguity.
I agree with you and have upvoted your comments; however, I'll also note that we distinguish between different disciplines in software engineering. "Real-time" has a very specific definition in embedded systems and OSes, but this is clearly a high-level application more tied to business logic than, say, rocket control systems.
I think both uses of the term are acceptable in their different contexts. (But yes I also agree "streaming" is a very good term.)
I don’t see why you had to call him out. “Real time” is also an engineering term with a strict definition. If you paid me for a real-time stock ticker and you later found out it had a delay of a second or two, you would want a refund.
Asking for a real time stock ticker is also not using "real time" as a term of art; having a 2 second delay in stock data might be a failure to deliver on application requirements, but it is not necessarily a failure to deliver a "real time" system if the 2 second delay is strictly reliable.
The non-strict usage is far more common than the term of art. One would be well advised not to assume "real time" is being used as a term of art unless that is specifically called out, or clear from the context.
I'd say common usage of the phrase "real time" means with respect to human perception, and that is how it's being used here.
> I'd say common usage of the phrase "real time" means with respect to human perception, and that is how it's being used here.
So do you consider YouTube a real-time system? After all, images are moving on your screen "in real-time" from the user's perspective.
It seems to me that these are all streaming systems, and when the data is "live", then it's "live streaming".
Even colloquially, I hazard that "real-time" has some implication that there's some correspondence between your temporal reference frame and the original temporal reference frame. I think the engineering definition just made that correspondence precise.
I'm not speaking to what I consider "real time," I'm speaking to how it is commonly used as a phrase. But given that...
Yes, people do say "real time playback" with respect to video. Generally, however, that is only noteworthy to call out when it's unusual, such as "real time 16k playback."
You're not wrong that the distinction isn't terribly important for this story, however that term absolutely has meaning in the world of fintech and places reporting on financial news - near/soft real-time are very distinct from real-time.
It's been 5 years since I've worked in that industry, but I still view the use of "real-time" the same way and can understand some people's issue with the term in this case.
This comment always shows up and I find it strange. Surely engineers can realize that while "real-time computing" has a strict definition, the phrase "real-time" has much broader meaning and it bumps up against the software world very often.
Sure, but this is an engineered solution we're talking about. Don't you think it should use the engineering terminology?
I'm not calling out this person specifically, I'm just hopeful that we can use more precise terms instead of overloading existing ones. I think "streaming API" or "live streaming" is a better fit for these purposes.
If we're going to play that game, then "real time" may never have come to refer to the hard system guarantees as you think of them.
The earliest computers didn't even include an operating system, never mind one with real time guarantees as we think of them today. You submitted your program on paper tape (later, punch cards), and hours later you'd get the output. This was the batch computing era. The idea of getting a (soft) real time response from the computer (which was the size of a room) was unheard of.
As computers evolved, real time systems came to refer to computers that operated on the opposite of batch computing. The user could interact with the system in "real time", rather than having to wait hours after submitting the job for the results.
Later on came the distinction between systems with hard real time (vs soft) guarantees. Computers didn't have the same CPU speed that we have access to today, so even audio processing required special real time guarantees.
Eventually, "hard real time" was shortened to "real time", which takes us to today.
Systems which have existed since the dawn of computing, such as banks back in the 50's, or the IRS, still hold onto the batch vs real time nomenclature, possibly because batch processing systems still exist. Banks still process records nightly in batches, and some even still run on newer versions of the same IBM mainframes, though some are starting to move to "soft" real time systems. (If you ever wondered why it can take days for a check or bank wire transfer to clear, this is why.)
So your argument is that the term "real-time" didn't exist back in the mainframe and batch processing days, so we can't give it a technical meaning now?
Look, this is simple: literally all of our technical textbooks and engineering manuals agree on what "real-time" means.
If people were running around calling their smart phones "desktop computers", or "laptops", or "mainframes", you'd be very confused, I think. Sure, smart phones and desktops have a lot in common, even run a lot of the same code, and sure, our phones have more computing power than mainframes back in the day, but overloading this term conveys literally no benefit and actually creates confusion.
Of course I'm pissing into the wind here, but whatever; if I've convinced even a couple of people to be more precise with their terminology, I'm fine with that.
> real time systems came to refer to computers that operated on the opposite of batch computing. The user could interact with the system in "real time", rather than having to wait hours after submitting the job for the results.
As someone who grew up with those systems, this is a major mis-statement of the terms.
In the early days the terms were "batch", "online", or online's near-synonym "interactive".
Real-time has always meant bounded-latency and guaranteed execution since the earliest days, to actual computer systems engineers.
It is a true statement that many people use the term differently today - much to the detriment of precise communication between humans. But there was never any confusion about the term until perhaps the 90's and widespread interactive screen-based applications.
The real problem in speech is not precise language. The problem is clear language. The desire is to have the idea clearly communicated to the other person. It is only necessary to be precise when there is some doubt as to the meaning of a phrase, and then the precision should be put in the place where the doubt exists. It is really quite impossible to say anything with absolute precision, unless that thing is so abstracted from the real world as to not represent any real thing.
Richard Feynman
I agree that this is soft real-time but think that the usage is pretty clear in the web world. I'm not sure it's a bad thing that it has taken on a different meaning in a different context.
I view real-time as "the clients reflect the true state of the world without taking action." So if something changed the state of the world, all clients paying attention to that should also be updated.
I’ve been working professionally in the web for decades and I can’t remember anyone ever using the term “real-time”. I hear “live” a lot. Live updates.
> I view real-time as "the clients reflect the true state of the world without taking action." So if something changed the state of the world, all clients paying attention to that should also be updated.
My point is that when you say real-time in the web world, you can assume "soft real-time" because of the context. You would use specific language like "hard real-time" if it was so.
Saying context doesn't matter is just ignoring how everyone else uses language and saying that your view of it is right.
I meant that "live streaming" doesn't have a different meaning in the web world.
So now we have two terms referring to exactly the same thing, which doesn't resolve the term conflation problem I initially wrote about. The "web world" context isn't that different from the larger software engineering world that they should repurpose engineering terms.
> Saying context doesn't matter is just ignoring how everyone else uses language and saying that your view of it is right.
I am saying that web programmers do tend to use professional language wrong, that it's unfortunate, and that we should try to correct it when it happens. I don't think the first part is controversial, but apparently trying to insist on precise engineering terminology when talking about engineering systems is controversial.
This page gives a broad definition for real-time and then breaks it into different categories (hard, soft, fail-safe, fail-operational).
Is the default definition of real-time "hard real-time", or does it always need to be defined by its classification?
My point around context is that no one in the web world is saying that soft real-time systems are hard real-time systems. That would be blatant misappropriation of a term. They are saying that "real-time" defaults to "soft" in the web world. Just like "real-time" defaults to "hard" in the embedded world.
I don't know if I would classify this project as the embedded world (despite it using hardware), because it's consuming a stream of packets and isn't at the actual physical device level. I may be misunderstanding what they're doing, but I believe this to be the case.
The broader reason why I commented in the first place is that your comment feels like gate-keeping of a term when most people are going to quickly pick up on the context of how the term is being used. I understand and agree with your point around hard vs soft, but I don't think it's as clear cut as you seem to think it is regarding what the default classification is in different contexts.
Downvoted because 5 minutes is absurd as a STW GC time for ANY GC'd language. Heck, I wouldn't accept 5 minutes of lag in a Python prototype.
Even in the dewy-eyed early years, Go saw STW latency in the hundreds of milliseconds. By 1.8 it's in the hundreds of microseconds.
Your hard real-time is out the window as soon as routing and UDP are involved. The intent is "real time bounded by the amount of time it would take doctors to react to a code blue (heart issue/failure)". One second of reaction time (thousands of GCs, dozens of health checks, a near eternity in program time) isn't going to make a difference when human reaction latency is hundreds of milliseconds.
Call it "live streaming" if that sounds more objective, but conceptually if the clients (humans) don't perceive a difference in state, it's real time (to them).
Save the milliseconds-hard-real-time talk for rockets and cars.
Would love to see a worst case analysis for Go though. (My worry is that long-running applications are more prone to extended GC STW pauses, since the OS starts to swap pages out and the GC tries to access them, which might take a while.)
Of course, if you could guarantee that Go's STW GC always takes less than a second (or even 10 seconds), that would be great (and would make Go a hard real-time language, though of course not a very fast one).
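FWIW, pause times are at least easy to observe, even without a formal worst-case analysis. A minimal sketch, assuming you just want to watch the reported STW pauses under some synthetic allocation load:

```go
package main

import (
	"fmt"
	"runtime"
	"time"
)

func main() {
	// Generate garbage so the collector has something to do.
	go func() {
		for {
			_ = make([]byte, 1<<20)
			time.Sleep(time.Millisecond)
		}
	}()

	var m runtime.MemStats
	for i := 0; i < 10; i++ {
		time.Sleep(time.Second)
		runtime.ReadMemStats(&m)
		// PauseNs is a circular buffer of recent stop-the-world pause durations;
		// the most recent one is at index (NumGC+255)%256.
		last := m.PauseNs[(m.NumGC+255)%256]
		fmt.Printf("GCs: %d  last pause: %v  total pause: %v\n",
			m.NumGC, time.Duration(last), time.Duration(m.PauseTotalNs))
	}
}
```

Running with GODEBUG=gctrace=1 prints similar per-collection numbers. None of this gives a guaranteed bound, of course, which is exactly the hard vs. soft distinction being debated above.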
5 minute stop-the-world garbage collection? Go has done quite a bit to improve garbage collection times. I honestly don't believe that it could get anywhere near as bad as you imply.
I apologise for this comment, you've done some great coding, but this scares the shit out of me.
There's a reason medical certifications are so hard to get, and medical software is so expensive.
You're storing patient information in postgres. What certifications do you have to assert that the patient data is stored securely, in line with your government guidelines on patient/medical data? There's a damn good reason this is the "holy grail" of information security certifications.
You've got critical alerting built into the browser window using JavaScript.
This "alerting" is the kind of critical thing that sometimes needs *immediate" intervention, or someone could die. What happens if your browser experiences a JavaScript error blocking processing? And your alerts don't fire?
What happens if they fire too often and you get "alert fatigue" because they're not tuned correctly or in line with the other alerts available at the bedside/nursing station?
How much testing have you done to correctly assert that you're interpreting the HL7 or other specs correctly? And aren't misinterpreting data for some conditions or types of individual?
The "throw things together quickly" startup mentality might (I stress might!) Be okay where it's the difference between nothing at all and something that can save lives, in a country like Sri Lanka, during a global pandemic, fine.
But afterwards, this is so much junk without serious thought and time put into certifying it.
Medical, Aerospace -- really, any safety critical industry where your code working or not could mean someone is seriously injured or dies as a result -- is an industry that needs disruption, but that disruption should happen slowly, carefully, and safely.
> We created this software on a request from healthcare staff
If this is some small-town hospital in Sri Lanka, the choice is between an unaffordable certified solution and not having any monitoring. If medical software didn’t bleed them dry, they wouldn’t go this route.
> disruption should happen slowly, carefully, and safely
Disruption always happens this way - the same way Uber broke existing laws. Yes, a few people will die. But this isn’t surprising when the alternative is even worse.
> the choice is between an unaffordable certified solution and not having any monitoring.
No, this isn't _necessarily_ the choice. Without a "false sense of security" that an imperfect monitoring system might instil, you have nurses and doctors actually doing rounds and checking their patients.
> Disruption always happens this way - the same way Uber broke existing laws. Yes, a few people will die. But this isn’t surprising when the alternative is even worse.
This is an absolutely horrible viewpoint to have. People dying because of "disruption" so a few companies can make a few more dollars is _never_ acceptable.
It's funny-sad watching my fellow tech people debate civics and public policy and talk about how often "Something must be done, this is 'something', so we will do it" exhibits itself. Everyone nods or cheers as if we have some leg to stand on.
When it comes to solving technical problems? We are ever so happy to do exactly the same thing.
Any solution is better than no solution. Except when no solution causes people to stop trying to delegate an important responsibility. Which is quite frequently.
A crap solution crowds the problem space. If a better solution is possible, it now has to defend itself against the incumbent. Explain why it is more expensive, why people should be bothered to switch.
If you can't do something well, then for pity's sake let someone else try. Log away every cost of not doing it at all and then when you can justify doing it well, build your pitch.
I think we can only ascertain whether this is a good or bad thing if there were data on the number of valid abnormalities caught by this system vs. having nurses and doctors do rounds. We also have to take into consideration the fact that they may run out of money for disposable protective gear, or even have the amount of protective gear available for purchase drastically reduced. From the disclaimer in the post, it also seems like they're using this on top of their typical monitoring, so that the staff can have insight in between visits.
Indeed. It’s like rubber gloves during this pandemic. People think once they’re on they’re protected - you only gain increased protection if you know what you’re doing.
You are the one who brought up “disruption”. In this particular case, someone created a free/affordable solution for the hospital. I am not sure how you can read “make a few more dollars”
Yes, in the worst case people could die; that is disruption, that's the reality. The sooner you accept that the better, or you're going to have a hard time.
I had the same thought. But I think it's more complex than that.
Always consider the alternative. This could be a hospital in a remote part of a third-world country. Maybe they're understaffed. How are they currently handling the task of gathering information from monitoring devices and reacting to alarms?
Maybe, their nursing staff has to run from bed to bed to check patient's vital signs and device alarms. Emergencies would frequently be missed because they are understaffed and checking is irregular. Now, you could introduce software which provides centralized monitoring. If it's introduced on top of the existing activities (i.e., running from bed to bed), it leads to a net benefit - you catch emergencies earlier and consequences of malfunctions are less severe. But if it's introduced to replace the existing activities, it may lead to patient harm.
Sure, it's self-coded, browser-based and buggy - but you always need to weigh risks with benefits, and those depend on usage context.
Of course, in most western countries, this would be completely illegal. But these are also the countries in which medical software looks like it's from the 90s, with catastrophic usability and missing features.
We need to ask ourselves: Right now, we heavily prioritize patient safety over innovation - but have we got that balance right? What are patients missing out on if we could just bring a few more of the latest advances in technology to their bedside?
You know, not machine learning, the blockchain or the internet of things. Rather things like browser-based applications which "just work" and have great usability.
Note: I'm a physician, software developer and consultant for medical software certification :)
> Maybe, their nursing staff has to run from bed to bed to check patient's vital signs and device alarms.
It feels to me like the management has misunderstood the cost of the software vs. not having the software. It feels like they're saying "this software is expensive, and doing nothing is free" when they should be saying "having all these healthcare professionals spend time putting on and taking off PPE to check patients is costing us this much per year".
As you probably know, an ICU will go through 30 sets of PPE per patient per day. That's a lot of time putting stuff on and taking it off.
Sure, but there are plenty of technologies that are applicable to safety-critical systems, or are safety-critical adjacent, which are freely available. There are MCUs, application boards, RTOSes, programming languages, compiler toolchains, network stacks, parsers, etc. available which are the same as, or close to, those that would commonly be sourced and deployed in a safety-critical context.
So, why not use those to build the "something is better than nothing" solution?
This was a quick and dirty hack to improve access to patient data, done with what was on hand, for a constrained deployment using specific known devices. They didn't have anyone with knowledge of the tech you mentioned, some of which requires spending months of setup unless you have practical experience delivering on those platforms. Just getting a more safety-minded setup for an MCU using free software can be a harrowing experience.
And they don't have the money to just contract it out or pay for the commercial grade stuff.
They did what they could with what they had, with explicit mention that it's not good on safety and security - but it brings some benefit now.
Here in Poland, a few weeks into lockdown, nobody asked for certifications on volunteer-made PPE parts anymore, because shoddy PPE with no certification was still better than none.
You’re right. There is no way this would be deployed in a UK hospital as it stands. It might be some of the most dangerous ideas encapsulated in code I’ve ever seen. I disagree with standards like DCB0129, but they’re there for a reason. This would not pass.
This is the comment that was in my thoughts and I failed to write it.
I really hope they 1. open source it, 2. continue to work on this throughout the crisis and get it to a state where it's actually suitable for critical care, and then 3. work on achieving the relevant certification.
It sounds (just guessing) like the device vendor sells their own software separately, and is unwilling to budge on price during this time, forcing an already stretched hospital to look for new solutions.
That’s one of the dumbest platitudes ever deployed to deflect criticism, and I wish people would use it correctly.
“This thing has absolutely no evidence of reliability or safety in a critical environment” is not criticizing it for being less-than-perfect. It’s criticizing it for being possibly inferior to the status quo.
Here’s one simple example:
Staff gowning up for routine rounds are much more careful, and safer, than staff rushing into an emergency code. If this thing throws up even the occasional false alarm, its cost to staff (in exposure) could easily and massively outweigh any reduced rounding requirements.
That’s not “oh, well that’s not perfect.” That’s “oh, that might be worse, masquerading as better.”
“Perfect is the enemy of the good” is a wildly irrelevant comment.
> The deadly virus can infect you with a very small mistake. As healthcare workers, our frontline has to wander around the isolation wards to check vital signs of a patient from time to time. This task involves disposing of the protective gear after a visit. All just to check some reading on a device.
> A request from health authorities reached us to develop a remote monitoring system for isolation wards. There are expensive softwares to remotely monitor them. But Sri Lanka might not be that rich to spend such amount of money.
I think you're misunderstanding the critique of the parent... In the software world we often tend to interpret "The perfect is the enemy of the good." as "If it's the only software solution it most certainly must be a good one.". But sometimes there are non-software solutions that are even better suited to solve the problem - engineering wise that MUST(!) also be taken into account.
What makes you think the team covered enough edge-cases to be "good enough" software? Do you think the presentation in a single blog post is enough information about a system to determine its quality and reliability?
> If it's the only software solution it most certainly must be a good one.
We have different interpretations. For me, TPITEOTG means:
Choose one: a solution that works well but is clearly not perfect, or no solution at all.
> Do you think the presentation in a single blog post is enough information about a system to determine its quality and reliability?
Epilogue FTA:
> We created this software on a request from healthcare staffs. It is not a commercial application. Even with this system, we strongly suggest doctors to visit their patients, take real measurements.
> As this software was developed fast due to prevailing pandemic situation, we released it with the most urgent feature monitoring. We tested this for long run, with multiple devices as well. So far it worked out well.
> It does not indicate this is perfect, we are working on improvements and fixing bugs until its very stable.
> Thus we have adviced doctors to use this with CAUTION
Many of the complaints in the OP were specious for the situation in play:
> You're storing patient information in postgres. What certifications do you have to assert that the patient data is stored securely, in line with your government guidelines on patient/medical data? There's a damn good reason this is the "holy grail" of information security certifications.
This is monitoring data from dying patients in a third world country. Do you really think that they should have spent a couple months making sure hackers couldn’t access patients’ vitals before putting into use?
> You've got critical alerting built into the browser window using JavaScript.
Yes, because that is the language of the UI toolkit they are using.
> This "alerting" is the kind of critical thing that sometimes needs immediate" intervention, or someone could die. What happens if your browser experiences a JavaScript error blocking processing? And your alerts don't fire?
The alternative appeared to be that those alerts might not be noticed anyway because they might not have the staff to gown up and go into each room frequently enough.
> What happens if they fire too often and you get "alert fatigue" because they're not tuned correctly or in line with the other alerts available at the bedside/nursing station?
What happens if the device in the room fires too often?
> How much testing have you done to correctly assert that you're interpreting the HL7 or other specs correctly? And aren't misinterpreting data for some conditions or types of individual?
They seemed to find that it was accurate enough for the crisis at hand.
> The "throw things together quickly" startup mentality might (I stress might!) Be okay where it's the difference between nothing at all and something that can save lives, in a country like Sri Lanka, during a global pandemic, fine.
Whelp, here comes a “not perfect but good enough to use” part:
> <further hand wringing on future concerns irrelevant to the situation under discussion>
You gotta start from something. That is progress. You make improvements over time. Sure, in the worst case people can die; that's something you have to accept.
IMHO, this would be a perfect application to build using Elixir and Phoenix LiveView. I think it would provide really robust realtime capabilities, and fits well with things like binary pattern matching that Elixir and Erlang handle well.
I personally wouldn't use web technologies to write critical medical software, but if I did it would be in Elixir or Erlang for the optimized garbage collection and supervision trees.
I agree with you on LiveView as well. I believe it would prove to be more reliable than client-side JavaScript.
Anyone have any recs for a pulse oximeter you can hack on? I’ve been looking for one that wouldn’t be too difficult to connect to. I could barely find any of the old ones, let alone something that looks like it can be borderline portable.
Instant feedback is possible in Go as it compiles super fast. Also, it's much simpler to write because it's a small language and the concurrency primitives are so simple to understand and use. It has its problems, but it's a great choice here.
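For instance, fanning readings from many bedside devices into a single stream is only a few lines with goroutines and channels. A rough sketch, with the device addresses and Reading type invented for illustration:

```go
package main

import (
	"bufio"
	"fmt"
	"net"
)

// Reading is a placeholder for one line of output from a monitor.
type Reading struct {
	Bed  string
	Data string
}

// listen connects to one device and forwards every line it sends.
// A real deployment would also handle reconnects and shutdown.
func listen(bed, addr string, out chan<- Reading) {
	conn, err := net.Dial("tcp", addr)
	if err != nil {
		fmt.Println("connect failed:", bed, err)
		return
	}
	defer conn.Close()
	scanner := bufio.NewScanner(conn)
	for scanner.Scan() {
		out <- Reading{Bed: bed, Data: scanner.Text()}
	}
}

func main() {
	// Hypothetical monitor addresses; a real system would load these from config.
	devices := map[string]string{"bed-1": "10.0.0.11:9100", "bed-2": "10.0.0.12:9100"}

	readings := make(chan Reading)
	for bed, addr := range devices {
		go listen(bed, addr, readings) // one goroutine per device
	}

	for r := range readings { // single consumer sees one merged stream
		fmt.Printf("%s: %s\n", r.Bed, r.Data)
	}
}
```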
The most valid reason for using a language is simply because the team is comfortable with it.
> instant feedback is possible in Go as it compiles super fast
Heh, you should try a language that comes with a repl and allows you to evaluate code directly from your editor in any context and get the results back. Then you'll have experienced "instant feedback". Until then, I can agree that Golang compiles faster than other languages, but it's nowhere near instant. In some other languages, you don't really care how fast it compiles as you only do it for deployments anyways (like Clojure).
A REPL only gives you instant feedback on code that you type into the REPL. If you make some changes to three modules and you want to find out the effect of those changes then you need to...recompile the modules. Having a REPL doesn't help.
I don't find that a REPL really gives me anything that I don't get from tests in Go. The only difference in practice is that I type the code into a text editor rather than into a REPL prompt (which is actually a better experience). On top of that, it's usually good to have that kind of code saved somewhere rather than existing ephemerally in a REPL session.
Depends on how the language you use works. Golang is not made for repl usage so of course you don't really get the full power. Compare with something like Clojure where redefining a function in a repl, makes every function using that function also use the new definition. Start treating functions as just functions without any implicit state, and you can start to overwrite things in the program on the fly. Super valuable when debugging and otherwise also when creating code as you can much easier test ideas.
I don't know if it has a different word, but a repl for Clojure using a repl-workflow is very different from a repl for Golang with the normal "save -> compile -> run" workflow you have with Go.
For example, using a repl-workflow in Clojure doesn't mean you don't write code in your editor. I use vim + vim-fireplace to write my code, then highlight the parts I want to evaluate. So I get both in one. The repl is running in the background, and vim-fireplace communicates with it. Combine this with a SPA and I can have something like `(.alert js/window @username)`, highlight it and evaluate, then have an alert prompt in my browser window, which is next to my editor. Saves me a ton of time.
Yes, I've used Clojure and other Lisps before. The thing is, I don't want to overwrite function definitions in a REPL, because I then lose track of what code I'm actually running. It's much more effective just to make changes in a text editor and track your state using version control. In a language that compiles quickly, a REPL just adds an additional layer of complexity for no benefit. In a language that doesn't compile quickly, redefining functions on the fly in a REPL is a useful hack that lessens the pain somewhat.
>I use vim + vim-fireplace to write my code, then highlight the parts I want to evaluate. So I get both in one.
Sure, and with Go in VSCode, I just click 'run' above the test I just wrote. It isn't any more difficult.
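To illustrate, a Go test is just a function in a _test.go file, so the inline "run" button (or `go test`) gives a pretty tight feedback loop. The parseSpO2 helper here is hypothetical:

```go
package vitals

import (
	"strconv"
	"strings"
	"testing"
)

// parseSpO2 pulls an integer percentage out of a string like "SpO2=97".
func parseSpO2(s string) (int, error) {
	return strconv.Atoi(strings.TrimPrefix(s, "SpO2="))
}

func TestParseSpO2(t *testing.T) {
	cases := []struct {
		in   string
		want int
	}{
		{"SpO2=97", 97},
		{"SpO2=100", 100},
	}
	for _, c := range cases {
		got, err := parseSpO2(c.in)
		if err != nil || got != c.want {
			t.Errorf("parseSpO2(%q) = %d, %v; want %d", c.in, got, err, c.want)
		}
	}
}
```

Run it with `go test -run TestParseSpO2`, or via the editor's inline run link.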
> much more effective just to make changes in a text editor and track your state using version control
Statements like this make it seem like, while you have probably dabbled in Clojure and other Lisps, you haven't used them substantially, because if you're using Clojure, you still make your changes in your favorite editor and code is still tracked in version control. It's not REPL _or_ code editor, it's code editor + REPL backing.
Redefining functions on the fly is not a hack to lessen the pain; it's a defined workflow that lets you work faster. If you're using it and it feels like something to lessen the pain, you're using it wrong.
> Sure, and with Go in VSCode, I just click 'run' above the test I just wrote. It isn't any more difficult.
This assumes a lot of things around the code that you're executing. First, you've already setup a test and it's already isolated enough. With a powerful repl + idioms, you don't need to do that, as you can execute parts of a function just to introspect that part, kind of like a debugger but works anywhere automatically.
In the end, I'm not gonna try to convince you to try something if you don't want to try it. But if you're anything like me, finding more productive ways of working is always interesting, even if they end up not being as productive as we first thought. With that in mind, since you don't seem to have much experience with a powerful REPL, I'm sure that if you try it you'll find it helps you be a better programmer.
>you still make your changes in your favorite editor and code is still tracked in version control.
Version control does not track which sequence of code snippets you have (re)-evaluated to get to your current repl state. I'm fully aware of how editor integration with a Lisp repl works in Emacs and (to a lesser extent) Vim.
>have dabbled...In the end, I'm not gonna try to convince you to try something...seem not to have super much experience with a powerful repl...etc.
As with most Lisp advocates, you seem unable to accept that I have tried it and didn't like it. Please stop throwing shade.
> This assumes a lot of things around the code that you're executing. First, you've already setup a test and it's already isolated enough
A test in Go is just a function, so no setup is required. A test is more isolated than an expression evaluated in a REPL.
It seems to me that you don't have much experience of writing tests in a modern development environment, and this is why manually recompiling parts of your codebase using Vim hotkeys strikes you as a good idea :)
Also, you kinda don't need a REPL as much with Go (coming from a Python background and constantly REPL-dev-ing), especially with an IDE. Everything is so clear and type-safe that there is little divide between the architecture in my mind and the code's behavior.
It's not that you don't need a REPL. It's that you don't really get the many benefits other REPL-heavy languages have, like Smalltalk and the many different Lisps. You've got a lot of implicit state and other non-ergonomic constructs in the language that don't lend themselves to living in a REPL for development. I also expanded a bit more on why Golang doesn't fit REPLs in general here: https://news.ycombinator.com/item?id=23053343
There's a whole section in the article about that. C would be a pretty obviously terrible choice so I'm not sure why you would consider that.
Java would be ok but Go is much simpler to get started with and to deploy. In general it is more lightweight and easier to use (unless you need generics, though Java doesn't really do those right either).
One nice thing about Go and C vs. Java is that a program can be (statically) compiled into a single, stand-alone executable that has no external dependencies.
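As a sketch of what that looks like in practice (the file and binary names are arbitrary; CGO_ENABLED=0 forces a pure-Go build with no libc dependency):

```go
// main.go - a self-contained dashboard server with no runtime dependencies.
//
// Build:
//   CGO_ENABLED=0 go build -o monitor .
// The resulting "monitor" binary can be copied to a hospital server and run
// directly, with no JVM, interpreter, or shared libraries to install.
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "monitor up")
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```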
I do not understand why we can't make a dashboard with Streamlit or Dask instead of Go. That would be a faster alternative. Why even spend 3 days when the job can be done in 3 hours?
I do not usually write posts like this, but I would never write an HL7 parser when things like HAPI (https://hapifhir.github.io/hapi-hl7v2/) exist. It is not just a parser; tons of fields in HL7 messages have non-obvious meanings and implications.
That would have taken care of the message parsing, but the team would still need to do the concurrency work, get it debugged, and working. Also Java is not nearly as easy to deploy on-prem as Go.
Regardless of the parser, I agree. Writing your own is dangerous. HL7 is deceptively complex. It leaves a lot of room for vendors and your hospital to customize. If you write your own, you can very easily end up with a system that is boxed in to your hospital's own, or even your department's own, implementation.
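To illustrate the "deceptively" part: at first glance an HL7 v2 message looks like it only needs a couple of string splits, which is exactly what tempts people into rolling their own. A naive sketch over a made-up ORU^R01 fragment:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// A made-up ORU^R01 fragment: segments separated by \r, fields by |.
	msg := "MSH|^~\\&|MONITOR|ICU|||20200501120000||ORU^R01|123|P|2.3\r" +
		"PID|1||PATID1234||DOE^JOHN\r" +
		"OBX|1|NM|8867-4^Heart rate^LN||72|bpm|||||F"

	for _, segment := range strings.Split(msg, "\r") {
		fields := strings.Split(segment, "|")
		if fields[0] == "OBX" {
			// fields[3] is the observation identifier, fields[5] the value, fields[6] the units.
			fmt.Printf("%s = %s %s\n", fields[3], fields[5], fields[6])
		}
	}
	// What this ignores: the delimiters are actually declared in MSH-1/MSH-2 and can
	// differ per sender, escape sequences (\F\, \S\, ...), field repetitions (~),
	// components (^) and subcomponents (&), Z-segments, and the vendor- and
	// site-specific conventions the parent comment describes.
}
```

The existing open source parsers already handle everything that final comment glosses over, which is the argument for not reinventing the wheel.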