
The perfect is the enemy of the good.

If open sourced, this could well be "good enough" for those who have no other options.




That’s one of the dumbest platitudes ever deployed to deflect criticism, and I wish people would use it correctly.

“This thing has absolutely no evidence of reliability or safety in a critical environment” is not criticizing it for being less-than-perfect. It’s criticizing it for being possibly inferior to the status quo.

Here’s one simple example:

Staff gowning up for routine rounds are much more careful, and safer, than staff rushing into an emergency code. If this thing throws up even the occasional false alarm, its cost to staff (in exposure) could massively outweigh any reduction in rounding requirements.

That’s not “oh, well that’s not perfect.” That’s “oh, that might be worse, masquerading as better.”

“Perfect is the enemy of the good” is a wildly irrelevant comment.


FTA:

> The deadly virus can infect you with a very small mistake. As healthcare workers, our frontline has to wander around the isolation wards to check vital signs of a patient from time to time. This task involves disposing of the protective gear after a visit. All just to check some reading on a device.

> A request from health authorities reached us to develop a remote monitoring system for isolation wards. There are expensive softwares to remotely monitor them. But Sri Lanka might not be that rich to spend such amount of money.

I think you're wrong in this case.

edit: formatting


I think you're misunderstanding the parent's critique... In the software world we often tend to interpret "The perfect is the enemy of the good" as "If it's the only software solution it most certainly must be a good one." But sometimes there are non-software solutions that are even better suited to the problem - engineering-wise, those MUST(!) also be taken into account.

What makes you think the team covered enough edge-cases to be "good enough" software? Do you think the presentation in a single blog post is enough information about a system to determine its quality and reliability?


> If it's the only software solution it most certainly must be a good one.

We have different interpretations. For me, TPITEOTG means:

Choose one: a solution that works well but is clearly not perfect, or no solution at all.

> Do you think the presentation in a single blog post is enough information about a system to determine its quality and reliability?

Epilogue FTA:

> We created this software on a request from healthcare staffs. It is not a commercial application. Even with this system, we strongly suggest doctors to visit their patients, take real measurements.

> As this software was developed fast due to prevailing pandemic situation, we released it with the most urgent feature monitoring. We tested this for long run, with multiple devices as well. So far it worked out well.

> It does not indicate this is perfect, we are working on improvements and fixing bugs until its very stable.

> Thus we have adviced doctors to use this with CAUTION

Many of the complaints in the OP were specious for the situation in play:

> You're storing patient information in postgres. What certifications do you have to assert that the patient data is stored securely, in line with your government guidelines on patient/medical data? There's a damn good reason this is the "holy grail" of information security certifications.

This is monitoring data from dying patients in a third world country. Do you really think that they should have spent a couple of months making sure hackers couldn’t access patients’ vitals before putting it into use?

> You've got critical alerting built into the browser window using JavaScript.

Yes, because that is the language of the UI toolkit they are using.

> This "alerting" is the kind of critical thing that sometimes needs "immediate" intervention, or someone could die. What happens if your browser experiences a JavaScript error blocking processing? And your alerts don't fire?

The alternative appeared to be that the alarms at the bedside might not be noticed anyway, because they might not have the staff to gown up and go into each room frequently enough.
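
To make that failure mode concrete: the blog post doesn't show the alerting code, but a browser-based alert loop can be hardened fairly cheaply against the "a JavaScript error blocks processing" case. The sketch below is purely illustrative; the endpoint, field names, thresholds, and helpers are assumptions, not the project's actual code.

    // Hypothetical sketch (not the project's actual code): keeping a browser
    // alert loop from failing silently when a JavaScript error occurs.
    // The endpoint, field names, and thresholds are illustrative assumptions.
    const HEARTBEAT_MS = 5000;
    let lastTick = Date.now();

    function raiseAlert(message) {
      // Minimal visible fallback; a real UI would add sound and acknowledgement.
      const el = document.createElement('div');
      el.textContent = 'ALERT: ' + message;
      el.style.cssText = 'background:red;color:white;padding:8px;font-weight:bold';
      document.body.prepend(el);
    }

    async function checkVitals() {
      // Assumed endpoint returning [{ bed, spo2, heartRate }, ...]
      const res = await fetch('/api/vitals/latest');
      const readings = await res.json();
      for (const r of readings) {
        if (r.spo2 < 90 || r.heartRate > 130) {
          raiseAlert('bed ' + r.bed + ': SpO2 ' + r.spo2 + ', HR ' + r.heartRate);
        }
      }
      lastTick = Date.now();
    }

    // Main loop: an exception in one pass must not stop future alerting.
    setInterval(() => {
      checkVitals().catch(err => raiseAlert('monitoring error: ' + err));
    }, HEARTBEAT_MS);

    // Independent watchdog: if the main loop stops ticking, say so visibly
    // instead of letting staff assume "no alert means the patient is fine".
    setInterval(() => {
      if (Date.now() - lastTick > 3 * HEARTBEAT_MS) {
        raiseAlert('monitoring loop stalled - check patients manually');
      }
    }, HEARTBEAT_MS);

Even a hardened browser alert is a supplement, not a replacement, which is presumably why the epilogue still tells doctors to visit patients and take real measurements.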

> What happens if they fire too often and you get "alert fatigue" because they're not tuned correctly or in line with the other alerts available at the bedside/nursing station?

What happens if the device in the room fires too often?

> How much testing have you done to correctly assert that you're interpreting the HL7 or other specs correctly? And aren't misinterpreting data for some conditions or types of individual?

They seemed to find that it was accurate enough for the crisis at hand.
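
For context on what "interpreting HL7 correctly" involves: HL7 v2 messages are pipe-delimited segments, with observations carried in OBX segments (value type in OBX-2, identifier in OBX-3, value in OBX-5, units in OBX-6). The sketch below is a hypothetical illustration of that structure, not the project's parser, and it ignores the parts that actually make parsing error-prone (MSH-defined separators, escape sequences, repetitions, non-numeric value types).

    // Hypothetical sketch of HL7 v2 observation (OBX) parsing, to show where
    // misinterpretation can creep in; not the project's actual parser.
    // It assumes default separators and skips escape sequences, repetitions,
    // and MSH-driven encoding, all of which a real parser must handle.
    function parseObservations(hl7Message) {
      const segments = hl7Message.split(/\r\n?|\n/).filter(Boolean);
      const observations = [];
      for (const seg of segments) {
        const fields = seg.split('|');
        if (fields[0] !== 'OBX') continue;
        const id = (fields[3] || '').split('^');   // OBX-3 observation identifier
        observations.push({
          valueType: fields[2],                    // OBX-2, e.g. 'NM' (numeric)
          code: id[0],
          name: id[1],
          value: fields[5],                        // OBX-5 observation value
          units: (fields[6] || '').split('^')[0],  // OBX-6 units
        });
      }
      return observations;
    }

    // Illustrative ORU^R01 fragment (identifiers made up for the example):
    const sample = 'MSH|^~\\&|MONITOR|ICU|||20200401||ORU^R01|1|P|2.6\r' +
                   'OBX|1|NM|SPO2^Oxygen saturation||94|%|||||F\r' +
                   'OBX|2|NM|HR^Heart rate||112|/min|||||F';
    console.log(parseObservations(sample));

Getting even this toy version right depends on knowing which field is which; mapping the wrong field or unit is exactly the kind of error the critic is worried about.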

> The "throw things together quickly" startup mentality might (I stress might!) Be okay where it's the difference between nothing at all and something that can save lives, in a country like Sri Lanka, during a global pandemic, fine.

Whelp, here comes a “not perfect but good enough to use” part.

> <further hand wringing on future concerns irrelevant to the situation under discussion>


You gotta start from something. That is progress. You make improvements over time. Sure, in the worst case people can die; that’s something you have to accept.



