
As with many things - fear that the AI will be wrong (no matter how much less frequently that happens than with doctors) and a desire for a human to blame if there is a mistake. A doctor making a misdiagnosis that kills the patient is somehow treated as less bad than an AI making a misdiagnosis that kills the patient. It's one of those things about society I just don't understand.

Same with self driving cars. Self driving cars could be proven to be 100,000% safer than human drivers - but until it is legally mandated, people will prefer humans behind the wheel because "what if the self driving car runs a red light and kills someone?", ignoring the hundreds of thousands of humans who run red lights and kill people.

>why we were not using algorithms to assist with diagnosis already.

On the bright side - we increasingly are! I think it's budgeting and legal issues, more than anything else, that keep it from being more widespread.




Maybe it's not so much that a human isn't to blame in the case of an algorithm classifying something incorrectly, but that an algorithm might make mistakes that a human wouldn't.

I recently read a very interesting article [0] about autonomous driving, a discussion of the ethical and social obstacles in the way of adopting autonomous cars. The author makes the case that while algorithms might perform a task more safely than humans overall, they will inevitably make mistakes - and those mistakes may well be ones that would be trivial for a human to prevent.

Algorithms making lethal mistakes that a human professional would be extremely unlikely to make will be the hardest part to swallow if we adopt automation in critical roles - even if the total number of deaths can be reduced that way.

Even if they are safe across the board, statistically proving to people that autonomous machines perform correctly and aren't prone to obvious mistakes would take decades and billions of hours of operation, because lethal accidents due to human error are relatively rare events - so any statement about their safety relative to human performance needs an enormous sample. We are faced with a chicken-and-egg problem in that regard.
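To put rough numbers on that (my own back-of-the-envelope assumptions, not figures from the article): if human drivers average roughly one fatality per hundred million miles, and we use the "rule of three" approximation for a 95% upper confidence bound after observing zero fatalities, a hypothetical test fleet needs hundreds of millions of fatality-free miles before the statistics say anything at all:

    # Back-of-the-envelope sketch; every figure below is an assumption,
    # not data from the linked article.
    human_fatality_rate = 1 / 100_000_000   # assumed: ~1 fatality per 1e8 miles

    # "Rule of three": zero fatalities in N miles gives an approximate
    # 95% upper confidence bound on the true rate of 3 / N.
    required_miles = 3 / human_fatality_rate  # miles needed just to match the human rate

    fleet_size = 1_000                   # hypothetical test fleet
    miles_per_car_per_year = 15_000      # hypothetical annual mileage per car
    years_needed = required_miles / (fleet_size * miles_per_car_per_year)

    print(f"{required_miles:,.0f} fatality-free miles needed")  # 300,000,000
    print(f"about {years_needed:.0f} years for that fleet")     # about 20

And that only shows parity; demonstrating a rate meaningfully below the human one pushes the requirement higher still.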

[0] http://spectrum.ieee.org/cars-that-think/transportation/self...


I believe you're right about the blame issues. I came here to post about that.

One other wrinkle... Having an algorithm strictly follow rules and procedures to the letter means we're taking moral/human decision making out of the mix. This matters in situations where two different diagnoses may have the same treatment, but different consequences for the patient's livelihood aside from their health. E.g., a human doctor may note that a tumor is just big enough to be considered cancerous, but mark it down as "pre-cancerous" in their official diagnosis. The treatment is still the same, but it keeps the more serious diagnosis off the patient's record, which can help them when dealing with insurance.

One follow-up argument to this may be that humans would have the final say in a given diagnosis, but I bet this wouldn't always be the case, particularly in lower income scenarios.

The final question to me is whether the risk of losing that moral agency outweighs the risk of incorrect diagnosis. I think there's going to be variance in machine vs. human accuracy rates, along with a continued need for a deeper understanding of the human condition.


Apples and oranges. We do have driving assistance; fully autonomous vehicles are entirely different. The low-hanging fruit is being picked, and not wanting software to call the shots every time is a valid stance imo.


I think aviation is a good analogy.

Most of the automation in a modern airliner, for example, isn't there to make the plane "fly itself" from point A to point B -- it's there to do things machines are good at in order to reduce the workload of the human crew, so they have more time/energy to focus on the things humans are good at.

Flight plans, for example, are still decided on by humans, even if a programmed computer carries out many aspects of the plan once decided upon. Same for en-route deviation from the prior plan in order to either avoid bad weather or reach good weather (where "weather" is kind of a broad term, and includes things like winds which would help/hinder an airliner). Although "autoland" is a feature, it's not something that's used every time and can be moderately complex to set up, since a bunch of factors have to be decided/input by a human. And of course the final decision to land or go around is made by a human.

Which drives home the fact that automation, in aviation, is a partnership between humans and machines, with humans benefiting and being more productive from offloading some work.

A lot of fields should be looking at that as a model.


> automation, in aviation, is a partnership between humans and machines

As a lay person who's visited too many people in hospitals, I'd say this sounds like a reasonable description of the modern hospital room as well.

* BP and pulse monitors do a job which doctors and nurses used to do manually

* non invasive oxygen saturation level monitors do a job which could not be done easily in the past

* automated IV drip monitors do a job nurses used to have to do manually

* wrist bands with bar codes and an accompanying scanner coupled to a database serve as a "second opinion" or double check on the medication a patient is being given

etc


And yet there are cases where humans interfering with the automation caused a crash.


There are also cases where humans had to completely disable automation to avoid a crash. The systems are not always redundant enough to detect bad information when just two or three inputs are affected.


Yes, humans refusing to trust or work with the automation can cause a disaster. That's not an argument for going fully automated or fully human, it's an argument for developing better cooperation between the humans and the automation.


And in many cases it has been poor UI design preventing the humans from recognising or understanding what the machine is telling them, particularly when they are under the extreme stress of an undiagnosed emergency situation.


With medical issues, I'd expect there would be a point at which computers are so much more accurate that humans wouldn't even be able to tell when the computer got it wrong.

An analogy is instant replay in football. There are some plays that are just so close that the replay officials can't overrule the field judge - and couldn't even if the field judge had made the opposite ruling!

That being said, you're betting on humans to be irrational, and that is always a good bet unfortunately.



