A.I. vs. M.D. (newyorker.com)
147 points by merrier on April 2, 2017 | 40 comments



I read the book Complications by Atul Gawande a while back and it touched on this issue. He mentioned how a computer was more accurate at detecting heart attacks than an experienced doctor. He also talked about how, if the computer is better than the doctor at reading things like these, it doesn't make much sense for the doctor to have to evaluate/approve the results. Kind of reminds me of James Simons' thought process on quantitative trading at RenTech.

Sorry about the lack of technical knowledge into the CS stuff, first post here and I haven't really put in the time to learn about CS and AI yet.


> Sorry about the lack of technical knowledge into the CS stuff, first post here and I haven't really put in the time to learn about CS and AI yet.

No need to apologize, you made an interesting and relevant comment. The fact that you feel the need to apologize says more about the culture you perceive on this site than about your lack of knowledge in any area.


I posted this comment about 15 minutes after I discovered Hacker News, so I wasn't sure about the culture here compared to places like reddit. It seems to be similar to the more specialized subreddits.


I recommend reading "Do machines actually beat doctors?" by Dr. Luke Oakden-Rayner, who is both a radiology doctor and deep learning researcher.

https://lukeoakdenrayner.wordpress.com/2016/11/27/do-compute...


Hey, thanks for liking the article. I was just looking at my blog stats and they had a bit of a spike when you wrote this.

I'm just getting down to writing some posts about the big question the New Yorker article introduces but doesn't really make any real progress on: will machines actually replace doctors?

Hope you enjoy them :)


>Can’t call me a cynical, turf-protecting doctor now.

I think the overall tone of the article is a bit harsh on the machines, especially considering the coda.



A few decades from now - 30 to 50 years at most - doctors and patients will look back at us and wonder why we were not using algorithms to assist with diagnosis already.


As with many things - fear that the AI will be wrong (no matter how much less frequently this happens than with doctors) and a desire for a human to blame if there is a mistake. If a doctor makes a misdiagnosis and kills the patient as a result, it is somehow seen as less bad than if an AI does the same. It's the one thing I don't understand about most of society.

Same with self-driving cars. Self-driving cars could be proven to be 100,000% safer than human drivers, but until it is legally mandated people will prefer humans behind the wheel because "what if the self-driving car runs a red light and kills someone?", ignoring the thousands of humans who run red lights and kill people.

>why we were not using algorithms to assist with diagnosis already.

On the bright side - we increasingly are! I think it's more budgeting and legal issues that keep it from being more widespread.


Maybe it's not so much that a human isn't to blame in the case of an algorithm classifying something incorrectly, but that an algorithm might make mistakes that a human wouldn't.

I've read a very interesting article [0] about autonomous driving recently, a discussion of the ethics and social obstacles in the way of adopting autonomous cars. The author makes the case that while algorithms might perform a task more safely than humans overall, they will inevitably make mistakes. And those mistakes may well be ones that would be trivial for a human to prevent.

Algorithms making lethal mistakes that a human professional would be extremely unlikely to make will be the hardest part to swallow if we adopt automation in critical roles - even if the total number of deaths can be reduced that way.

Even if they are safe across the board, statistically proving to people that autonomous machines perform correctly and aren't prone to obvious mistakes would take decades and billions of hours of operation - lethal accidents due to human error are such rare events that you need enormous exposure before you can say anything about safety relative to human performance. So we are faced with a chicken-and-egg problem in that regard.

[0] http://spectrum.ieee.org/cars-that-think/transportation/self...
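
To make the scale concrete, here's a rough back-of-envelope in Python. The human fatality rate is a ballpark figure I'm supplying (roughly the US average of ~1.1 deaths per 100 million vehicle-miles), and the "rule of three" gives the exposure needed just to match that baseline with 95% confidence, assuming zero observed fatalities:

```python
# Ballpark US human baseline: ~1.1 fatalities per 100 million vehicle-miles.
human_rate = 1.1e-8  # fatalities per mile

# Rule of three: after n miles with zero fatalities, the 95% upper
# confidence bound on the true rate is about 3/n (since e^(-3) < 0.05).
# To claim parity with humans we need 3/n <= human_rate:
miles_needed = 3 / human_rate
hours_needed = miles_needed / 40  # assuming a 40 mph average speed

print(f"{miles_needed:.1e} fatality-free miles")    # ~2.7e8 miles
print(f"{hours_needed:.1e} hours behind the wheel")  # ~6.8e6 hours
```

And that is only parity - demonstrating, say, a 10x improvement multiplies the required exposure accordingly, which is why this takes fleet-scale data collection.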


I believe you're right about the blame issues. I came here to post about that.

One other wrinkle... Having an algorithm strictly following rules and procedures to a T means that we're taking moral/human decision making out of the mix. This matters in situations where two different diagnoses may have the same treatment, but different outcomes on the patient's livelihood aside from their health. E.g., a human doctor may note that a tumor is just big enough to be considered cancerous, but marks it down as "pre-cancerous" in their official diagnosis. The treatment is still the same, but it keeps the more serious diagnosis off the patient's record, which can help them when dealing with insurance.

One follow-up argument to this may be that humans would have the final say in a given diagnosis, but I bet this wouldn't always be the case, particularly in lower income scenarios.

The final question to me is whether the risk of losing moral agency outweighs the risk of incorrect diagnosis. I think there's going to be variance in the machine/human accuracy rates, and in the need for a deeper understanding of the human condition.


Apples and oranges. We do have driving assistance; fully autonomous vehicles are entirely different. The low-hanging fruit is being picked, and not wanting software to call the shots every time is a valid stance imo.


I think aviation is a good analogy.

Most of the automation in a modern airliner, for example, isn't there to make the plane "fly itself" from point A to point B -- it's there to do things machines are good at in order to reduce the workload of the human crew, so they have more time/energy to focus on the things humans are good at.

Flight plans, for example, are still decided on by humans, even if a programmed computer carries out many aspects of the plan once decided upon. Same for en-route deviation from the prior plan in order to either avoid bad weather or reach good weather (where "weather" is kind of a broad term, and includes things like winds which would help/hinder an airliner). Although "autoland" is a feature, it's not something that's used every time and can be moderately complex to set up, since a bunch of factors have to be decided/input by a human. And of course the final decision to land or go around is made by a human.

Which drives home the fact that automation, in aviation, is a partnership between humans and machines, with humans benefiting and being more productive from offloading some work.

A lot of fields should be looking at that as a model.


> automation, in aviation, is a partnership between humans and machines

as a lay person who's visited too many people in hospitals, this sounds like a reasonable description of the modern hospital room as well.

* bp and pulse monitors do a job which doctors and nurses used to do manually

* non invasive oxygen saturation level monitors do a job which could not be done easily in the past

* automated IV drip monitors do a job nurses used to have to do manually

* wrist bands with bar codes and an accompanying scanner coupled to a database serve as a "second opinion" or double check on the medication a patient is being given

etc
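
A toy sketch of that last double-check, with made-up identifiers and data (the real systems obviously sit on top of hospital pharmacy databases):

```python
# Orders on file for each patient wristband (illustrative data only).
ORDERS = {"patient-4471": {"amoxicillin-500mg", "lisinopril-10mg"}}

def verify_medication(patient_barcode: str, med_barcode: str) -> bool:
    """Return True only if this medication was ordered for this patient."""
    return med_barcode in ORDERS.get(patient_barcode, set())

# The scanner's "second opinion": a mismatch blocks administration.
assert verify_medication("patient-4471", "amoxicillin-500mg")
assert not verify_medication("patient-4471", "warfarin-5mg")
```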


And yet there are cases where humans interfering with the automation caused a crash.


There are also cases where humans had to completely disable automation to avoid a crash. The systems are not always redundant enough to detect bad information when just two or three inputs are affected.


Yes, humans refusing to trust or work with the automation can cause a disaster. That's not an argument for going fully automated or fully human, it's an argument for developing better cooperation between the humans and the automation.


And in many cases it has been poor UI design preventing the humans from recognising or understanding what the machine is telling them, particularly when they are under the extreme stress of an undiagnosed emergency situation.


With medical issues, I'd expect there will come a point at which computers are so much more accurate that humans wouldn't even be able to tell whether the computer was right.

An analogy is instant replay in football. There are some plays that are just so close, that they can't overrule the field judge - and wouldn't be able to even if the field judge made the opposite ruling!

That being said, you're betting on humans to be irrational, and that is always a good bet unfortunately.


For sure. In 15 years or so, we will wonder why we ALLOWED humans to drive vehicles.


Human liberty is the default, so we'd have to opt out of it, which I will never support. Humans will still do the majority of driving in 20 years.


You may not have the choice in the future. When was the last time you chose the primary functions in an individual auto? Billionaires excluded. Custom tertiary features humans may choose, yes... but primary functions, no. Ahhh, and freedom from the mundane/routine and headaches in traffic may TRUMP (haha) the freedom to choose (your type of liberty), I theorize. 77M Millennials would also welcome freedom from driving. Plus 75M Baby Boomers, if they wouldn't welcome the freedom, may be forced to give driving up due to health/aging. GenX'ers - at least a sample of 1, moi - would love the ability to do other things while being driven. And I drive the "ultimate driving machine." https://www.linkedin.com/pulse/unsafe-any-speed-humans-gisel... 20 years is too long in my prediction, mostly due to the convergence of demographic realities and exponential advances in AI self-driving.


If you take surgeons, they are trained for one thing. I know a bariatric surgeon who is an expert in medical weight loss but he could not define the word ketogenic. He told me I was foolish for going on a high fat diet (despite the fact that he's overweight).

Also my oncologist didn't know if some vitamins could actually help my cancer, and he was skeptical that my diet could affect my IGF1 levels. He couldn't tell me how I got cancer and couldn't tell me any way to prevent it.

It makes me think there's a potential future problem of establishment AIs vs. contrarian AIs. How would an AI determine what is best given contradictory information?


Both your doctors were correct: a high-fat diet is unhealthy and vitamins don't help cancer (unless you mean help it grow).


It's almost like our understanding of cancer is incomplete.


Hopefully more companies are doing this.

My research thesis is in this area, and I'm hoping for a nice career doing modelling to predict illness. I'd like to have a well-paying job doing something that actually helps people instead of just making somebody else richer.

Also, it doesn't matter so much if the AI gets it wrong: the algorithm gives, say, 80% accuracy and tells you, based on your genetic makeup, whether you should take the surgery route or the chemotherapy route.

1. It's to assist the doctor. Also, perhaps it can be cheaper at diagnosing than a doctor. If, say, the algorithm says there's an 80% chance you have cancer, then you should go to your doctor and have it checked. If it says no, then don't go. You have another tool to evaluate your health, keeping in mind that it's a tool and an aid, not a replacement for a doctor.

The only concerns are genetic discrimination, which the GINA law addresses. Also, medical algorithms usually err on the side of false positives: better to wrongly tell you that you have cancer than to say you don't when in reality you do.
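
A minimal sketch of that false-positive bias, using scikit-learn on synthetic data (the numbers and the 99% sensitivity target are my own illustration, not any particular product's): instead of the default 0.5 cutoff, the decision threshold is lowered until almost no positives are missed, trading extra false alarms for fewer missed cancers.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Imbalanced synthetic data standing in for a screening population.
X, y = make_classification(n_samples=5000, weights=[0.9], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
probs = model.predict_proba(X_te)[:, 1]

# Lower the cutoff until sensitivity reaches ~99% (in a real system you
# would pick this threshold on a separate validation split).
threshold = np.quantile(probs[y_te == 1], 0.01)
preds = probs >= threshold

sensitivity = preds[y_te == 1].mean()
false_positive_rate = preds[y_te == 0].mean()
print(f"threshold={threshold:.3f}  sensitivity={sensitivity:.1%}  "
      f"false-positive rate={false_positive_rate:.1%}")
```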

Anybody know of any companies that do this? Please send them my way. ^__^


I am doing this - coming up on three years full-time soon. The company is at http://dochuddle.com/

So here is the warning: medical startups are hard, really hard.

- It is difficult to apply research done on 32x32 cat icons to 15-megapixel X-rays (see the sketch below).
- It is difficult to get data without year-long contract efforts.
- It is very difficult/expensive, if not impossible, to use cloud resources due to HIPAA and localization rules, so you need to build your own on-premise GPU grids like we did.
- It is difficult doing most medical things in the USA unless you are on the revenue side (e.g., collections, increasing yield, etc.) -- we got so many raw-deal partnership offers in the US that we went overseas to trial our product.
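
On that first point, a toy illustration of the usual workaround (my own sketch, not DocHuddle's actual pipeline): rather than downsampling a huge X-ray, which would destroy small findings, it gets tiled into patches that each fit a conventional model.

```python
import numpy as np

def tile_image(img: np.ndarray, patch: int = 256, stride: int = 256):
    """Yield (row, col, tile) patches covering a large 2-D image
    (edge remainders are ignored for brevity)."""
    h, w = img.shape
    for r in range(0, h - patch + 1, stride):
        for c in range(0, w - patch + 1, stride):
            yield r, c, img[r:r + patch, c:c + patch]

# A ~15-megapixel grayscale X-ray: 3000 x 5000 pixels of noise as a stand-in.
xray = np.random.rand(3000, 5000).astype(np.float32)
patches = list(tile_image(xray))
print(len(patches), "patches of 256x256 to run the model on")  # 209 patches
```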

The entire medical system in the US is corrupt from the ground up, geared to maximize revenue with minimal lawsuit risk. Patient care rarely enters the conversation internally. I thought financial services was bad (my past career), but at least the metrics were all agreed upon by everyone. In medicine, everyone has an agenda, often diametrically opposed to other parties.


Keep in mind that TuringNYC did not even MENTION the main issue that trips up most startups...

i.e., they are unfamiliar with how to secure (and KEEP) FDA approval for their products. It actually comes as a surprise to many of them that they even NEED FDA approval. Then they are further shocked by how invasive the process is, and how long it can take. And don't even get me started about the look on their faces when they realize that, despite all of that, you're still held CRIMINALLY accountable for any bugs in your software. (Stay away from anything that might injure anyone if you are unsure of the ability of your process to reliably prevent bugs. Things like RTP are no-go areas for all but the most meticulous medical software developers.)

Don't get into medical software unless you have an extremely long, and very well funded, runway. This is not an industry where 2 guys in a garage can innovate and side-step the regulations. And whatever you do, stay away from anything that could actually harm the patient if it is wrong.


Very good point @bilbo0s, and thank you for bringing this up. I didn't mention it as we focused on a non-clinical diagnostic (thus not subject to the FDA) and decided to entirely avoid the US market (focusing instead on less risk-averse economies open to innovation and aligned via ACO-style systems).

FDA is definitely the biggest hurdle of them all. I'd shudder if we had to deal with that 600 pound gorilla.


China Creates Special Economic Zone for Medical Tourism: https://news.ycombinator.com/item?id=14022383


This is probably one of the most accurate portrayals of medical systems in countries with highly developed and sophisticated legal systems.

It's sad but one of those things that is only apparent once you're neck deep in it.

The people who want to truly help patients are swept aside by those who are in positions to benefit themselves or their friends at the expense of others, including patients. Think administrators, bureaucrats, regulators.

When this topic was raised the other day, I jokingly said that machines could never replace doctors.

The reason is that you can't sue a machine that makes a mistake. Poor medical outcomes would be treated like random flies getting hit by cars on a highway - nobody gets blamed.

The powers that be will always insist a doctor be there to sign off. There needs to be a fall guy.


@drchiu, it is uncertain how the system will evolve, but one way to solve the fall-guy problem might be to purchase insurance on these systems and have them bonded. It is all new territory, so no one really knows.

Self-driving cars will face the same challenge w/r/t liability insurance when there isn't a driver. It might be an insured & bonded set of algos and systems.


I recently talked to somebody from https://www.lumiata.com/

For my thesis I'm trying to design better medical software. Good luck with yours!


If you give a flying (sic) about solving medical issues, and actually know a thing or 7 about curing disease, check out Kaggle's competitions. The cervical cancer competition is of particular interest.

If you happen to disagree, step away from your doubt for a sec and listen: we can, and will, cure these diseases with AI. That's the whole point. We imbue our intelligence into a machine and voila, the machine does what we ask it to do with greater expediency and more acumen than an individual can manage alone. We don't garden with machines and say, "wow, this took fifteen thousand people to build; should we use it so we can do other stuff instead?"

Nope. We say, thank you John Deere, I'll take 2. While we are at it, let's look a little deeper and think about how civilization functions in general. Is that not what we do? We connect, we decide to work together, and next thing you know, we improve our quality of life: otherwise known as a corporation (or conglomerate, if you want another version, ya heard?).

So, is AI good for medicine, yes: it is.

Here's a free thought to prove my point. Using my intellect today I deduced that anxiety is absurdity masked as truth. Imbue that into some AI, you'll heal the minds of the world, ok?

Peas in the ground, head in the air, thoughts with the 1. -Love


I think it's really useful to see MDs' views of such changes here: https://www.reddit.com/r/medicine/comments/61sgfw/ai_versus_.... Obviously not necessarily representative of how all MDs think, but it's interesting to see some of the fears/concerns they have.


Fully automating diagnosis goes too far. Deep learning will largely be used for decision support, i.e. surfacing and ranking possible diagnoses for a medical professional to choose from.

Most machine-learning companies have misunderstood the role of radiologists. Their first job is not diagnosis; it is to find the right domain, the right view of the data, from which to formulate a diagnosis. So the problem is to map from one set of parameters - say, how an X-ray is taken - to another, more promising set of parameters for the same X-ray, to get the view they need. It's not just "see an image, find a tumor."
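
In that spirit, a minimal sketch of what "decision support" output might look like, with placeholder condition names and a made-up score vector standing in for a real model's logits:

```python
import numpy as np

CONDITIONS = ["nodule", "pneumothorax", "effusion", "cardiomegaly", "normal"]

def rank_differential(logits, top_k=3):
    """Turn raw model scores into a ranked differential with probabilities."""
    logits = np.asarray(logits, dtype=float)
    probs = np.exp(logits - logits.max())   # numerically stable softmax
    probs /= probs.sum()
    order = np.argsort(probs)[::-1][:top_k]
    return [(CONDITIONS[i], float(probs[i])) for i in order]

# Stand-in for a CNN's output on one study; the clinician makes the call.
for dx, p in rank_differential([2.1, 0.3, 1.7, -0.5, 0.9]):
    print(f"{dx}: {p:.0%}")
```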


The Khosla paper I've read pretty much covers this: http://www.khoslaventures.com/20-percent-doctor-included-spe...


Is there any data openly available for developing this kind of system?


Kaggle has a few datasets, including, I believe, mammograms and proteomes if you want to do breast cancer specifically.

If you're not in it for the money, or if just fame is enough, I'd suggest going upriver: individual diagnostic data is rare because of medical privacy and market forces, but there is boundless open data in, for example, genomics and proteomics. If you search for [bioinformatics competition], you should find a nice selection of opportunities with good data availability and clearly defined objectives. ML is slowly revolutionising this field, although it's a good idea to pay attention to what happened before - there were some seriously smart people working on these problems for a long time, and they found some rather clever ways to extract the most value from data with the tools available at the time.


I recently found http://www.cancerimagingarchive.net/ which actually has some reasonably large collections. There's a public 1,400-person breast cancer dataset, and instructions for how to apply for access to the 26,254-person National Lung Screening Trial dataset.
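
Once you have access, the files are DICOM. A minimal sketch for poking at one, assuming the pydicom package and a local file path of your own:

```python
import pydicom

# Load one image of a downloaded study (placeholder path).
ds = pydicom.dcmread("path/to/scan.dcm")
print(ds.Modality, ds.Rows, "x", ds.Columns)  # modality and image dimensions

# The raw pixel data as a NumPy array, ready for preprocessing.
pixels = ds.pixel_array
print(pixels.dtype, pixels.shape)
```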



