> Screening mammography aims to identify breast cancer at earlier stages of the disease, when treatment can be more successful. Despite the existence of screening programmes worldwide, the interpretation of mammograms is affected by high rates of false positives and false negatives. Here we present an artificial intelligence (AI) system that is capable of surpassing human experts in breast cancer prediction. [...]
> In an independent study of six radiologists, the AI system outperformed all of the human readers: the area under the receiver operating characteristic curve (AUC-ROC) for the AI system was greater than the AUC-ROC for the average radiologist by an absolute margin of 11.5%. We ran a simulation in which the AI system participated in the double-reading process that is used in the UK, and found that the AI system maintained non-inferior performance and reduced the workload of the second reader by 88%. This robust assessment of the AI system paves the way for clinical trials to improve the accuracy and efficiency of breast cancer screening.
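For anyone unfamiliar with the headline metric: AUC-ROC is the probability that a randomly chosen positive case is scored above a randomly chosen negative one, so an 11.5% absolute gap is a large margin. A minimal from-scratch sketch of the metric (my own illustration, not the paper's code):

```python
def auc_roc(labels, scores):
    """AUC-ROC via the rank-sum (Mann-Whitney U) identity:
    AUC = P(score of a random positive > score of a random negative),
    counting ties as 0.5."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# A model that ranks every positive above every negative gets 1.0;
# a coin-flip scorer hovers around 0.5.
print(auc_roc([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9]))  # → 1.0
```

The usual trapezoidal-integration definition over the ROC curve gives the same number; the pairwise form just makes the probabilistic meaning explicit.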
So, there you have it: not AI *or* humans, but both working in conjunction, combining the best of both worlds.
At the very least, that's how civilization will massively and intimately introduce true assistant AI.
It's also somewhat counter-intuitive that the most specialized tasks are the low-hanging fruit; i.e. that what is "difficult" for us, the culmination of years of training and experience (e.g. reading a medical scan), may be "easy" for the machine, given its natural advantages like speed and parallelism.
That space (where machine expertise is cheaper than human expertise) roughly maps to the immense value attributed to the rise of industrial-age narrow AI. Therein lies not a way to replace humans (historically we never did that; we merely destroyed jobs and created ever more) but rather a way to augment ourselves, once more, to whole new levels of performance.
Anything more than this is AGI-level science fiction so far, and there's not a shred of evidence that it's even theoretically possible, let alone a sure thing. Which is not to say that AI safety research isn't extremely important even for the narrow kind (manipulation comes to mind), but we shouldn't go as far as betting future economic growth on its existence. Like fusion or interstellar travel, we just don't know. Yet, and for the foreseeable future, because of scale.
Exactly this. This is where I see AI possibly going: a complementary tool, or second pair of eyes, that speeds up the work of professionals rather than replacing them. I also see this research as a very positive step forward in using AI for good, especially in bringing highly accurate results that can be used as an aid by health professionals.
However, given that this research used a deep learning (DL) based AI system in the medical domain, there are still open questions about the system explaining itself and its internal decision process for the sake of transparency. Most news coverage will ignore this and focus only on the accuracy. DL-based AI systems will remain a concern for both patients and clinicians, and I would expect transparency to be a focus point going forward, welcome as these results are.
Transparency issues aside, I'd say this is a great start to the new decade for AI.
Agreed. The ability of someone (or an AI) to explain their decision-making process is critical in determining whether such a decision has been adequately thought out. If a PhD candidate must go through a viva, surely it is also incumbent on anybody pushing "AI" to be able to "survive" such a viva. Otherwise, we might as well go back to reading entrails, flipping coins, etc.
Note that the system does produce localization. "In addition to producing a classification decision for the entire case, the AI system was designed to highlight specific areas of suspicion for malignancy."
How many years did centaurs reign supreme over pure AI in chess? 5-10 maybe? This "both" stuff is just a temporary stop on the way to meat obsolescence.
Agreed. At some point, doctors will be a completely redundant step in analyzing these scans. Even before then, the AI will reduce the amount of labor needed and partially commoditize some medical professionals.
The only issue is that humans don't seem to do well at jobs where another agent is at least plausibly reliable. Tesla's Autopilot is an example: we tend to disengage our attention pretty quickly.
Another thing I find interesting is that Google was able to train a neural network on retinal images that can reliably distinguish sex from the image alone — something ophthalmologists basically can't do. So not only are these systems approaching human capability on tasks we can do, they can do things we can't. As medical data becomes more freely flowing (presumably) over the next couple of decades, I think we'll find that "AI" becomes even more reliable.
I think the machine's advantage can be summarized as "good at aggregating weak signals". Humans excel at analyzing complex signals but basically can't use signals below a certain strength. Machines have no trouble with weak signals.
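That framing can be made concrete with a toy simulation (my own sketch, not anything from the paper): each individual signal is nearly pure noise, yet averaging thousands of them yields a near-perfect classifier, because the noise cancels while the faint bias does not.

```python
import random

random.seed(0)

def weak_signal(label):
    # One "weak signal": leans toward the true label (0 or 1) by a
    # margin of 0.1, buried under unit Gaussian noise. On its own it
    # classifies barely better than a coin flip.
    return 0.1 * label + random.gauss(0, 1)

def predict(label, n_signals):
    # Aggregate by averaging: the noise shrinks like 1/sqrt(n) while
    # the small bias survives, so many weak signals add up to one
    # strong one. Threshold halfway between the two class means.
    avg = sum(weak_signal(label) for _ in range(n_signals)) / n_signals
    return 1 if avg > 0.05 else 0

def accuracy(n_signals, trials=2000):
    correct = 0
    for _ in range(trials):
        label = random.randint(0, 1)
        correct += predict(label, n_signals) == label
    return correct / trials

print(accuracy(1))     # barely above chance
print(accuracy(2000))  # close to 1.0
```

No single signal here would ever be usable by a human reader, which is the point: the machine's edge is less about any one brilliant judgment and more about summing thousands of tiny ones.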