I think the contribution is identifying risk factors beyond the ones previously known?
The most important variables influencing K-ECAN included 4 known risk factors (age, race, sex, BMI) and 9 novel ones (COPD, greater Hct, lower HDL, greater LDL, lower serum CO2, lower Na, lower BUN, lower ALT, and greater WBC).
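Here's roughly how a ranking like that falls out of a boosted-tree model, for anyone curious. This is a purely illustrative Python sketch with synthetic data; the column names echo the list above, but the label, effect sizes, and hyperparameters are all made up:

    # Purely illustrative: synthetic data, made-up effect sizes.
    import numpy as np
    import pandas as pd
    from xgboost import XGBClassifier

    rng = np.random.default_rng(0)
    cols = ["age", "bmi", "hct", "hdl", "ldl", "co2", "na", "bun", "alt", "wbc"]
    X = pd.DataFrame(rng.normal(size=(1000, len(cols))), columns=cols)
    # Fake label loosely driven by a few columns, just so the model has signal.
    y = (0.8 * X["age"] + 0.5 * X["bmi"] - 0.4 * X["hdl"]
         + rng.normal(size=1000) > 0).astype(int)

    model = XGBClassifier(n_estimators=100, max_depth=3, eval_metric="logloss")
    model.fit(X, y)

    # Higher importance = the model split on that variable more usefully.
    ranking = pd.Series(model.feature_importances_, index=cols)
    print(ranking.sort_values(ascending=False))

In the real study the features would be actual labs and diagnoses rather than noise, but the mechanics of reading off a ranking are the same.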
The AI part seems like a buzzword-y add-on?
"We collected prescriptions, laboratory results, and International Classification of Diseases diagnoses 1 to 5 years prior to index. We randomly divided the cohort into training (50%), preliminary validation (25%), and testing (25%). In the preliminary validation set, simple random sampling imputation and extreme gradient boosting machine learning were most accurate. In the test set, we compared the final model, the Kettles Esophageal and Cardia Adenocarcinoma predictioN (K-ECAN) Tool, to HUNT, Kunzmann, and published guidelines."
I'm 100% not knowledgeable enough to parse that out, but I think maybe they ran XGBoost and did some hyperparameter tuning?
It's deep in the 'just statistics' territory of ML/AI by the sounds of it, yeah. This is just a classification problem (cancer / not cancer) with a bunch of available patient data spanning different dimensions, four of them known risk factors, and regression analysis to find others that correlate (well, re-discovering/confirming those four too).
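A minimal sketch of the pipeline that quoted paragraph seems to describe, assuming a toy patient table (the column names, sizes, and hyperparameters are all made up, not the paper's):

    import numpy as np
    import pandas as pd
    from xgboost import XGBClassifier
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)

    # Hypothetical patient table: labs as features, cancer as the label.
    df = pd.DataFrame({
        "age": rng.normal(60, 10, 1000),
        "bmi": rng.normal(28, 5, 1000),
        "hdl": rng.normal(50, 12, 1000),
        "cancer": rng.integers(0, 2, 1000),
    })
    df.loc[rng.random(1000) < 0.1, "hdl"] = np.nan  # simulate missing labs

    # "Simple random sampling imputation": fill each missing value with a
    # random draw from the observed values of the same column.
    observed = df["hdl"].dropna().to_numpy()
    missing = df["hdl"].isna()
    df.loc[missing, "hdl"] = rng.choice(observed, size=missing.sum())

    # 50/25/25 random split: training / preliminary validation / test.
    shuffled = df.sample(frac=1, random_state=0).reset_index(drop=True)
    n = len(shuffled)
    train = shuffled.iloc[:n // 2]
    valid = shuffled.iloc[n // 2:3 * n // 4]
    test = shuffled.iloc[3 * n // 4:]

    feats = ["age", "bmi", "hdl"]
    model = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
    model.fit(train[feats], train["cancer"])

    for name, part in [("valid", valid), ("test", test)]:
        auc = roc_auc_score(part["cancer"], model.predict_proba(part[feats])[:, 1])
        print(name, "AUC:", round(auc, 3))

The hyperparameter tuning and the imputation-method comparison would happen on the validation split; only the final model gets scored once on the held-out test set.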
I imagine at least the title here is a groan for the paper authors.
I wish that reputable reporting on this kind of topic would start putting accuracy next to what it can reportedly do. I remember visiting a machine learning poster session where many of the posters reported results with accuracy as low as 30%.
If a program is able to predict this once or twice, it's not a miracle. If it's able to do so with 60% accuracy, I'd raise some eyebrows. But I'd say it's only a turning point when it can beat the false-positive rates of human doctors. Without an accuracy score, this news is absolutely meaningless.
Maybe I’m misunderstanding, but a test that says nothing when it isn’t sure, yet gives you a true positive when it is sure with very high confidence, would have a lot of value.
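That's essentially selective prediction: the model abstains below a confidence threshold. A toy sketch of the idea (the threshold and probabilities are completely made up):

    # Toy illustration: flag only high-confidence positives, abstain otherwise.
    def triage(prob_cancer: float, threshold: float = 0.95) -> str:
        if prob_cancer >= threshold:
            return "flag for endoscopy"
        return "no opinion"  # stay silent rather than emit a noisy negative

    for p in (0.12, 0.55, 0.97):
        print(p, "->", triage(p))

The trade-off is coverage: the higher the threshold, the fewer patients the test says anything about, but the more trustworthy each flag becomes.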
This probably isn't surprising - there's strong evidence that specific diets (salted foods, alcohol, etc.) and obesity are highly associated with stomach cancers.
Presumably it has learned the connection from the non-cancer-related stomach issues, recorded in medical notes, that people with those diets and genetics get years before the cancer:
Let's imagine stomach cancer were caused by the 'mega burger' sold by Walmart: 6 months after eating said mega burger, you are 100% guaranteed a diagnosis of stomach cancer.
If that were the case, AI would really struggle to predict this 3 years ahead. The AI would have to make a decent prediction of who will shop at Walmart, who will buy the mega burger, and when they will eat it.
So, the fact that AI can make a decent prediction 3 years out suggests that if there is a 'trigger event' causing some/all stomach cancers, that event is either very predictable or happens more than 3 years before diagnosis.
My dad had esophageal cancer and until he was diagnosed with stage 4 with multiple metastases he didn't really have symptoms. He'd had a lower appetite and lost weight in the months leading up to his diagnosis, but thought that was just lifestyle and diet changes finally clicking. Esophageal cancer with multiple metastases is usually fatal at about four months after diagnosis (my dad died at 3.5 months after diagnosis).
But you don't go from no cancer to dead in 4 months; the cancer is there for years. This research is finding signals in blood tests that correlate with cancer long before it has visible effects. The trigger event you're theorizing is just cancer at a lower level.
Most likely the last one. As I understand it, our diagnostic techniques for cancer are very limited and basically only work at the “omg, that’s obviously cancer” phase of progression (i.e., it has to be noticeable enough that we can do a biopsy). Even our imaging techniques are limited, because surgeries can discover that the cancer is worse than originally thought. In some cases we can catch the cancer very early, but even then it’s a mixture of luck, and we don’t actually know how long the cancer was growing for (“early” just means early enough for treatment to be very effective).
At least that’s my external highly amateur understanding of the situation.
Wouldn’t it be closer to eating the mega burger regularly, like every week? In which case if you have that habit, it could easily be predicted that you would continue.
But the actual model seems simpler, measuring something like the obesity from your mega burger habit.
While the paper is paywalled, it doesn’t sound that sophisticated.
The blurb mentions the prediction requires a couple of measurements that aren’t usually taken, like stomach and waist circumference.
From my experience building models (in other domains), the key to a breakthrough is very often new data that previously wasn’t considered because it wasn’t easily available.
I don’t know if I have GERD. But some nights I wake up with intense pain in my lower chest and upper stomach, and if I sit up it goes away after some time. I learned to avoid certain triggers for this - black pepper, lemons, etc. I also have the head end of my bed lifted up a bit, which seems to help. My previous endoscopy a few years back didn’t reveal much, except for my doctor just giving me Pento.
This study doesn’t seem to offer much for someone who might be at risk. All I hear is statistical jargon.
> In the United States and other western countries, a form of esophageal and stomach cancer has risen dramatically over the last five decades. Rates of esophageal adenocarcinoma, or EAC, and gastric cardia adenocarcinoma, or GCA, are both highly fatal.
According to this article:
McColl, K.E.L. What is causing the rising incidence of esophageal adenocarcinoma in the West and will it also happen in the East?. J Gastroenterol 54, 669–673 (2019).
part of the reason for the rising incidence of these cancers is obesity. So what this AI tool does is make it safer to be obese, thereby causing long-term suffering for more of the population. And part of the reason why people are obese is because of technology doing so many things so efficiently for them.
How about, instead of developing more technology, we address the root problems? So far, all these medical solutions I have seen are band-aids that solve some of the problems of technology by creating more technology.
However, the end result is a world where everything is so efficient that we'll simply have to do nothing and grow fat and purposeless...
> So what this AI tool does is make it safer to be obese, thereby causing long-term suffering for more of the population. And part of the reason why people are obese is because of technology doing so many things so efficiently for them.
What? How does that make sense? It's easier to be obese because now you won't die from cancer, therefore people will stay obese? I guess that's technically true but like... the alternative is they get cancer.
If the assumption is "well, they'll get cancer and then they'll have a wake up call and lose weight, assuming they survive" I question how that's better than "they'll get a cancer diagnosis, very likely survive, and still understand the severity of their issue".
I think you're also assuming that overweight people aren't aware of the problem. Anyone who's going to the doctor and is overweight is going to be told flat out that they need to lose the weight to improve their health. Yes, it would be great to have some way to just not be overweight, but I don't see how this research is making things worse.
It is simply because you are not looking far enough. Yes, it is easier to get obese now. Imagine a world millennia ago, in hunter-gatherer times. It was very hard to get obese then, and being out of shape was a serious risk. Over time, technology has made it safer to get out of shape and obese.
The alternative is they get cancer? Well, now more people might stay obese, perhaps just a small fraction, or people will be less motivated. Maybe the effect is hard to calculate for this one invention, but 100 inventions like this mean more and more people will stop caring. Imagine a world where medicine has solved 100% of problems. Then you could get obese very easily without much consequence.
I am not assuming people who are overweight AREN'T aware. Of course they are, and some have a lot of difficulty losing weight. But some do, and now the incentive to do so is imperceptibly lessened.
It just seems like this allows earlier detection of the possibility of getting cancer, which surely means more opportunity to tell the patient to lose weight. By the time you've got stomach cancer, it's a little late.
(Obesity still seems to be the major factor in the new test -- the old one required hip and waist measurements, and the new one does not, which says to me that they've found a bunch of correlates to obesity which don't require taking new measurements.)
Semaglutide and similar glucagon-like peptide-1 (GLP-1) receptor agonists are part business plan, part shim for a humanity whose agricultural infrastructure is structured to pump out garbage nutrition (subsidized carbs and corn syrup, factory farming of beef, etc.).
The Mayo Clinic says obesity (plus inactivity) is either the cause or closely linked. So while we can't say for sure, it seems pretty obvious that almost everyone who stops eating over 2k calories a day and gets 30+ minutes of high-quality exercise will be "cured".
My guess is that in cases like this it comes from collective real-world experience (in AI, biomedical research, or both) of what goes on behind the scenes, leading to skepticism about claims.
For what it's worth it doesn't seem depressing to me, it's more like holding it up to a standard that guarantees real-world utility. It's also depressing to see misleading claims being used to further academic careers at the expense of patients and investors.
I think you're right to question whether skepticism goes too far, but I don't necessarily see skepticism as a negative thing. If it holds up in skeptic corners like HN, it will probably hold up everywhere.