A woman who can smell Parkinson's disease (bbc.co.uk)
251 points by dan1234 on Oct 22, 2015 | 114 comments



This reminds me of when I was in the Navy -- at one point I lived with hundreds of other guys in a ship's berthing, and at a later duty station I mentioned to a coworker that I could determine someone's race from their sweat. Of course, they called bullshit on that statement (as I'd imagine most on here would), and we tested it out with 4 guys (1 black[1], 2 white[2], 1 hispanic[3]) who sweated into the same size shirts, and I identified them with 100% accuracy. Another time, I walked into a cubicle, wrinkled my nose, and said "Man! It smells like an old deck of cards in here!". One of the guys standing next to me then pulled a rather well-played-through deck of cards out of his pocket.

One of the reasons people give for the difference in smell is the difference in foods that different cultures eat. I know that's not the case here, as everyone on that ship was eating the exact same food, and their smells were still extremely distinct.

I understand the lack of science applied to my specific anecdotes, but I think there's something to be said for having a keen sense of smell, since people are already geared towards smelling other people's sweat to determine immunocompatibility[4].

[1] A black person's sweat is the one I can identify with absolute certainty, as it can't be mistaken for anything else

[2] A white person's sweat smells like a distinct type of onion to me

[3] I couldn't really have identified his race from just this shirt, but got that one by process of elimination.

[4] https://en.wikipedia.org/wiki/Major_histocompatibility_compl...


" A white person's sweat smells like a distinct type of onion to me..." I can relate. I'm black and african and its a common thing here to describe caucasians as smelling of onions.


Different races definitely smell different; it shouldn't be a surprise that a sweaty shirt carries that smell.


I've heard anecdotally that Japanese people think Americans smell like butter.


It is frowned on now, but in former decades some uncouth Japanese would say "Cheezu nioi" (smells like cheese) when encountering a foreigner.


Though I think that's mostly diet-related - Japanese don't really do dairy.


The thing that most surprised me was the odds that a person in the control group (no Parkinson's) actually did have Parkinson's (detected early, according to the woman) and was later diagnosed as indeed having it.

After all, the prevalence is 0.3% in the general population. The odds that at least one of the six in the control group had Parkinson's are around 2% (even less, considering it's already a filtered audience in a way). Not impossible, but very unlikely, which incidentally makes her insistence that this one particular person had PD all the more interesting.
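
(For concreteness, a minimal sketch of that calculation in Python - the 0.3% prevalence is just the figure cited above, and independence is assumed:)

    # Chance that at least one of six control subjects has Parkinson's,
    # assuming independent draws from a population with 0.3% prevalence.
    prevalence = 0.003
    n_controls = 6
    p_at_least_one = 1 - (1 - prevalence) ** n_controls
    print(round(p_at_least_one, 4))  # ~0.0179, i.e. just under 2%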

There's definitely something there, looking forward to more testing.


Well, calculating the odds for this sort of occurrence would be tricky. You aren't looking for the odds that a random person has Parkinson's (which, as you said, would be 0.3%), but for the odds of a person having UNDIAGNOSED Parkinson's (since this was the control group of people who were supposed NOT to have Parkinson's). Finding the percentage of people who fall into that category is probably trickier... what are the odds of a person who hasn't yet been diagnosed with Parkinson's being diagnosed at some point in their lifetime? Within the next 6 months?


I don't understand the phrasing there, "She was adamant".

If it was a controlled test, why would she need to be "adamant" about anything? Surely she would have merely identified it.


I guess after the test they told her the results (i.e., "you're wrong on 1 person because he's in the control group and doesn't have Parkinson's"), at which point she insisted that he smelled the same as the others, and the journalist wrote that up as 'she was adamant he had Parkinson's'. Or she really did fully believe in her sense of smell as an ability to detect Parkinson's without doubt, in which case it made as much sense for her to say a person has Parkinson's as for you to say the sky is blue because your eyes tell you so, and the journalist simply relayed that. There are various readings of the phrase that make sense.


She needs to be "adamant" because that person was in the control group. He was supposed not to have it, but she was certain he did, and he later got tested for it.

I have a red coin and a green coin. I hide the two coins behind my back and ask you to pick the red coin. You pick my left hand, revealing a red coin. You then tell me to reveal my right hand because it also has a red coin. Even though I told you I only have a red coin and a green coin.

I don't believe you. Why should I? I selected a red coin and a green coin and you had already guessed which hand held the red coin. You stand firm. You're absolutely certain my other hand contains a red coin. I open my right hand and reveal a red coin.

You wouldn't need to be "adamant" if you told me that my right hand held a green coin. I know it was a green coin - I placed it there!

She had to be "adamant" because everyone thought she was surely wrong. After all - that was one of the control members. They believed she was wrong. Turns out she was right.


Maybe the researchers informed her of what they thought the correct results were afterwards and she insisted on her interpretation.


She was adamant that one of the control group members (i.e. no Parkinson's) had Parkinson's.


"Her accuracy was 11 out of 12. We were quite impressed."

Dr Kunath adds: "She got the six Parkinson's but then she was adamant one of the 'control' subjects had Parkinson's.

"But he was in our control group so he didn't have Parkinson's.

"According to him and according to us as well he didn't have Parkinson's.

"But eight months later he informed me that he had been diagnosed with Parkinson's.

"So Joy wasn't correct for 11 out of 12, she was actually 12 out of 12 correct at that time.

"That really impressed us and we had to dig further into this phenomenon."

Reminds me of the story I heard about a doctor who was diagnosing an STD a lot earlier than average. They put two other doctors in the room with him to try to spot what he was seeing, and they identified an eye flutter as a new symptom - I think for syphilis.


Reminds me of this anecdote from Fishing with John:

"I heard a story about this woman that was swimming in the ocean, and dolphins started swimming with her, and the dolphins kept poking her in the chest above her breast. She got scared and they took her out of the water, and she had a big bruise right on the top of her breast. They took her to the doctor to examine her, and they did a mammogram, and found that she had cancer right in that spot."

– Jim Jarmusch, 'Fishing with John'

https://youtu.be/uVa8rj1mm7A?t=912


Reminds me of this anecdote:

In 1986, Peter Davies was on holiday in Kenya after graduating from Louisiana State University. On a hike through the bush, he came across a young bull elephant standing with one leg raised in the air. The elephant seemed distressed, so Peter approached it very carefully. He got down on one knee, inspected the elephant's foot, and found a large piece of wood deeply embedded in it. As carefully and as gently as he could, Peter worked the wood out with his knife, after which the elephant gingerly put down its foot.

The elephant turned to face the man and with a rather curious look on its face, stared at him for several tense moments. Peter stood frozen, thinking of nothing else but being trampled. Eventually the elephant trumpeted loudly, turned and walked away. Peter never forgot that elephant or the events of that day.

Twenty years later, Peter was walking through the Chicago Zoo with his teenaged son. As they approached the elephant enclosure, one of the creatures turned and walked over near where Peter and his son Cameron were standing. The large bull elephant stared at Peter, lifted its front foot off the ground, then put it down. The elephant did that several times, then trumpeted loudly, all the while staring at the man.

Remembering the encounter in 1986, Peter could not help wondering if this was the same elephant. Peter summoned up his courage, climbed over the railing and made his way into the enclosure. He walked right up to the elephant and stared back in wonder. The elephant trumpeted again, wrapped its trunk around one of Peter's legs and slammed him against the railing, killing him instantly.

Probably wasn't the same fucking elephant.

I absolutely loved "Fishing with John" - that doesn't make the cancer-detecting dolphin a true story, though.


Not sure if it is scientifically confirmed, but it is believed that dolphins can see inside a body to some extent with their sonar abilities.


Wow, link for that?


I can't vouch for dolphin sonar resolving fine anatomical detail /in vivo/, but dolphins can recognize visually obscured objects using the ability [0].

[0] http://www.dolphin-institute.org/our_research/dolphin_resear...



Yes, and as the wise narrator of this parody series points out, "What is most remarkable about this story is not that the dolphin knew the woman was sick, but that it knew she didn't know already."


There are a couple of people picking on this research in the thread. This is a preliminary study, just a "bullshit test" to see whether this woman is crazy or not. It's obvious that she's not crazy, because all she's doing is sniffing t-shirts and she doesn't know who they belong to. The sample size is large enough: we get a p-value of no more than p=0.0012, and that's under the cynical assumption that she somehow knew that exactly seven of the twelve samples would be marked as having Parkinson's... something the scientists themselves didn't know.
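
(A quick sketch of where that 0.0012 comes from, assuming she knew to mark exactly seven of the twelve shirts:)

    import math

    # Number of ways to pick which 7 of 12 shirts to label "Parkinson's";
    # only one of those arrangements matches the answer she gave.
    arrangements = math.comb(12, 7)  # 792
    print(1 / arrangements)          # ~0.00126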

So we know that something is going on here. The next steps are to control for potential confounding variables and to determine the sensitivity and specificity of the test, if it's shown that she's not actually detecting some confounding variable. This is where you go double-blind, use larger sample sizes, et cetera, now that you have money. You have money because the preliminary research was promising.

And we're not just getting a good test out of this. If there's an actual chemical that she's smelling, then there's some chemical process that's going on in people with Parkinson's which isn't happening in people without Parkinson's (or vice versa). Tracing these chemical pathways could give us clues to the etiology of the disease, which would be a REALLY BIG DEAL. Or maybe it's just a rabbit hole.


One of the great things about science is that we can determine whether a phenomenon is likely real without having to have the slightest clue what the mechanism might be. Evidence comes first, and this looks like solid evidence, and mechanism can come later; the evidence alone proves the existence of something interesting, and that's what's worth a follow-up.

Hypothesis generation is important, because it helps us design the next experiment, but this experiment is already very interesting.


> One of the great things about science is that we can determine whether a phenomenon is likely real without having to have the slightest clue what the mechanism might be.

You just have to ignore the people who shout "correlation is not causation" at every opportunity, appropriate or not.


Because "correlation is not causation" is just plain wrong. What is should be is "correlation does not imply causation". This is where science comes in. To answer the question, is this correlation because of causation?

The statement is only important to people doing statistical analysis not experimental science.


I think XKCD (as usual) sums it up best, this time in the alt-text of [0]:

"Correlation doesn't imply causation, but it does waggle its eyebrows suggestively and gesture furtively while mouthing 'look over there'."

[0] - https://xkcd.com/552/


>correlation does not imply causation

It sure does. It might not prove causation, or it might not necessitate causation, but it very much implies it.

Somehow people forget that "imply" means: "indicate the truth or existence of (something) by suggestion rather than explicit reference".

In this -- the dictionary and everyday sense -- correlation DOES imply (suggest) causation. It just doesn't secure it.


https://en.wikipedia.org/wiki/Material_conditional aka implication or implies

Definitions differ depending on the context, both definitions are valid.


99.26% correlation is observed between the Divorce Rate in Maine, and the Per Capita Consumption of Margarine

http://www.tylervigen.com/spurious-correlations

You say this implies causation... I have my doubts in any sense of the word implies.

For sure, a correlation could lead to something to investigate, but look at enough data and you will find plenty of correlations that mean nothing. A lot depends on how the correlation is discovered (number of variables involved etc.).


>You say this implies causation... I have my doubts in any sense of the word implies.

Couples divorcing because their partner got fat on margarine?

Besides, that's not the best way to check correlation charts. You first have to remove bias components influencing both curves, e.g. the mere fact that both are rising over time.

When you do that, do they still match each other, e.g. following increases and decreases? I very much doubt so. So this plot doesn't actually show correlation -- just that both "increase" over time in a similar way.

The same kind of "same plot trends" happens or every set of things that e.g. both have an exponential growth curve -- but it's not correlation unless both change consistently as the other changes.


That can't be right. There are so many spurious correlations that obviously imply nothing.

There's got to be some other required factor before correlation can imply causation. Like "if there's reason to believe something is relevant, and there is correlation, then that implies causation. "


But a "reason to believe something is relevant" is a prejudged correlation.


imply, v. ... (transitive, of a person or proposition) to hint; to insinuate; to suggest tacitly and avoid a direct statement


>correlation does not imply causation

Perhaps we should say, “Correlation correlates with causation.”

(Because causation is a subset of correlation.)


Isn't the full version something along the lines of: "correlation does not imply causation, but merely hints at it"?


There should be a name for those people who are quick to shout out some truism (that is actually true) but use it out of context to a clichéd degree.

See also: "You're not Google's customer, you're the product"


See also also: argument from fallacy, aka the "fallacy fallacy": claiming that a conclusion is wrong because an argument supporting it is invalid.


Not disagreeing, but there is a fine line between an argument being wrong and the conclusion being wrong. Are you really right when you use an incorrect argument/process/information to arrive at the right conclusion? There's a whole epistemological debate to be had about the intersection of belief, knowledge and truth.


I know the odor she is talking about. My husband was diagnosed in January. His previous truck and his current vehicle have developed a very distinct odor. I used to describe it as "old musty Portuguese man", but I realize it is the smell she is talking about. My husband has only used deodorant for 27+ years; this came about because anti-perspirant made his entire armpit swell with bright red welts. He also does not sweat much. It is a very sad disease, but I will continue to search for ways to make it easier.


Interesting given the recent link between Parkinson's/Alzheimer's and fungal infections: https://news.ycombinator.com/item?id=10401344


I would not call that a definitive "recent link"; as many in that thread pointed out, that was not the greatest research, and it needs far more work to confirm.


Yeah, more research is needed. I also came across another article, from 2013, "Fungus may cause symptoms of Parkinson's disease", which I found interesting.

They "found that a compound emitted by mold, called 1-octen-3-ol but more commonly known as mushroom alcohol", which "attacked two genes involved in the creation of dopamine".

And they speculate, I think, that this could be related to the drop-off in dopamine production in Parkinson's.

It would be interesting to try some such "volatile organic compounds emitted by fungi" with the lady who can smell Parkinson's, to see if the smell is similar.

http://www.medicalnewstoday.com/articles/268848.php

Also a 1990 article suggesting it's "the fungus, called Nocardia asteroides." I guess the fact that nothing much has happened since 1990 suggests that didn't work out.

http://articles.latimes.com/1990-05-17/news/mn-286_1_nocardi...


Some more truly fascinating Parkinson's news this week about nilotinib, which apparently seems to cure Parkinson's in small trials... http://www.independent.co.uk/life-style/health-and-families/...


Thanks for the link. There is another article in New Scientist with more information: https://www.newscientist.com/article/dn28357-people-with-par...

I'm on the lookout for stuff for my dad, who seems to have it. The data in the New Scientist article looks promising - all 12 of the patients started to improve, some dramatically, and as for side effects, the "team saw no unwanted effects".


Can anybody tell us what the current state of the art in "electronic nose" sensor technology is? Could it potentially be used to identify this smell?


We're pretty good at detecting low concentrations of chemicals in the air if we know exactly which ones to detect. This case suggests there might be some previously unnoticed substances in the air that are indicative of Parkinson's. If so, we'll probably quickly build dedicated sensors.

The thing that makes me wonder though - what's the state of broad-range chemical sensing? Could a chemical like this be found before if we kept taking broad "smell" samples of everyone and cross-correlating them?


My grandpa died of Parkinson's; it's a really slow and heartbreaking disorder. I think early diagnosis would be wonderful. It runs in the family - his two older brothers and his father had it - so he lived a very preventative lifestyle, especially with respect to daily exercise. I think it paid off: he lived to 85, while both his brothers died 20 years younger.


Galen relied on sense of smell for diagnosing illnesses. Aristotle wrote about it. Until the late 20th century doctors often used smell for diagnostics.


It would seem there are some major confounding factors here not detailed in the article.

1) The population doesn't reflect a realistic test - if the overall incidence is 0.3% but the sample had 50% (or more, given her adamant extra hit), then we need to know whether she was expecting more.

2) More importantly, the incidence in men is 1.49x that in women [1], and age also plays a factor. So given that the sample is already skewed towards a higher incidence of positives, gender differences might be factored into her senses - especially since it was her husband who was her training set. With n=12, it would be very easy for the probabilities/priors to be much different than truly random. (E.g., the learning function of her nose might be "men + people over 65", which happened to match up with the test and control groups quite well.) Or it could be tuned into medication used to treat the disease.

Great if true, but I am skeptical.

[1] http://jnnp.bmj.com/content/75/4/637.full


If you look at the Bayesian analyses, and the frequentist analyses assuming she knew how many PD shirts there would be, you'd see that point 1 doesn't matter. We can statistically test this without needing to make her sniff thousands of shirts.

Point 2 is interesting though, and the first thoughtful criticism I've seen in the thread. What if she's both a bit lucky, and also picking up on some correlated marker like age/gender?


The question they did not ask is what makes the body smell. When we eat garlic, we smell like garlic. So the smell must be related to how the food is digested. The researchers should look at the gut flora; in other words, they should do a study similar to this: https://news.ycombinator.com/item?id=10439129 All disease starts in the gut.


Seems like they are going the long way about it, trying to determine a molecule for a test.

Why not just train a dog to smell it?


Smell what?

It seems that _how_ this works is still unclear, if it works at all. Once validated, it seems plausible that a device/trained animal could replicate the results, assuming it's not like this guy: https://en.wikipedia.org/wiki/James_Harrison_(blood_donor)


Well, since it seems likely there's now a scent-based marker, we could train dogs to sniff Parkinson's out, but we'd still want to know more about the molecule and biochemistry for treatment purposes.


Looking forward to hearing more--reminds me of work done to detect cancer by scent with help from dogs:

https://en.wikipedia.org/wiki/Canine_cancer_detection


This also reminds me of the breath test being developed to check ammonia levels for certain kinds of patients:

https://news.ycombinator.com/item?id=10402816


I'm too lazy to do the math, but I'm just going to say a test like this needs more than a sample size of 12 to show significance for detecting something that happens 1 in 500. The woman was also predisposed to thinking that at least some of the 12 had Parkinson's, and the sample selection ensured that at least half did. An actual test would need to allow any possible sample, including samples with zero Parkinson's patients.


You're right... you are too lazy.

Traditional null hypothesis would be something like "she guesses right 50% of the time", which gives a likelihood of 1/4096 that she would get the correct answer. Let's be cynical and suppose that she knew or guessed that there were 5-7 patients with Parkinson's; the likelihood is now 1/2508, still pretty low. Even if she knew there were exactly 7 patients with Parkinson's (quite a cynical null hypothesis!), the likelihood is only 1/792. Hey, that's a p-value of 0.0012!
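
(To make those three numbers reproducible, a sketch assuming 12 shirts and an answer marking exactly 7 of them:)

    import math

    # H0a: independent 50/50 guesses on each of 12 shirts.
    p_independent = 0.5 ** 12                          # 1/4096

    # H0b: she knew the count was 5, 6 or 7, and picked one
    # such arrangement uniformly at random.
    n_arrangements = sum(math.comb(12, k) for k in (5, 6, 7))
    p_range = 1 / n_arrangements                       # 1/2508

    # H0c: she knew exactly 7 shirts should be marked.
    p_exact = 1 / math.comb(12, 7)                     # 1/792 ~ 0.0012

    print(p_independent, p_range, p_exact)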

Yes, p-values suck. But the significance is absolutely there; we would just want to follow this research up with a larger sample size and control more of the variables.


I do love how counterintuitive sample sizes are. I swear I see the "it's intuitively obvious that 45 people is a comically small sample size, the researcher is an idiot and this is useless!" comments all the time. Then somebody does the math and it turns out p ≈ 0


Yeah... and then there are the people who benchmark computer programs, don't even bother measuring the test variance, don't know about warming the cache, and post raw timing data online claiming that X is better than Y. Dunning-Kruger is a harsh mistress.


Let's apply Bayesian reasoning. :P

P(skill): Let's choose a prior of one in a million, or 10^-6, that someone can smell Parkinson's. (Not very well argued, I admit.)

P(data): This exact data's random occurrence probability is 1/2508.

P(data|skill): The probability of the result, given that she has the skill, is 1 (this assumes she never errs).

So we get

    P(skill|data) = P(data|skill) x P(skill) / P(data)

    P(skill|data) = 1 x 10^-6 / (1/2508) = 0.002508
Or about a 0.25 percent probability, based on this test, that she has the skill. Which is low.

Intuitively, I would have expected the calculation to yield a much higher number. The prior was very low though.

So I think the grandparent post has some merit. It can be argued that the claim is so extraordinary (the prior) that even twelve "coin tosses" guessed right in a row is more likely.
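
(The same arithmetic is easy to replay with other priors - a sketch, with the one-in-a-million prior being this comment's assumption, and the evidence term marginalized over both hypotheses rather than taken as a flat 1/2508:)

    # Posterior probability of the skill after a perfect 12/12 result.
    # P(data | no skill) = 1/2508, from the analysis above;
    # P(data | skill) is assumed to be 1 (she never errs).
    def posterior(prior, p_data_no_skill=1 / 2508):
        evidence = prior * 1.0 + (1 - prior) * p_data_no_skill
        return prior * 1.0 / evidence

    for prior in (1e-6, 1e-4, 0.1):
        print(prior, round(posterior(prior), 4))
    # 1e-6 -> ~0.0025; 1e-4 -> ~0.2005; 0.1 -> ~0.9964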


I think your probabilities are way off. We know that some people can already smell some disease factors (fetor hepaticus). Additionally, there are a LOT of genetic smell differences -- cilantro, cucumber, asparagus, cyanide, etc. Knowing these facts drastically affects both P(smell) and P(skill) (which you've combined into P(skill)).

Additionally, the likelihood that a particular person would be so convinced of their ability to smell Parkinson's that they sufficiently convince a few researchers AND THEN predict with perfect accuracy is much smaller than 1/4096. This isn't a random person picked off the street - this is a person specifically claiming to be able to accomplish a feat, and then succeeding on the first try at 1/4096 odds (1/2^12).

Combined, I find this result very interesting and strongly believe in the validity of this research. Granted, just because something is likely doesn't necessarily make it true, but it's certainly a strong impetus for future research, or even for people with a similar talent to come forward.


There's a missing variable here: she claimed that she can tell the difference.

We are interested in the following calculation:

    P(skill|claim&data) = P(claim&data|skill) x P(skill) / P(claim&data)
The upper limit of P(claim&data) is P(claim). If P(claim) were above 10^-4, I think the researchers would have met somebody else with the claim, so 10^-4 is a reasonable pessimistic value.

So, Bayesian reasoning won't save you here.


You're exactly right, if the researchers picked a random person off the street and made them smell shirts. But they didn't.


How many people claim they can smell Parkinson's? How many otherwise truthful, sane people have falsely made this claim? Perhaps a better approach to a prior would take into consideration that she claims she can smell Parkinson's, and her reputation as a truthful, sane person. My prior wouldn't be less than 0.1.


That's a good point. On the other hand, if it were common for people (one in a thousand) to be able to smell Parkinson's, wouldn't somebody have come forward 200 years ago? A doctor or nurse, perhaps. So, at the population level, it should be extremely rare.

Of course, she is not a person picked at random; we should look at the population of people claiming to have done this sort of thing already on their own. This population would be expected to contain a large portion of people with mental issues, and charlatans trying to gain some financial benefit. If she doesn't have a history of either, then the prior jumps up to a very high level already. It is not very likely for a normal person to claim this sort of thing unless they already have good evidence themselves.


So, with no scientific basis at all, you believe there is a 10% chance that people can detect disease by smell - something that has, AFAIK, never been proven for any disease? We don't go to the doctor and have him or her smell our armpits. Do you use a psychic? What we have here is an organization basing its research funding on math tricks that I, ten years out of undergrad, maybe didn't remember exactly as characterized, but was able to see through.


There are so many things wrong in this comment it's hard to figure out where to begin.

"We don't go to the doctor and have him or her smell our armpits" is an argument to authority. Just because a doctor doesn't use test X does not mean that test X is not useful. Every single diagnosis test we use today was, at some point in the past, unknown and unused by doctors. It is scientific research which gave us those tests. And, because you seem to be uninformed about the subject, I'd like to tell you that there are a number of things that a doctor will smell when they diagnose you. Famously, you can diagnose phenylketonuria by smell, and you can also diagnose diabetes by smell.

This comment also seems to reflect a fundamental misunderstanding of the scientific process. The whole point of scientific research--which requires funding, usually--is to figure out if a hypothesis is true or false. If you already know whether your hypothesis is true or false, you're not doing research, you're replicating results.

When you do preliminary research, it's because you don't have very good information about some particular subject. You're complaining about the shaky ground that they base their research funding on--but these scientists did the right thing. Because the hypothesis seemed improbable, they conducted a dirt cheap experiment. It's an experiment that you could have conducted yourself for $20.


> If you already know whether your hypothesis is true or false, you're not doing research

That's exactly what this is. People believe the claim 100% so we do a simple coin flip, and, yes, there it is. It's confirmed! No extraordinary evidence required for this extraordinary claim. Let the research money flow and the BBC reporting commence. If it were my money I would have another lab repeat the experiment.


The prior in this case isn't for people in general, it is for this person.


You may want to revisit your stats, since there are no "math tricks" here. Even if you assume she knew that some sizable portion of the shirts came from PD patients, the p-val is quite low. If you assume every shirt was independent, so there's no knowledge to be gained by knowing how many other shirts she classified as PD, the p-val is ludicrously low. And if you go Bayesian, no matter how you slice the prior probabilities, it's still low.


First, what p-value are you referring to? You aren't even stating what your test is. Please explain how you got to "p-value is quite low".


Look at the various analyses in the threads. Mine was a frequentist analysis based on independent categorizations, which has a p-val of ~.0002. Others have posted more sophisticated frequentist and Bayesian analyses based on priors and on the subject having advance knowledge of the number of PD shirts present.

But no matter what assumptions were made, no p-val was greater than .0013, which is quite low for n=12 with a single test. Our generally accepted threshold is p<.05. She literally had a perfect score.

Also, saying "an actual test would need to allow any possible sample, including those that had zero Parkinson's patients" indicates you don't understand experimental design. Splitting the data into equal groups maximizes your chances of detecting something when effect sizes are small, since sensitivity is related to the minimum group size. (P-values are hurt more by small sample sizes than they are helped by large ones, which is why a 3/9 split is less powerful than a 6/6 split.)
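
(A quick illustration of that last point - a sketch, assuming the sniffer knows the group sizes and we ask how extreme a perfect score is by chance alone:)

    import math

    # Chance of a perfect classification by luck alone, for two
    # different splits of 12 shirts into Parkinson's/control groups.
    for n_pd in (3, 6):
        p = 1 / math.comb(12, n_pd)
        print(f"{n_pd}/{12 - n_pd} split: p = {p:.6f}")
    # 3/9 split: p = 1/220 ~ 0.004545
    # 6/6 split: p = 1/924 ~ 0.001082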


Not only did she have a perfect score, but she "adamantly" corrected a mistake in the control group. That must have some impact on the Bayesian estimate too. Her correction is worth something, and her confidence in providing that correction is worth something.


Well, to be conservative, assume she was told or guessed that 7 of the 12 samples were from patients with Parkinson's, which more than accounts for the "predisposed to thinking at least some" factor - actually it would make more sense to guess six, but that could only make the correct response less likely. Twelve choose seven is 792, so out of the 792 possible responses, she chose the single exactly correct one. No fancy statistics needed - the p-value (defined, according to Wikipedia, as the probability of obtaining an equal or more extreme result; the latter doesn't apply here, as the result is already the most extreme possible for the experiment) is simply 1/792, or about 0.001 (0.1%).


That's ridiculous. That's like saying someone who wins the lottery is clairvoyant with p ≈ 0. I mean, they got the exact right numbers, which had a 1 in 147 million chance, right?


A lot of people play the lottery. Only one person did this experiment. When you test multiple hypotheses (as in your lottery example), you need to perform a correction[2][3].

[1] https://xkcd.com/882/

[2] https://en.wikipedia.org/wiki/Multiple_comparisons_problem

[3] https://en.wikipedia.org/wiki/Bonferroni_correction


That's totally different. They didn't test a whole bunch of people a whole bunch of times. That's what the lottery is. That's different math.


How do I know that? I just assume the guy who won was the only guy who played. If we can forget about all the people who ever claimed to smell disease, then we can forget about all the other lotto players.


What I find amusing here is that the only intellectually honest way out of the logic hole you've dug is to claim that you understand neither the lottery nor the article, which then makes your opinion on either uninformed and not terribly compelling.

Were you just advocating for the devil?


I believe there are some treatments out there aimed at slowing down the progress of the disease. Diagnosing early might mean slowing down the nasty stuff earlier, thus improving the life of the patient. Also, this sounds like something that is not at all part of the known symptoms of the disease. Imagine if the change of smell is linked to something that is a cause of Parkinson's (I'm not suggesting the article even considers that option, but it's not impossible), and imagine that that something is actually "easy" to treat. That would change a lot of things! Yes, it's very unlikely, but even a small increase in the knowledge of a disease is always progress.


Here is a table relating the sample size, sensitivity, and confidence interval for evaluating diagnostic medical tests http://www.nature.com/nrmicro/journal/v8/n12_supp/fig_tab/nr...

As you can see, a sensitivity of 95% (true positive 95% of the time) with a confidence interval of +-4.3% requires 100 positive subjects. Because the natural rate is 1 in 500, we would need 500 * 100 = 50,000 total subjects. So you can see how absolutely ludicrous it is to say a woman sniffs the clothes of 12 subjects and is presumed to have a 100% true positive rate at a 100% confidence level.


You clearly need to take stats again, because you don't understand what you're citing. A 95% confidence interval is not the same thing as a hypothesis test at p<.05. A confidence interval estimates how likely it is that your chosen margin of error will include the true population parameter.

Nobody's claiming she's always 100% accurate. That, too, is a consequence of low sample sizes. But I went ahead and computed the margin of error for you. For n=12, a 95% confidence interval requires a margin of error of 28%, so her true detection ability is, at worst, 72% - which is still higher than anything else we've got.
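
(A sketch of that 28% figure, assuming the usual normal-approximation worst case of p = 0.5:)

    import math

    # Worst-case 95% margin of error for a proportion:
    # z * sqrt(p * (1 - p) / n), maximized at p = 0.5.
    n, z = 12, 1.96
    margin = z * math.sqrt(0.25 / n)
    print(round(margin, 3))  # ~0.283, i.e. about 28%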


The odds of randomly guessing all 12 correctly are 1 in 2^12, or 1 in 4096.


This assumes that 50% of people have Parkinson's and that the people are chosen randomly from the population. Unfortunately, neither of those things is true.


The odds of randomly drawing a royal flush are 1 in 649,000 but it happens all the time.


That's because there are hundreds of thousands or millions of draws occurring. You need to account for what's called the multiple testing problem [1]. Some ways to do this are to control the family-wise error rate (FWER) or the false discovery rate (FDR): you can control the FWER with the Bonferroni correction or the Šidák correction, and you can control the FDR with the Benjamini-Hochberg procedure. You are completely correct that things like this happen by chance, and this is the cause of publication bias in science (since there is a predisposition towards publishing positive results).

1. https://en.wikipedia.org/wiki/Multiple_comparisons_problem
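
(For illustration, a minimal sketch of the two simplest FWER corrections mentioned above, assuming m independent tests at a nominal alpha of 0.05:)

    # Per-test significance thresholds for m = 20 tests.
    m, alpha = 20, 0.05

    bonferroni = alpha / m                  # 0.0025
    sidak = 1 - (1 - alpha) ** (1 / m)      # ~0.00256

    print(bonferroni, sidak)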


No, it doesn't happen all the time. It happens, on average, 1 time in 649,000. That's what those odds mean.


I bet they didn't randomise for brand of deodorant or for diet, which makes this research almost meaningless.


Do you have any reason to think brand of deodorant and diet are strongly correlated with Parkinson's? (Especially pre-diagnosis.)

Ideal would be to check large numbers of undiagnosed people, and then see how many of those she "alerted" on developed the disease, but given the generally-low incidence of Parkinson's I suspect this approach would be impractical. Larger sample sizes than 12 would always be nice, of course.


Are you kidding? That'd be awesome if we could skip these expensive tests and diagnose Parkinson's by deodorant brand preference!


But in the real world, we do less rigorous studies first and follow up with more rigorous studies if our preliminary investigations show promise.


I always wonder if this is a good idea. While getting a false positive is not really a problem, because you're going to do a follow-up experiment, what happens to the things we miss? If you do an experiment that doesn't really have a large enough sample size, or that comes from a biased sample (because it's really an offshoot of a different experiment), and you decide that there is no effect, does that stop others from researching the effect? I suppose since we don't tend to publish negative results, maybe it doesn't matter, but it's always something that has niggled at me.


The trade-off between Type I and Type II error is an inherent problem in research. But false positives are most certainly a problem too. Just look at the issues psychology and biomedicine have been grappling with in terms of replication. Whole careers were wasted on what now seem like false positives.


I can't tell if you're being sarcastic or not... But the "research" is literally just concluding "Hey, there may be a simple way to test for this incredibly hard-to-diagnose disease".


Well, if diet or deodorant causes Parkinson's, that's absolutely meaningful. ;)

The methodology should have been described more in the article, and should be scrutinized, but that doesn't mean it's worthless if she truly diagnosed these people in a (single?) blind experiment.


> Well, if diet or deodorant causes Parkinson's, that's absolutely meaningful. ;)

You jest, but there is actually a suggestion that aluminum in sweat-blocking deodorant causes neurological problems like Parkinson's and Alzheimer's.

(The evidence for this is not strong however.)


This wouldn't explain why she noticed a change in her husband's scent. I would assume he was using the same deodorant before and after the change.


Please avoid gratuitous negativity on Hacker News.

https://news.ycombinator.com/newsguidelines.html


It was a sample of 12 people, so it's already meaningless. This "research" is the justification for a study, not a study in and of itself.


While it's true this is mostly justification for further investigation, correctly categorizing 12/12 people into 2 categories actually has a p-val of .000244 (= 1/2^12), which would easily allow you to reject the null hypothesis of random categorization. The stronger the effect, the fewer samples you need.

We consider n=12 generally underpowered only because many real-world effects are way weaker than the ability this woman demonstrated.


What makes this result meaningless? The probability of her guessing all 12 correctly at random is 0.0002 (i.e., approximately 0.5^12). So it is far more statistically significant than many published results.


The minimal sample size depends on the size of the effect you are trying to measure. Big effects can be validated with smaller sample sizes.


How is it simultaneously meaningless and also a justification for further study?


Meaningless to draw large-scale conclusions from. It's a "this is something we should look at more closely", not a "send this person around the country STAT".


Impressive, but I don't see the point of diagnosing Parkinson's in its early stages when there is no cure available.


Fortunately, everybody else does. Why would we not want to be able to discern life-threatening information sooner?

Not to mention the value in correlating another physiological change with the disease. Maybe research into how this works can get us closer to a cure.


> Why would we not want to be able to discern life-threatening information sooner?

Because there's a good chance you'll receive a treatment that will cause you to die sooner than if you hadn't known about the disease for another decade. Cf. why they pushed back the recommended age for mammograms this week.


That sounds like a problem with the treatment, not a reason to remain ignorant about your own health for longer.


That is a great point. Treatment is often the nail in the coffin; sepsis, for example.


Well a few quick thoughts against that:

1) Wouldn't you want to know, say, a year in advance, before you made all kinds of hospital visits and it was finally confirmed? I bet you would. Perhaps to start preparing for a different life or career, perhaps to move up future plans for things you'll gradually become less able to do. People want to know.

2) Can you imagine that if, for example, you could somehow detect Parkinson's by smell, this would open up all kinds of findings, research and understanding about what Parkinson's is, how it works, how it's detected, etc., that could potentially lead to better treatment or even a cure? I bet you can imagine there's a positive correlation between understanding something better and the ability to treat it in the future.

3) It's simply interesting in and of itself. Curious, isn't it?


How do you know there is no cure when it's diagnosed early? Until today, it wasn't possible to diagnose it early.


If we learn something new about the disease because of this, who's to say it won't be curatively useful?


Looks like you never knew anyone who had a terminal illness.

And regardless, while there isn't a cure per se for Parkinson's, there are treatments which can delay its progression, and the earlier you get them, the more time you have.



