
Have you read the paper? They adjust for damn near everything, including reverse causation. What confounders do you believe the UK Biobank data can't account for?



It doesn't matter if you eliminate 10 million confounds. No matter how good your correlational study is, it cannot establish causation. To do that you need random assignment at the very least.
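
Here's a minimal sketch in R of why, with a toy setup (the names and effect sizes are made up, single unmeasured confounder U, and the exposure has no real effect at all):

    # U drives both who gets exposed and the outcome; the exposure itself
    # does nothing. The naive observational regression still finds an
    # "effect"; randomizing the exposure breaks the link with U.
    set.seed(1)
    n <- 10000
    U <- rnorm(n)
    obs_exposure  <- U + rnorm(n)        # exposure self-selected by U
    rand_exposure <- rnorm(n)            # exposure assigned at random
    outcome1 <- U + rnorm(n)             # outcome depends only on U
    outcome2 <- U + rnorm(n)

    coef(lm(outcome1 ~ obs_exposure))    # biased: a spurious positive "effect"
    coef(lm(outcome2 ~ rand_exposure))   # ~0: randomization removes the bias

No amount of adjusting helps if U was never measured, which is the whole point of random assignment.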


10 million confounds is a lot to eliminate. Randomization only works because it reduces the probability that a spurious association exists. At some point, eliminating enough confounds probably buys you more than randomization does. I'd definitely pay more attention to a study that eliminated 10 million confounds than to most randomized controlled trials.

I agree a randomized controlled trial would be nice now. But even that is potentially fraught. For example, someone else mentioned the possibility that glucosamine reduces joint pain, which increases mobility, which increases longevity. Randomizing wouldn't really control for that sort of scenario. And that's not even getting into preregistration, meta-analysis etc.
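
A rough R sketch of that mediation story (every name and effect size here is invented for illustration): even under perfect randomization, the estimated effect folds in the indirect pain-relief-to-mobility path, so a positive trial result alone wouldn't tell you the mechanism.

    # Hypothetical chain: supplement -> less joint pain -> more mobility ->
    # longer life, with NO direct effect of the supplement on lifespan.
    set.seed(2)
    n <- 10000
    supplement <- rbinom(n, 1, 0.5)               # randomized 50/50
    pain       <- 5 - 2 * supplement + rnorm(n)   # supplement reduces pain
    mobility   <- 10 - pain + rnorm(n)            # less pain, more mobility
    lifespan   <- 70 + 0.5 * mobility + rnorm(n)  # mobility drives longevity

    coef(lm(lifespan ~ supplement))   # ~ +1, entirely via the mobility path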


... and even that gives you the wrong answer if your diagnostic specificity is faulty.

This last is why we keep seeing muddled reports that "antidepressants don't work". Diagnosis of "depression" is a snakepit; the only known way to discern which among at least six conditions you might have, all labeled "depression", is to see what medications work, if any. Imagine trying to conduct a plausible RCT for that.

"Gold standard"? Ha.


Did you reply to the wrong comment?


I replied to "...it cannot establish a causation. To do that you need random assignment at the very least."


Oh, your quoting of "gold standard" really threw me off.


Adjusting for the wrong confounders is one of the ways to create spurious correlations. For example, consider the causal system

    X -> Z <- Y
in which X and Y are independent random coin tosses (heads = 0, tails = 1) and Z is their sum. Even though X and Y are independent, they become correlated once you control for Z, which is a collider (a common effect of X and Y), not a confounder.

If you want to see it yourself, let's simulate this system in R:

    # Simulate 1000 trials in which X and Y are independent binary draws
    # and Z is their sum. (rnorm(1000) < 0.5 gives TRUE with probability
    # ~0.69, a biased coin, but the bias doesn't matter for the point.)
    
    > X <- rnorm(1000) < 0.5
    > Y <- rnorm(1000) < 0.5
    > Z <- X + Y

    # As expected, X and Y are not correlated:
   
    > summary(lm(Y ~ X))

    Call:
    lm(formula = Y ~ X)

    Residuals:
        Min      1Q  Median      3Q     Max 
    -0.6947 -0.6893  0.3053  0.3108  0.3108 

    Coefficients:
                 Estimate Std. Error t value Pr(>|t|)    
    (Intercept)  0.694704   0.025816  26.910   <2e-16 ***
    XTRUE       -0.005455   0.031330  -0.174    0.862    <<< No correlation.
    ---
    Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

    Residual standard error: 0.4625 on 998 degrees of freedom
    Multiple R-squared:  3.038e-05, Adjusted R-squared:  -0.0009716 
    F-statistic: 0.03032 on 1 and 998 DF,  p-value: 0.8618

    # However, if we include Z as a possible confounder, we create a
    # spurious correlation between X and Y:
    
    > summary(lm(Y ~ X + Z))

    Call:
    lm(formula = Y ~ X + Z)

    Residuals:
          Min        1Q    Median        3Q       Max 
    -2.45e-14 -8.40e-18  2.04e-17  2.04e-17  3.59e-15 

    Coefficients:
                  Estimate Std. Error    t value Pr(>|t|)    
    (Intercept) -3.370e-16  6.292e-17 -5.356e+00 1.05e-07 ***
    XTRUE       -1.000e+00  8.241e-17 -1.213e+16  < 2e-16 ***   <<< Oops!
    Z            1.000e+00  5.873e-17  1.703e+16  < 2e-16 ***
    ---
    Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

    Residual standard error: 8.582e-16 on 997 degrees of freedom
    Multiple R-squared:      1,     Adjusted R-squared:      1 
    F-statistic: 1.449e+32 on 2 and 997 DF,  p-value: < 2.2e-16


I do not know why you are simulating this system.

Given random x and y, controlling for z = x + y trivially generates a perfect anticorrelation, since holding z fixed forces y = z - x.

Like, I get that you want to make a point; it just seems weird to give a numerical solution, in five lines of code, to a problem that was already solved theoretically the moment you wrote down the problem statement, and therefore required exactly 0 equations...?

Also, the theoretical solution makes clear that this happens for arbitrary probability distributions; it is not specific to Bernoulli random variables.
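
For instance, the same two regressions in the style of the snippet above, but with Gaussian X and Y (seed chosen arbitrarily):

    # Same collider structure, continuous variables instead of binary ones.
    set.seed(3)
    X <- rnorm(1000); Y <- rnorm(1000); Z <- X + Y
    coef(summary(lm(Y ~ X)))        # X coefficient ~ 0: no association
    coef(summary(lm(Y ~ X + Z)))    # X coefficient -1 once Z is "controlled for"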



