I could use a TLDR on this one ...



We can't easily validate models of reality that are complex/stochastic, because any experimental result could be a statistical anomaly. In particular this is a problem for models of long-term and/or global phenomena like climate or public health, since we only get one "experimental run", so to speak. Therefore:

> Confronting the problems of complexity, validation, and model uncertainty, I have previously identified four options for moving ahead: (1) dispense with modeling complex systems that cannot be validated; (2) model complex systems and pretend they are validated; (3) model complex systems, admit that the models are not validated, use them pragmatically where possible, and be extremely cautious when interpreting them; (4) strive to develop a new and perhaps weaker scientific epistemology.
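To make the "one experimental run" problem concrete, here's a toy sketch in Python (the model forms and all parameters are invented purely for illustration): two stochastic models with different long-run trends can produce single trajectories that overlap, so one realization can't reliably tell them apart.

    # Toy illustration: a single realization of a noisy process can't
    # distinguish a model with a slight trend from one with none.
    # All parameters here are made up for illustration.
    import random

    random.seed(1)

    def run(trend, noise=1.0, steps=50):
        x, path = 0.0, []
        for _ in range(steps):
            x += trend + random.gauss(0, noise)
            path.append(x)
        return path

    a = run(trend=0.05)  # model A: slight upward drift
    b = run(trend=0.0)   # model B: pure noise, no drift
    # After 50 steps the expected gap between endpoints is 2.5, but each
    # endpoint's noise has standard deviation ~7, so single runs often overlap.
    print(a[-1], b[-1])

Averaging many runs would separate the two models, but with climate or public health we only ever observe the one run.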


I haven't read the article, but #4 is the interesting item on that list. #3 is the Goldilocks option: it sounds just right because it sits between the extremes, but it's not realistic, because "be extremely cautious when interpreting them" demands more than the human mind is capable of.


It may be possible for a person to succeed with #3, but I have a hard time believing it could work in a community of people who disagree.

For #4, the closest thing I can imagine is a utilitarian approach, which of course I’m going to prefer as an engineer, and which (I think) ultimately reduces to #1. That is a tough problem I don’t expect to see solved in my lifetime, but I’d be happy to be proven wrong.


On reflection, option 1 includes something like Taleb's philosophy. At first I wrote it off as leading to arguments like "no one knows, so let's do what I, an 'expert', say", which I hate. But it's written generically enough to include rules like minimax, where you don't need model estimates to make a decision. This rule is very conservative, though.
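For what it's worth, a minimax rule is simple enough to show in a few lines of Python (the actions, scenarios, and loss numbers below are invented purely for illustration): all you need is a loss for each action under each scenario, not probabilities over the scenarios.

    # Minimax: pick the action whose worst-case loss is smallest.
    # No probability estimates over scenarios are required, only the
    # loss table. All numbers here are made up for illustration.
    losses = {
        "intervene": {"mild": 2, "severe": 3},
        "wait":      {"mild": 0, "severe": 10},
    }
    best = min(losses, key=lambda a: max(losses[a].values()))
    print(best)  # "intervene": its worst case (3) beats "wait"'s (10)

That conservatism shows up even in this toy table: "wait" is better in the mild scenario, but minimax ignores that because it only looks at worst cases.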

I think a fair number of people succeed at option 3, and I can think of a few who do it well, but as you indicated, they're individuals. The US Department of Energy is probably the closest I've seen an organization come to succeeding at this. (Or at least some DOE labs; I can say from personal experience that groups at both Los Alamos and Sandia take this fairly seriously, though I don't think it has fully permeated their culture.) I'll have to think more about this.

Option 3 is my basic approach, and I'd like to think I succeed at it, e.g.: https://news.ycombinator.com/item?id=23397785

A major problem with option 3 is that it runs the risk of people claiming to do option 3 while actually doing option 2. I think this happens regularly, but it's due to ignorance, not malice. Ultimately I think we need to change scientific standards and the STEM curriculum before option 3 becomes tractable on a large scale, but even then I'm not sure it'll work, because it will always be easier to claim to do option 3 than to actually do it.

I agree with the writer that option 4 is more of a long-term goal. I wish the comments on this article focused more on #4 than on what was actually discussed...


> (Or at least some DOE labs; I can say from personal experience that groups at both Los Alamos and Sandia take this fairly seriously, though I don't think it has fully permeated their culture.)

It has not. In the current funding environment it can't, given that the way research grants and contracts are awarded will always constrain organizational culture.



