Markov Chain Monte Carlo analysis of climate-change variables (nature.com)
80 points by bryanrasmussen on Feb 6, 2022 | 12 comments



The original title, "Sensitivity of non-conditional climatic variables to climate-change deep uncertainty using Markov Chain Monte Carlo simulation", was obviously too long. I had a hard time finding a title that fit, so if you can think of a better one, I can edit it if still within the window.


If you want a tutorial/framework for MCMC analysis, check out the R package “pomp”:

https://kingaa.github.io/pomp/docs.html


How does pomp compare to mc-stan? I thought that was the preferred tool these days. https://mc-stan.org/


Not sure! I'm less familiar with it, and I'm not an expert in the field at all. mc-stan seems a bit more general-purpose.


Pardon my ignorance, but why do you need MCMC to analyze sensitivity? Isn't sensitivity something related to the derivative of the output with respect to the parameter?


Yes, but we don't have a differentiable climate model. People are working on it, but it's not there yet. See https://clima.caltech.edu/ and https://dj4earth.github.io/.
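
For concreteness, "sensitivity as a derivative" here is the local sensitivity of a model output y to a parameter θ_i, which without a differentiable model can only be approximated by finite differences, one full model run per perturbation (notation mine, not from the paper):

    S_i = \frac{\partial y}{\partial \theta_i}
        \approx \frac{y(\theta + h\,e_i) - y(\theta)}{h}

With many parameters and expensive runs, approximating every S_i this way gets impractical fast, which is part of why sampling approaches are used instead.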


When you have many interdependent parameters across multiple models in feedback cycles, the Monte Carlo method lets you sample the space and explore it faster. Especially in large-scale simulations, execution speed limits exhaustively computing your way through the parameter space.

And because there is likely no simple set of gradients to follow, I'd suspect the Markov transition probabilities let you navigate local maxima and minima more effectively. Something like a breadcrumb trail of past paths through the forest.
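
To make the sampling idea concrete, here is a minimal random-walk Metropolis-Hastings sketch in Python. The target density and proposal scale are stand-ins, not anything from the paper; in a real application log_target would score a full climate-model run against observations.

    import numpy as np

    def log_target(theta):
        # Placeholder un-normalized log-density over the parameter vector;
        # a real application would evaluate the model/data misfit here.
        return -0.5 * np.sum(theta ** 2)

    def metropolis_hastings(log_target, theta0, n_steps=10_000, step=0.5, seed=0):
        rng = np.random.default_rng(seed)
        theta = np.asarray(theta0, dtype=float)
        logp = log_target(theta)
        samples = np.empty((n_steps, theta.size))
        for i in range(n_steps):
            # Random-walk proposal: perturb the current point, no gradients needed.
            proposal = theta + step * rng.standard_normal(theta.size)
            logp_new = log_target(proposal)
            # Accept with probability min(1, target(proposal) / target(current)).
            if np.log(rng.uniform()) < logp_new - logp:
                theta, logp = proposal, logp_new
            samples[i] = theta  # the chain itself is the "breadcrumb trail"
        return samples

    samples = metropolis_hastings(log_target, theta0=np.zeros(3))
    print(samples.mean(axis=0), samples.std(axis=0))

The point is that the chain only ever needs target evaluations, never derivatives, and the accept/reject step keeps it wandering toward (and around) high-probability regions.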


Perhaps this has to do with parameter calibration? If you vary parameters until you get the best fit, then use them to forecast forward, you will hugely understate the variation, and very possibly bias the estimate as well.
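
A toy illustration of that understatement, with everything (model, numbers) made up for the example: fit a slope to noisy data, then compare forecasting with the point estimate alone versus propagating the slope's standard error.

    import numpy as np

    rng = np.random.default_rng(1)
    x = np.linspace(0, 10, 20)
    y = 2.0 * x + rng.normal(0.0, 2.0, x.size)  # true slope 2, noisy observations

    # Least-squares slope (line through the origin) and its standard error.
    slope = np.sum(x * y) / np.sum(x * x)
    sigma2 = np.sum((y - slope * x) ** 2) / (x.size - 1)
    se = np.sqrt(sigma2 / np.sum(x * x))

    x_future = 50.0
    plug_in = slope * x_future                        # best-fit forecast: one number
    draws = rng.normal(slope, se, 10_000) * x_future  # forecasts with slope uncertainty

    print(f"plug-in forecast: {plug_in:.1f}")
    print(f"std. dev. from parameter uncertainty alone: {draws.std():.1f}")

Extrapolating multiplies the parameter uncertainty by x_future, so the single best-fit forecast carries no spread at all, while a posterior over the parameters (e.g. from MCMC) propagates it automatically.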


There's no reason you couldn't take the expectation value of a derivative. Linearity of expectation means this will equal the derivative of the expectation value.

I may be missing the point though; I'm only versed in the mathematics.


Actually, because differentiation is a limiting process, this relies on the bounded convergence theorem or the monotone convergence theorem. However, I imagine this would hold for most distributions under consideration.
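
Spelled out (standard notation, not from the thread), the interchange in question is

    \frac{d}{d\theta}\,\mathbb{E}\big[f(X,\theta)\big]
        = \mathbb{E}\!\left[\frac{\partial f}{\partial \theta}(X,\theta)\right],

which dominated convergence justifies whenever |∂f/∂θ(x,θ)| ≤ g(x) for some integrable g, uniformly in θ near the point of interest.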


Indeed, since expectation values make the constant functions integrable, you'd have to do some pretty wild things before things go wrong. In fact, any bounded function (with bounded derivative) is going to be fine.

I also wonder if differentiation doesn't have some nicer properties anyway, being a linear operation as well. Typically you can get quite far by showing things work for a sufficiently broad class of functions and just extending things from there. In particular you could use the Fourier transform, which trivially commutes with taking expectation values.

You can always try something abhorrent like sin(e^(-1/x^2)) to make things go wrong, though.
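
For the Fourier remark, the commuting step is just Fubini (X with density p, absolute integrability assumed; conventions mine):

    \mathbb{E}\big[\hat f(X)\big]
        = \int \hat f(x)\,p(x)\,dx
        = \int\!\!\int f(t)\,e^{-ixt}\,dt\;p(x)\,dx
        = \int f(t)\,\mathbb{E}\big[e^{-iXt}\big]\,dt,

so the expectation passes through the transform and lands on the characteristic function.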


I believe this is because not all derivatives can be computed analytically, and MCMC gives a close approximation. But I could be misunderstanding.



