
Cool visualization.

It's worth keeping in mind that the modeled data lines up with reality because it's supposed to. That's how you calibrate your model, by making sure it fits reality.

The real trick is to see how well your model extrapolates from the data you have out into the future. As in, if you feed it data up to, say, 1990, will it correctly spit out 2015 temperatures that fit the reality of 2015, or will it spit out crazy 2015 predictions like the models that were built in 1990 did. And, the bigger question: How will its predictions for 2040 (given 2015 data) match up to the reality over the next 25 years.
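
A minimal sketch of that kind of out-of-time test, with made-up arrays standing in for real forcings and temperatures (the variable names and the linear model are placeholders, not how actual GCMs work):

    import numpy as np

    # Hypothetical stand-ins: yearly forcing inputs X and observed anomalies y.
    years = np.arange(1900, 2016)
    X = np.random.rand(len(years), 3)     # e.g. CO2, solar, aerosol proxies (made up)
    y = X @ np.array([0.8, 0.1, -0.3]) + 0.05 * np.random.randn(len(years))

    train = years <= 1990                 # calibrate only on data up to 1990...
    coef, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
    pred = X[~train] @ coef               # ...then predict the unseen 1991-2015 years

    print(np.sqrt(np.mean((pred - y[~train]) ** 2)))   # out-of-time RMSE

A model that only looks good in-sample will fall apart on the held-out years; one that captures the real mechanism shouldn't.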

We seem to be getting a lot better at the modeling side. That's a good thing: the first couple of decades of people panicking and fighting each other over whatever scary results came out of the first-generation climate models weren't any fun to watch.




"The real trick is to see how well your model extrapolates from the data you have out into the future."

That is the most common way to show the modeller is not shamelessly overfitting. :-| Another way, though, is less common but not vanishingly uncommon: the model may be so much simpler than the data it fits that overfitting is not a plausible explanation. (Roughly, there are too many bits of entropy in the match to the data to have been packed into the model, no matter how careless or dishonest you might have been about overfitting.)

E.g., quantum mechanics is fundamentally pretty simple --- I can't quantify it exactly, but I think 5 pages of LaTeX output, in a sort of telegraphic elevator-pitch cheat-sheet style, would suffice to explain it to 1903 Einstein or Planck well enough that they could quickly figure out how to do calculations. Indeed, one page might suffice. And there are only a few adjustable parameters (particle/nucleus masses, Planck's constant, and fewer than a dozen others). And it matches sizable tables of spectroscopic data to more than six significant figures. (Though admittedly I dunno whether the non-hydrogen calculations would have been practical in 1903.)

For the usual information-theoretical reasons, overfitting is not a real possibility: even if you don't check QM with spectroscopic measurements on previously unstudied substances, you can be pretty sure that QM is a good model. (Of course you still have to worry about it potentially breaking down in areas you haven't investigated yet, but at least it impressively captures regularities in the area you have investigated.)
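
To put rough numbers on that (my own back-of-the-envelope, not from any paper): there are at most 2^M distinct models describable in M bits, and any fixed model matches D bits of independent data by luck with probability 2^(-D), so by a union bound

    P(some <= M-bit model fits D bits of data by chance) <= 2^M * 2^(-D) = 2^(M - D)

With M ~ a few pages of LaTeX (call it 10^5 bits) and D ~ millions of bits of spectroscopic tables, that bound is astronomically small, which is why the fit can't plausibly be overfitting.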


It's not just a question of how the model extrapolates from the input data itself. The actual input data may be in question as well, because there are always judgments involved in deciding how to measure, what "unreasonable" datapoints will be discarded, etc.

Read, for example, here:

"It is indisputable that a theory that is inconsistent with empirical data is a poor theory. No theory should be accepted merely because of the beauty of its logic or because it leads to conclusions that are ideologically welcome or politically convenient. Yet it is naive in the extreme to suppose that facts – especially the facts of the social sciences – speak for themselves. Not only is it true that sound analysis is unavoidably a judgment-laden mix of rigorous reasoning (“theory”) with careful observation of the facts; it is also true that the facts themselves are in large part the product of theorizing. ..."

http://cafehayek.com/2015/04/theorizing-about-the-facts-ther...


While the general gist of your argument is right, I think there are some non-trivial ways to overfit. There are apparently some 25 constants in the Standard Model that describe the world around us to enormous precision. That is so little information that the trivial 'overfitting by encoding observations directly' will of course fail, but we could still be overfitting by having an excess number of variables: perhaps there's really some mechanism in neutrino physics that explains neutrino oscillation without needing extra constants to describe how it happens. That might in turn boost our predictive precision for neutrino oscillation tremendously, to match the precision of the other, more fundamental variables in the model. But I think you're right that it's so little data that we have some strong information-theoretic guarantees that the model will at least have predictive power matching the precision of previous measurements.


Well - that's true apart from coincidence. You can have a very simple theory which says "x is directly caused by y", and there is a lot of good data, and a great fit. But it's just a coincidence, and it breaks down immediately.

Occam's razor is a rule of thumb and an aesthetic boon, but nothing more.

The real test is that you have a theory that is meaningful and has explanatory power. If it grants insight into the mechanisms that are driving the relationships or generating the data, and these make sense - you are pretty golden.

Another one is that the theory makes unexpected predictions that you can then test. This is a real winner, and why complex physics is so well regarded.


I think the information theoretic approach to modeling concerns actually implies such "simpler is better" approaches as Occam's Razor. At least that's my take on [http://arxiv.org/abs/cond-mat/9601030], which derives a quantitative form of it.


I haven't read that paper, and the abstract makes my head spin! I'll have a look later, and try and figure out the argument. I agree with you that things like the I-measure are based on the idea that simpler is good, and it works well in practice - both in Machine Learning and in the real world - which is why humans tend to prefer it. But (the paper you cite aside) I don't know of a deep reason why simple is preferred by nature.

Also, there is a deep cognitive bias here: perhaps we lack the machinery to understand the world as it really is!


> Occam's razor is a rule of thumb and an aesthetic boon, but nothing more.

Occam's razor is a bit more than that. It isn't just that given a theory X and a theory Y = X + ε, both of which fit the facts, you should prefer X because it's "cleaner" or more aesthetically pleasing or whatever. You should prefer X because you can prove it is more likely to be true.
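
Loosely, the argument (my paraphrase of the Solomonoff setup, glossing over constants): the universal prior weights each hypothesis by the length of its shortest description,

    prior(h) ∝ 2^(-K(h))    where K(h) = length of the shortest program producing h

Describing Y = X + ε takes roughly a description of X plus a description of ε, so K(Y) > K(X) and prior(X) > prior(Y). If both fit the observed facts equally well, the likelihoods cancel and the posterior still favors X.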

https://en.wikipedia.org/wiki/Solomonoff%27s_theory_of_induc...


Do you happen to have these one to five pages of QM equations somewhere as a reference? I would be very interested in reading that.


No, it was a thought experiment I made up, not an exercise I've ever seen performed: how abbreviated a description of quantum mechanics could I get away with and still convey the idea to on-the-eve-of-QM scientists?

The QM equations are naturally very short; the stuff I would worry about expressing concisely is the concepts: what probability amplitude is, how it connects to prior-to-QM notions of probability, the interpretation of what it means to make an ideal measurement, things like that. I don't know of any bright, concise formulation of that material, and I'm not sure how I'd do it. I am fairly sure, though, that 5 pages could get the job done well enough to connect to spectroscopic observations.

Note also in the original story it was intended to be given to Einstein and Planck, deeply knowledgeable in classical physics, so it'd be natural to use analogies that would be more meaningful to them than to the typical CS/EE-oriented HN reader. For example, I'd probably try to motivate the probability amplitude by detailed mathematical analogy to the wave amplitudes described by the classical wave PDEs that E. and P. knew backwards and forwards, and I don't think a concise version written that way would work as well for a typical member of the HN audience.


Scott Aaronson has some good motivation for "QM falls out naturally if you try to use a 2-norm for your probabilities instead of a 1-norm." See http://www.scottaaronson.com/democritus/lec9.html


I believe he is referring to the 'postulates of quantum mechanics'; you can find several formulations via a quick Google search.

Dirac, 1929: "The fundamental laws necessary for the mathematical treatment of a large part of physics, and the whole of chemistry are thus completely known, and the difficulty lies only in the fact that application of these laws leads to equations that are too complex to be solved."


I think you can do it, but you'd probably want to start with density matrices, or use the Heisenberg picture to keep your wavefunction super-simple. If we're talking to geniuses then maybe we can include a one-off statement - 'if det ρ = 0, so that ρ = ψψ† for some "column vector" ψ, then the squared magnitudes of ψ's components are the probabilities to be in each component's corresponding state' - to get the gist of it.
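
Concretely, for a toy two-level system (my own numerical sketch of that statement, nothing more):

    import numpy as np

    psi = np.array([0.6, 0.8j])               # normalized state "column vector"
    rho = np.outer(psi, psi.conj())           # pure-state density matrix, rho = psi psi^dagger

    print(np.isclose(np.linalg.det(rho), 0))  # True: 2x2 and singular, hence rank one / pure
    print(np.abs(psi) ** 2)                   # [0.36 0.64] -- the outcome probabilities
    print(np.real(np.diag(rho)))              # same probabilities, read off rho's diagonal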


This sounds like a fantastic exercise to assign to physics majors in some sort of capstone class. What a neat idea. I may have to try this.


Don't know what you're referring to with crazy predictions; the 1990 models actually did pretty well: http://www.skepticalscience.com/lessons-from-past-climate-pr...


They seemed crazy at the time because most people didn't expect the level of warming that we are now seeing.


Cool visualization.

I agree, but I have to say their consideration of what other things might be causing warming is not very thorough. For just one example, there are at least two major effects of burning:

1. Release of chemicals into the atmosphere (ex: carbon dioxide)

2. Directly heating the atmosphere

There are literally billions of air conditioners, heaters, cars, factories, etc that all generate heat. The effect of these billions of heaters throughout the world definitely increases global temperatures. After all, this effect is a reason why cities are warmer than their surrounding rural areas (1). This is relevant because direct heating should be temporary while greenhouse gas increases are cumulative.

Honest question - has anyone calculated the effect of the direct heating on the atmosphere from the billions of heaters we use vs greenhouse gas increase?

----

(1) http://www.smithsonianmag.com/science-nature/city-hotter-cou...


The primary yearly energy consumption is about 155,000 TWh and the volume of the ocean is around 1.33e9 km^3, so, back of the envelope, the heat dissipated by energy consumption yearly is enough to raise the temperature of the ocean by about 0.0001 C. Doesn't seem very substantial.
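
Checking that envelope explicitly (standard constants, my arithmetic):

    E = 155_000e12 * 3600    # 155,000 TWh/yr of primary energy, converted to joules
    m = 1.33e9 * 1e9 * 1e3   # ocean volume in km^3 -> m^3 -> kg of water
    c = 4186                 # specific heat of water, J/(kg*K)

    print(E / (m * c))       # ~1e-4 K per year

which agrees with the ~0.0001 C figure.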


> this effect is the reason why cities are warmer than their surrounding rural areas.

Anthropogenically produced heat is certainly a contributor, but as the article itself says, I believe the majority of the difference is more accurately attributed to the large amount of concrete in cities, which takes much longer to dissipate heat.


They did the math already. The amount of directly released human heat is a drop in the bucket.


I would love to see it if you have a source.

EDIT: Thank you.



This doesn't add much to what earlier respondents have said, but it was five minutes of fun to do.

The mass of Earth's atmosphere is about 5e18 kg. The specific heat of dry air is about 1 kJ/kg-K. Total human energy consumption in 1990 (just to pick a year) was about 102,000 TWh, or roughly 3.7e20 J at 3.6e15 J per TWh. Wikipedia for most of the numbers.

Assuming all the energy consumed was dissipated as heat and only heated the atmosphere, I get the one-year temperature increase due to human energy use during 1990 as about 0.073 K.

Less than I would have guessed, and probably wrong by at least a couple orders of magnitude due to simplifying away 99% of what's really happening.
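
For anyone who wants to rerun it, in the same back-of-the-envelope style as the ocean calculation upthread:

    E = 102_000e12 * 3600    # 102,000 TWh in joules, ~3.7e20 J
    m = 5e18                 # mass of the atmosphere, kg
    c = 1000                 # specific heat of dry air, J/(kg*K)

    print(E / (m * c))       # ~0.073 K, if every joule stayed in the air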


What about the heating caused by ~7.5 billion humans? Collectively, we consume ~13 trillion kilocalories, or (IIRC) enough heat to vaporize a 4-cubic-mile block of ice, every day.


It's not my field, but I'd be very surprised if models were not also calibrated by extrapolating earlier known years and comparing to later known years.

You can really only judge models on data that was not yet available when they were created.


The shape of the curve is driven by the independent variables (CO2 concentration, volcanic activity, etc.). The magnitude needs to be adjusted so that it doesn't produce inconsistent results when extrapolated backwards in time, which is the problem with explaining the recent spike in temperature as anything other than CO2 concentration. Most of the other variables, like solar flux, are relatively steady, so if you increase their climate forcing effect you get a really bad fit in the 1900s, which is non-physical.

There's also additional data like satellite measurements of the broadening of the absorption lines of CO2 and H2O in the IR blackbody spectra that the Earth radiates and the measurement of the shortfall of outgoing radiation in the radiation budget which are consistent with GHG effects and independently confirm these models.


Is there a scorecard for which models have performed well?


The IPCC, probably the best source overall for climate info, has in its reports visualizations showing predictions, and I think historical results, of multiple climate models. (Sorry, but I don't have time now to find links and page numbers. Try the Summaries for Policymakers.)

http://www.ipcc.ch/

Just reading their summaries, which are meticulously prepared and reviewed by hundreds of scientists, will make you more informed than 99.9% of the population, and more than most reading this thread.


None of the models have performed very well. You probably won't find a scorecard because it's embarrassing.


Depends. Overall, predictions from the mid-1980s were high, and a lot of research has gone into why.

A significant part of the difference disappears if you adjust for CO2 produced vs. predicted. Granted, you can argue that the older models needed to account for both, but what we want to validate is predictions of impacts, not predictions of fossil fuel use.

It's extremely disingenuous to show a single line as the 'prediction'. There have been plenty of projections that include possible reductions in temperature, as well as a wide range of types of measurements.

PS: You can also do a lot of cherry picking on both sides: http://phys.org/news/2012-04-climate-eerily-accurate.html


Great points. To add a little more detail: predicting the results of greenhouse gases in the atmosphere is science. Predicting the amount of gases in the atmosphere requires predicting human economic activity in detail (how much, in what form, etc.), which is impossible, especially on longer timescales (imagine how many investors would love to know how to do that!).

The predictions I've seen, at least in the IPCC reports,[1] show not lines but confidence intervals that widen over time.

[1] https://news.ycombinator.com/item?id=9772353


They've not performed well based on criteria chosen by global warming denialists.

For example, you'd be hard pressed to find a climate scientist making any solid predictions about annual global temperature averages. You will, however, see predictions about decadal averages, and those have been borne out.


Borne out? Compared to what? The RSS and UAH 6.0 lower troposphere records? What other measurements do we have that don't have uncertainty bands as large as the measured values themselves?

And the zero trend from May 2015 extends back to 1996 for RSS and UAH 6.0.


That "zero trend" is only there if you use annual averages.

Again: use decadal moving averages, and an entirely different picture comes up.
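
To make that concrete with synthetic numbers (not real temperature data; just illustrating why the averaging matters):

    import numpy as np

    rng = np.random.default_rng(0)
    annual = 0.015 * np.arange(40) + 0.15 * rng.standard_normal(40)  # trend + noise

    decadal = np.convolve(annual, np.ones(10) / 10, mode="valid")    # 10-year moving mean
    print(annual[:5], annual[-5:])    # single years bounce around, some even dip
    print(decadal[:5], decadal[-5:])  # the decadal means climb steadily

Year-to-year noise of ~0.15 K easily hides a 0.015 K/yr trend over short windows; averaging ten years shrinks the noise by roughly sqrt(10) and the trend pops out.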


How do you use decadal moving averages from satellite data that has only existed since 1979? You'll get roughly the same trend as the full data set. Even so, the last two decades would still be flat, or very nearly so.


Hint: decades don't have to start with year 10*n+0.


We already know the full data set has a 1.2 K/century trend (this is the annual trend most commonly used to represent the data). Decadal moving averages aren't going to shed more light than that. We also know that if you just grab the last 19 years and 6 months, or any smaller subset of that, you'll see zero to negative trends.


Which is why you should not follow along the denialist gambit and grab just those two data points.


But they're the best data we have. They have the widest coverage and the least uncertainty. At nighttime the SST satellites can have over 10 C of error due to cloud cover. I've no doubt the earth is warming. My doubt is whether measurements with wide confidence intervals should be used over those with narrow confidence intervals because they tell a more compelling story.


What good is a model if it does not make solid predictions?


Oh, those models do make predictions. Just not the ones the deniers use for straw man arguments.


Citation needed.


I think http://judithcurry.com/2013/10/30/implications-for-climate-m... is a reasonably representative informed article on this.


Here's a good one:

http://www.drroyspencer.com/2013/04/global-warming-slowdown-...

44 climate models, all fighting to out-panic one another, not a single one guessing low enough to predict the actual values for 2012 (when it seems the dataset in question ended).

... and a seemingly more reputable one showing roughly the same thing:

http://phys.org/news/2015-01-peer-reviewed-pocket-calculator...


I wouldn't call a paper whose lead author is a well-established denier with no scientific training (https://en.wikipedia.org/wiki/Christopher_Monckton,_3rd_Visc...), and which is co-authored by a known practitioner of large-scale scientific fraud for pay (http://rationalwiki.org/wiki/Willie_Soon) particularly reputable.


Ad hominem. Who cares who wrote it; what does it say?


Ad hominem fallacy fallacy: http://laurencetennant.com/bonds/adhominem.html

Pointing out that someone is not trustworthy when considering whether or not to trust their conclusions is not ad-hom.


Attacking a person's 'trustworthiness' instead of dealing with their arguments and evidence is pretty much the dictionary definition of the ad hominem diversion. It doesn't interest me to learn that he kicks cats or dresses in lingerie and calls himself Marjorie at the weekends. If you believe that he is wrong, then show where and how he is in error.


You inspired me to write a thing which will save me a lot of time in the future. Thank you.

http://www.robsheldon.com/tactics-of-crackpot-debate/#4


You're right not to be interested in whether he kicks cats when you're thinking about whether he's honest.

But when thinking about whether he's honest, being given examples of previous dishonesty is relevant.


Just like the infamous Smathers campaign speech?

http://msgboard.snopes.com/cgi-bin/ultimatebb.cgi?ubb=get_to...

"Are you aware that Claude Pepper is known all over Washington as a shameless extrovert [pervert]? Not only that, but this man is reliably reported to practice nepotism [necrophilia] with his sister-in-law and he has a sister who was once a thespian [lesbian] in wicked New York. Worst of all, it is an established fact that Mr. Pepper, before his marriage, habitually practiced celibacy [???]."


No, because that was actually irrelevant. In this context, Soon's record within the scope of climate research is what's being scrutinized, not his personal life.

If Soon's opponents were attacking his love of Dune or his tendency to eat falafel, there might be an analogue here.


Irrelevant. The technique you used was the same as Smathers, and your intent was the same - to damage someone's reputation by insinuations and smears. It is low behavior.


Smathers' accusations related to issues that had no bearing on Peppers' merit as a political candidate or his ability to carry out his official duties. My 'insinuations' (actually, again, statements of fact) are related to Soon's behaviour within the context of climate science. If you cannot grasp this, you are not qualified to engage in debate. If you do not wish to for whatever reason, it makes it pretty clear that you are not interested in good faith discussion of this issue and are not worth anyone's time in that regard.


A damaging and false insinuation is a damaging and false insinuation, whatever ground it purports to cover. Smathers chose smears that would do the maximum damage to Pepper as a politician, you did the same for Soon as a scientist.


Actually, it is.

You can't look at someone's financial interest to know whether what they said is true or not. Similarly for any other attribute about them that you don't like.

There are many great thinkers who were gay. We don't invalidate their work because of that.

At best, you need to keep that in mind and take what they said with a grain of salt. Funding gives you a clue about which areas to be more critical about, but just because they have an interest one way or the other doesn't invalidate what they said.

If someone has been found to be a nutjob, you may casually dismiss what they said as a time saving device or because there is low probability what they say has any value to you. But even a nutjob is sometimes right.


Really? You think it doesn't matter that the primary author on a paper about climate science doesn't even have an undergraduate-level education in the subject? That the second one credited has a history of accepting large sums of money to write papers endorsing spurious claims DIRECTLY RELATING to climate change?


A lot of the IPCC lead authors are paid by NGOs (like Greenpeace) with a vested interest in climate alarmism. Do we discount their work too?

Climate science covers a lot of different areas, everything from economics, through hard chemistry and fluid dynamics, to pure statistics. No one person can be an expert on all of this, and no one qualification will make anyone competent in all of them. Experts from related disciplines are perfectly qualified to speak on "their" areas of climate science.


Greenpeace is a non-profit, so they have much less to gain from 'climate alarmism' than the fossil fuel industry does from climate denial.

Monckton studied classics and received a post-grad diploma in journalism. That's pretty far removed from being a related discipline.


Are you going to tell me that every person who has ever written a paper on computer science needs to have a degree in it? While I won't deny this guy's qualifications might be questionable, making a blanket statement that someone must be specifically educated in a subject to write a good paper on it is specious.


How often does it happen that a layman manages to get published in a well-regarded journal? Out of all the papers that laypeople publish anywhere, how many survive scrutiny from experts in the paper's problem domain? And out of those, how many that actively seek to overturn a paradigm succeed?

Based on this metric alone, it is highly unlikely that Monckton is qualified to discuss climate change, and as it happens, his work tends to appear in fairly obscure journals whose standards of review are questionable, and when it passes the desks of career climatologists, the result is generally unfavourable to him.


There is a difference between "layman", "well-known expert in their field", "so-and-so with a degree in $field", and "well-known expert in their field with a masters in $field".

If you read my reply, I don't question the guy's qualifications; I was objecting to the blanket statement of "you must have a degree in $field to be an expert" - many papers in technology are written by people without degrees in that field.


I did read your reply; I'm saying that in the aggregate, a credible paper is unlikely to be written by someone without formal schooling in the relevant field.

Further, technology is applied science - it is not unlikely that one can become an expert through informal and professional practice. Your previous comment was about computer science, which is not necessarily the same thing, and which is closer to mathematics than anything else. Climatology is concerned primarily with physics and chemistry, but also geology and, in some cases, paleontology. Most of these fields share little in common with pure maths or engineering. The comparison, then, is not totally valid.

The basic training you require to be a competent scientist is hard to come by outside of academia. The actual work of science tends to be done in a laboratory. It's highly unlikely, then, that a hobbyist - whatever that may look like in this context - could be on par with someone who has put in the years (often decades) of work in academia.


Neither you nor the person you are responding to probably has the requisite qualifications to actually tell …

Judging something like this without relying on outside signals seems rather impossible and pointless if you are not, you know, an actual expert. No matter how much you want to believe you can be one about everything …


> co-authored by a known practitioner of large-scale scientific fraud for pay (http://rationalwiki.org/wiki/Willie_Soon) particularly reputable.

Flagged for libel. It is one thing to argue a position you believe, it is quite another to smear another's character.


It wasn't libel the last time you brought it up, and it isn't libel this time. Soon failed to disclose non-trivial amounts of funding that he received from parties who have a vested interest in deriding climate science. Given how often his work has failed to pass muster when scrutinized by climate scientists and skeptics, it is hard to fathom how any of this can amount to simple incompetence.


The article you linked to is almost comical in its petty malevolence, well beyond the point of self-satire. This kind of character assassination, however reprehensible, is ultimately irrelevant. If you believe Dr. Wei Hock Soon is wrong, then show where and how he is mistaken.


Climate scientists have been doing that for almost 25 years at this point, and Soon's response has pretty much been to complain that he's being bullied and that science is being politicised. I find that to be actually comical, almost as much as the presumption that an intelligent and intellectually honest person could do this for as long as Soon has. And that his association with political and industrial think tanks is a non-sequitur in this regard.


[flagged]


FWIW, I never took you seriously, because I cannot imagine a serious adult flagging someone for libel for stating an unpleasant fact.


I flagged you for libel because you lied.


I'm not going to flag you for libel here, because I'm sure you believe this, and that is your cross to bear.


So sue me. It is demonstrable that you published falsehoods.


The definition of demonstrable is not "that which I really, truly, believe from the bottom of my heart".


Please stop, both of you.


The first graph on that second link is a bit confusing and seems pretty disingenuous. It has the "observations" region stretching to 2050. The rest of the article seems much more factual and interesting, but why start with something so misleading if your supposed goal is to debunk misleading projections?


Do read this criticism of Roy Spencer's methods, which to me do not appear credible: http://blog.hotwhopper.com/2014/02/roy-spencers-latest-decei... http://blog.hotwhopper.com/2014/05/roy-spencer-grows-even-we...


Not the most credible looking site, but the graphic seems well cited: http://www.drroyspencer.com/2013/04/global-warming-slowdown-...


Spencer is pretty out there. He's gone on record to say that warming proponents are advancing an argument that will lead to more deaths than the NSDAP's policies did, and is a signatory to the Evangelical Declaration on Climate Change, which suggests that this is largely a matter of faith for him...


He does also maintain one of the satellite records, which does show global warming over the period 1960-2000 (not so much the last 10 years because of the global warming hiatus).


And other records do not show such a hiatus (http://www.skepticalscience.com/global-warming-not-slowing-i...).


Did you even read the article? Look at the University of York dataset, which clearly shows the 1960-2000 warming followed by the 2000-2010 hiatus. Note the York dataset is strictly observationally independent of the UAH dataset.


Evidently we were reading different articles, because the author emphatically doesn't concur with your interpretation of the York dataset.


Did any models predict the "global warming hiatus" ?


Hans von Storch, professor at the Meteorological Institute of the University of Hamburg, discussed this issue in a recent interview with Der Spiegel. He remarked that less than 2% of model runs reproduced the 'pause'.

http://www.spiegel.de/international/world/interview-hans-von...

SPIEGEL: Just since the turn of the millennium, humanity has emitted another 400 billion metric tons of CO2 into the atmosphere, yet temperatures haven't risen in nearly 15 years. What can explain this?

Storch: So far, no one has been able to provide a compelling answer to why climate change seems to be taking a break. We're facing a puzzle. Recent CO2 emissions have actually risen even more steeply than we feared. As a result, according to most climate models, we should have seen temperatures rise by around 0.25 degrees Celsius (0.45 degrees Fahrenheit) over the past 10 years. That hasn't happened. In fact, the increase over the last 15 years was just 0.06 degrees Celsius (0.11 degrees Fahrenheit) -- a value very close to zero. This is a serious scientific problem that the Intergovernmental Panel on Climate Change (IPCC) will have to confront when it presents its next Assessment Report late next year.

SPIEGEL: Do the computer models with which physicists simulate the future climate ever show the sort of long standstill in temperature change that we're observing right now?

Storch: Yes, but only extremely rarely. At my institute, we analyzed how often such a 15-year stagnation in global warming occurred in the simulations. The answer was: in under 2 percent of all the times we ran the simulation. In other words, over 98 percent of forecasts show CO2 emissions as high as we have had in recent years leading to more of a temperature increase.


Confused: this is the hottest year on record worldwide. In what way is this 'taking a break'?


Simple: this is exactly what you would expect on a high plateau. Think about it in terms of climbing a mountain with a fairly flat top. For a long time you're moving continuously up-slope, then when you get to the plateau you wander around randomly and frequently find outcroppings that are higher than anything you've encountered before. That doesn't mean you're still climbing, and if we were still climbing at the rate seen from 1980-2000 the "global mean temperature" (which is a thermodynamically meaningless arithmetic average) would be even higher than what we see today.
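
The "records on a plateau" point is easy to reproduce with a toy simulation (pure noise, no climate data involved):

    import numpy as np

    rng = np.random.default_rng(1)
    flat = 1.0 + 0.1 * rng.standard_normal(200)    # a dead-flat series plus noise

    records = flat == np.maximum.accumulate(flat)  # marks each new all-time high
    print(records.sum())                           # several "hottest ever" points
    print(np.polyfit(np.arange(200), flat, 1)[0])  # yet the fitted slope is ~0

Record-breaking values keep appearing even with zero underlying trend, so "hottest year on record" by itself can't distinguish climbing from plateauing.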

People who continually beat on extrema (like Denialists who claim that cold weather on the East Coast last winter is somehow proof that AGW isn't happening) are adding noise to the argument, not signal. The physically meaningful number is the heat content of the Earth/ocean system, and there's quite a bit of evidence it is rising, and that a significant portion of that rise is due to human activity.


And the ten hottest years have been since 1997?


This makes me think of when financial journalists/broadcasters constantly report that the SPX or the DJIA or the FTSE or whatever is hitting 'all-time highs'; it's a really useless piece of information. Investors want to know how much it went up by on the day (and what the trend of the last few days/months has been); the fact that it poked through to a new high level is not important.


Except - temperature! It does actually matter if it's 100 or 200 degrees.


Apparently, the past models did not, because they did not model the long term interaction of the oceans with the atmosphere, and the current "hiatus" is mostly about the atmosphere temperatures, while most of the warming is currently happening in the oceans.

The more sophisticated current models do match the recent observations if you feed them the past data:

http://www.theguardian.com/environment/2014/sep/09/research-...


RSS troposphere's hiatus goes back to December 1996 as of May 2015.


Probably a long list of ExxonMobil shill sites.



The grandfather from 1981 seems to hold up pretty well. Simple linear models, very readable paper.

http://www.realclimate.org/index.php/archives/2012/04/evalua...


Yes there is! In fact, this very visualization which you are commenting on is based upon the data produced by NASA Goddard from their climate model by running a historical data prediction, as their contribution to precisely such a comparison/consensus building study sponsored by the IPCC, called the "Coupled Model Intercomparison Project Phase Five". There are links to the sources for all of this in the original article. This information, which I'm sharing with you now, is in the first three paragraphs of endnotes on the actual article.

So.... please don't believe other posters who might come along and try to get you to believe that historical comparisons haven't been performed, or the results haven't been published, or that the IPCC ignored them, or whatever else they might choose to argue today.


Even if there is a model that performs well, how do you account for survivorship bias?

And how accurate are models without a clear understanding of the physical reasons behind them?

Seems like we should be concentrating on understanding the physical relationships and less on trying to come up with abstract models. It kind of feels like having a bunch of astrologists and just picking the one that seems to perform best.

Unless I understand the why, I have a hard time accepting what anyone says.

Anyone can fit a model to the data.


Here are some images that you could review.

https://www.google.com/search?q=1990s+climate+models+versus+...


Exactly. When modeling most data you would hold back a validation set, but that doesn't really work as well in this situation: the only way to hold back a validation set is to wait for time to pass.
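
A common compromise is walk-forward (expanding-window) validation, where each year is scored by a model trained only on earlier years. A rough sketch with placeholder data:

    import numpy as np

    years = np.arange(1950, 2016)
    X = np.random.rand(len(years), 2)         # placeholder predictors
    y = X @ np.array([0.7, -0.2]) + 0.05 * np.random.randn(len(years))

    errs = []
    for cut in range(30, len(years)):         # grow the training window year by year
        coef, *_ = np.linalg.lstsq(X[:cut], y[:cut], rcond=None)
        errs.append(y[cut] - X[cut] @ coef)   # score the next, still-unseen year
    print(np.sqrt(np.mean(np.square(errs))))  # walk-forward RMSE

It doesn't manufacture new information, but at least every prediction is genuinely out-of-sample at the time it's made.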


The thing is, it's not just the temperature measurements fitting the greenhouse gases. It's also all the biological and physical evidence: bird migrations, plant species ranges, glacier and ice cap shrinkage and growth, permafrost melting, polar vortices occurring, etc., etc., etc. Screw the models: look at the reality.


There's a lovely book called The Limits to Growth, published in 1972; over the years the authors have updated their book and their models (there are more than 20). It turned out that the business-as-usual model extrapolated very well from the '70s to the '00s. So even then, the modelling was fairly good.


Yip, great book, good warnings, which we have done very little about. Another book to recommend is "This Changes Everything"... it sets out a very reasonable argument that capitalism as it stands is basically incompatible with doing much to prevent climate change. Worth a read.


Herein lies the rub.

Corrupt politicians will find that invoking environmental concerns and climate change gives them a convenient justification to say "we need more control and the common man needs fewer freedoms". This is why it's a political issue.

Don't expect masses of people to gobble up the idea that climate change is going to ruin the planet when the motivating factor for a good portion of the people selling the idea is that they can seize more control.

It's no different than terrorism and things like the Patriot Act. Terrorism is a horrible problem and no one wants armed rebels chopping people's heads off in the streets of our cities. But when politicians start their backroom meetings and connive a way to start chipping away at our freedoms and our privacy (I'm being redundant), you start finding terrorism skeptics.

We have to find the proper balance between the rights of the individual and the rights of the community. But all the while the entity that sits between the community and the individual, government, is taking more and more control. And they seem to let no opportunity to do so pass.

Climate change is one of their new, favorite vehicles.


Probably unintuitive UX. I kept scrolling, wanting to read something, but it kept changing the graph, and then suddenly text appeared.


> if you feed it data up to, say, 1990, will it correctly spit out 2015 temperatures that fit the reality of 2015, or will it spit out crazy 2015 predictions like the models that were built in 1990 did

Yeah, so they pick models until they find one that fits both 1990 and 2015? That would be using the test data to train the model - like the Baidu approach.


I have absolutely no knowledge about this field, but from what I understand, people who study the Sun wouldn't agree as much with the numbers about the Sun's temperature.

I will see if I can find the numbers.


Really, I think from what I've read there was some talk about sunspots, which actually lower the Sun's temperature or something. I'm really not convinced that they are measuring the correct value there at all...


This visualization is a multivariate linear regression with time-trending variables... lol, the entire thing is garbage. I could get a better R^2 than the 7 or so variables they used if I instead used variables like the number of gay marriages in the world, murders, abortions, etc. to 'explain' climate change. I don't recommend this; I'm just saying trended data can "say" anything.


Um, no it's not. It's a physical model. Read the PDF they link to.



