What the Euro model ‘win’ over the American model means for weather forecasting (washingtonpost.com)
53 points by cryptoz on Oct 8, 2015 | 26 comments



The Euro model win mainly demonstrates the effect of budget cuts on results. Meteorological computer modeling is almost entirely dependent on supercomputer time, and the quantity and quality of supercomputing resources are a direct function of the money available. Meteorologists can discover better equations for modeling the atmosphere, but without the computer resources to run those models, the equations are useless.

Weather models are an O(n^4) problem (you solve in x, y, z, and time), so the computing resources needed to model the globe are vast. Right now, the GFS (Global Forecast System) resolution is 18 miles. That means it has to treat the globe as a grid of 18 mile cubes. 18 miles is big; it allows for a lot of smaller-scale weather features to slip through the cracks.
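
To make the O(n^4) claim concrete, here's a quick illustrative sketch (Python; this is only about how cost scales, not any real model's actual cost):

    # n^4 scaling: refining the grid by a factor f multiplies the work
    # in x, y, and z by f each, and (for numerical stability) the number
    # of time steps by roughly f as well.
    def relative_cost(f):
        return f ** 4

    print(relative_cost(2))   # 18-mile -> 9-mile grid: ~16x the compute
    print(relative_cost(4))   # 18-mile -> 4.5-mile grid: ~256x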

Ultimately: less money -> less computer time -> worse model resolution -> worse model output -> worse forecasts. Meteorology is one of those rare fields where throwing money at the problem really does fix it.

https://www.ncdc.noaa.gov/data-access/model-data/model-datas...


There is still a big gap in the live atmospheric observations being fed into these models. Even if you throw money at the computing, you will not keep getting better results, because the initial state of the model stays the same. Throwing money at the model runs and the supercomputers will stop providing positive returns at some point (and I think the Euro is probably near that point).

We need a dramatic increase in the number of observing stations as well as a funding increase for supercomputer time. I'm personally very interested in what we can do with smartphones right now; there are billions of internet-connected barometers that the Euro and the GFS do not use. Projects like PressureNet[1] and Sunshine[2] are making huge strides there, and I think the future is very promising.

[1]: Android: https://play.google.com/store/apps/details?id=ca.cumulonimbu...

[2]: iPhone: https://thesunshine.co


You're right, but throwing money at more observation stations, satellites, and balloons would help too; the general "throw money at it" solution still applies.


> That means it has to treat the globe as a grid of 18 mile cubes

Although this is made somewhat easier by handling the vertical component in pressure "levels", which simplifies the equations to be solved. That shouldn't lead anyone to underestimate the scale of the problem, though: IIRC a model with a surface resolution of 100 km means solving for something of the order of 30 million variables at each time step.

Full-scale climate models can get even harder, as you need to couple an ocean circulation model to an atmospheric circulation model. Ocean circulation is less important for a weather model, since its time scales run up to the order of 1,000 years.
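
To sanity-check that 30-million figure, here's a back-of-the-envelope estimate (illustrative Python; the 60 levels and 8 variables per cell are assumptions for illustration, not any real model's configuration):

    # Back-of-the-envelope: variables per time step at 100 km resolution.
    EARTH_SURFACE_KM2 = 510e6        # ~510 million km^2
    CELL_AREA_KM2 = 100 * 100        # one 100 km x 100 km grid column
    LEVELS = 60                      # assumed vertical (pressure) levels
    VARS_PER_CELL = 8                # e.g. u, v, w, T, p, humidity, ...

    columns = EARTH_SURFACE_KM2 / CELL_AREA_KM2       # ~51,000 columns
    print(f"{columns * LEVELS * VARS_PER_CELL:.1e}")  # ~2.4e7 variables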


It's not just inadequate computing power. Cliff Mass has a good run-down on his blog: http://cliffmass.blogspot.com/2012/03/us-fallen-behind-in-nu...

The number two reason on Cliff's list is "The U.S. has used inferior data assimilation", but that is about to change. The Ensemble Kalman Filter goes live next week: http://www.nws.noaa.gov/os/notification/tin15-43gefs.htm

It's unlikely this will cause an upset (the US is far behind), but it should at least narrow the gap.
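
For anyone wondering what data assimilation does: it blends the model's previous forecast with incoming observations to produce the starting state for the next run. Here's a toy numpy sketch of one ensemble Kalman filter analysis step (the textbook stochastic, perturbed-observations variant; a minimal illustration, not NOAA's implementation, and the sizes in the usage lines are made up):

    import numpy as np

    def enkf_update(X, y, H, R, rng):
        """One stochastic EnKF analysis step.

        X: (n_state, n_ens) forecast ensemble
        y: (n_obs,) observations
        H: (n_obs, n_state) linear observation operator
        R: (n_obs, n_obs) observation-error covariance
        """
        n_ens = X.shape[1]
        A = X - X.mean(axis=1, keepdims=True)    # ensemble anomalies
        HA = H @ A                               # anomalies seen through H
        PHt = A @ HA.T / (n_ens - 1)             # sample estimate of P H^T
        HPHt = HA @ HA.T / (n_ens - 1)           # sample estimate of H P H^T
        K = PHt @ np.linalg.inv(HPHt + R)        # Kalman gain
        # Perturb the observations so the analysis ensemble keeps the
        # right spread (the "stochastic" part of the method):
        Y = y[:, None] + rng.multivariate_normal(np.zeros(y.size), R, size=n_ens).T
        return X + K @ (Y - H @ X)               # analysis ensemble

    rng = np.random.default_rng(0)
    X = rng.normal(size=(40, 20))                # 40 state vars, 20 members
    H = np.eye(5, 40)                            # observe the first 5 vars
    R = 0.1 * np.eye(5)
    y = rng.normal(size=5)
    Xa = enkf_update(X, y, H, R, rng)

The appeal of ensemble methods is that the forecast-error covariance is estimated from the ensemble itself rather than prescribed in advance.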


Very insightful comment. I have long been interested in what actually goes into generating better forecasts, and you explained the important parts. Thanks!


If you have access to Nature, you might like this article [1]. Even if you don't, check out Fig. 1 to see how forecasting has been improving.

[1] "The quiet revolution of numerical weather prediction", http://www.nature.com/nature/journal/v525/n7567/full/nature1...



Thanks for both of these (and the sub-comment). They were great reads; they kept me up last night.


Why give money to one solution before exploring other options? Decentralization, for example, seems like a more natural way of distributing the meteorological forecasting workload across the planet, by colocating sensors and computation on clusters of devices in close proximity. Decentralized computing is a natural fit for weather problems.


The mathematics that needs to be solved to produce weather forecasts isn't embarrassingly parallel in the way something like ray tracing is, or in the way volunteer-computing projects like Folding@Home take advantage of. The mathematics attempts to model the continually interacting, global state of the weather across the planet in order to determine what it will be like at various places on the planet.

These models rely heavily on being able to "share" data between parallel operations: if core 1 is computing one grid cell and core 2 is computing a neighbouring grid cell, core 2 needs to know things about the cell core 1 is working on. These models run on supercomputers to make that parallel processing efficient, in an environment where each node can access the memory of surrounding nodes; promptly accessing that memory is necessary for the models to run "faster than real time".
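
To illustrate (a single-process numpy toy, not a real dynamical core): one stencil update in which every cell needs its four neighbours. In a distributed run, the cells along each node's boundary would have to arrive over the network on every step, which is exactly what supercomputer interconnects are built for.

    import numpy as np

    def diffusion_step(t, alpha=0.1):
        # Each cell's update needs all four neighbours; on a node
        # boundary those neighbours live in another node's memory.
        up    = np.roll(t, -1, axis=0)
        down  = np.roll(t,  1, axis=0)
        left  = np.roll(t, -1, axis=1)
        right = np.roll(t,  1, axis=1)
        return t + alpha * (up + down + left + right - 4 * t)

    temps = np.random.default_rng(0).random((100, 100))
    for _ in range(10):   # every step needs fresh neighbour data first
        temps = diffusion_step(temps)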


Distributed computing "across the planet" only works with low-bandwidth/high-latency tasks. Take a look at the various distributed internet projects (the Great Internet Mersenne Prime Search, Folding@Home, etc.): they process for a long time and send back a small amount of data, typically to a single server.

People use supercomputers for problems, like weather forecasting, which depend on high-bandwidth, low-latency interconnects. For example, the Cray CS-Storm (which the Swiss National Supercomputing Centre will be using) supports "QDR or FDR InfiniBand with Mellanox ConnectX®-3/Connect-IB, or Intel True Scale host channel adapters" (quoting http://www.cray.com/sites/default/files/resources/CrayCS-Sto... ).

To prevent congestion, the supercomputer network topology might even be wired so each pair of neighboring nodes has a dedicated connection.



"Essentially, all models are wrong, but some are useful."

--- Box, George E. P.; Norman R. Draper (1987). Empirical Model-Building and Response Surfaces, p. 424, Wiley. ISBN 0471810339.


Aren't models "wrong" by definition? That's why they're models.


> The European model has a more powerful computer

I seem to remember a story on Slashdot ~2 years ago(?) about the US upgrading the supercomputer for its forecasts, despite it already being 2x more powerful than the European one while still being less accurate.


That's because while we're upgrading the hardware and software, for some reason government restrictions force the new models to be tweaked until they give the same output as the old models. Apparently someone feels that similar results are more "reliable" and doesn't see that different results might mean BETTER, more accurate results than the old model.


Funny anecdote: where I work, one of our departments had a model we used to predict the working characteristics of our most popular product. Over time, we noticed that real-world data deviated from our calculations, so the model was regularly adjusted to bring the predictions in line.

Recently we had to model a new system, and none of our numbers looked remotely right. When we looked at the code, the whole program had essentially been turned into a lookup table that was completely useless at actually 'predicting' anything we hadn't seen before. Since then, the system has been completely overhauled, and it now much more accurately predicts general characteristics, even if it appears less precise than before.

My point's kind of gotten away from me, but I guess it's just easy to see how people can fall into the trap of leaning on perceived reliability and using it to avoid making radical changes.
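
The trap generalises: keep adjusting a model until it reproduces past data exactly and it degenerates into a lookup table. A contrived numpy sketch (hypothetical data) of how an over-adjusted fit nails the points it has seen and fails on ones it hasn't:

    import numpy as np

    rng = np.random.default_rng(1)
    x = np.linspace(0, 1, 10)
    y = 2 * x + rng.normal(0, 0.05, x.size)   # underlying trend: y ~ 2x

    lookup = np.polyfit(x, y, 9)   # degree 9: memorises all 10 points
    trend  = np.polyfit(x, y, 1)   # degree 1: captures the trend

    x_new = 1.5                    # a case we haven't seen before
    print(np.polyval(lookup, x_new))   # typically far from the truth
    print(np.polyval(trend, x_new))    # ~3.0, close to the real trend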


Did they call their shot in advance, or is this another case of arbitrarily picking the best model by hindcasting?


As the quotes in the article indicate, there was an active debate over whether the European model should be trusted more, well before the outcome became clear.


The last line was a bit of a bummer, but it's in line with the US's non-military spending habits.


US military spending is nearing a 70-year low as a percentage of GDP. It has fallen by about $150 billion since 2009, and by 2019 it's projected to hit about 3% of GDP. By comparison, France and the UK are around 2%. Military spending is not the primary fiscal problem facing the US right now.

Entitlements are the priority when it comes to US spending: they take up 60%+ of the federal budget. When you throw in state and local budgets (which combined are nearly the size of federal spending), military spending is a modest fraction of overall US spending.


There are many ways to characterize numbers, but in the 2015 budget the US spends over $600 billion on the military, an enormous amount. The nation spends amounts of similar magnitude on Medicare, Social Security, etc.

I'm not sure grouping line items into arbitrary categories like "entitlements" is meaningful for analysis (it's more for ideological contests). The real questions are: what do we get for each investment, and is there a better use for that money?


Or, to put it another way: US GDP is over 6 times that of the UK or France, so a 1-percentage-point difference in GDP share amounts to spending almost 10 times as much money (roughly 6 x 3% versus 1 x 2%, about a 9x difference in dollars). By comparison, China, the #2 spender in dollars, spends a smaller percentage of its GDP than either France or the UK. Further, the countries that spend the largest percentages of their GDP on the military tend to be countries with low stability, high chances of conflict with neighboring countries, and a heavy reliance on outside involvement to maintain some semblance of stability.

Entitlements, by definition, are paid for by the recipients in some way, and are often capped by how much the individual paid into the system. At times when entitlements are especially needed, these limits are sometimes waived or modified, which is one source of entitlement problems. Another arises when there is less need for those entitlements and government is attracted by the large amounts of money flowing into them.

The US (and much of the West) happens to be in a long period of great need, which followed a very long period of very little need. So most of the money was stolen to pay the bills for everything else in the 90s and the early years of the wars in Iraq and Afghanistan; then anything left in the markets was destroyed by the economic situations that followed; and eventually the entitlement systems will no longer be able to meet the demands placed upon them.

Add the Baby Boomers to the Social Security system, with a heavier reliance on it for retirement income (thanks to the near elimination of corporate pensions once companies decided the massive amounts of cash on their books, waiting for Baby Boomers to start collecting, looked bad for their bottom line), and I'm sure the outlook is just wonderful for my chances of ever collecting the money I've put into any number of entitlements, unless I find myself in the most miserable situation of my life before the government finds a way to cut every last remnant of the safety net.


> There are many ways to characterize numbers, but in the 2015 budget the US spends over $600 billion on the military, an enormous amount. The nation spends amounts of similar magnitude on Medicare, Social Security, etc.

As Krugman described the US government, it is an insurance company with an army attached. The executive branch has very little discretion in how the money is spent.


Depending on how you look at it, any part of a government's budget can be defined as an entitlement (do you feel entitled to protection from invasion?). It seems to be just a word for government spending the speaker does not like.



