Hacker News
AI accurately predicted 70% of earthquakes a week in advance (openaccessgovernment.org)
40 points by geox 11 months ago | 28 comments



"The AI accurately predicted 70% of earthquakes a week in advance, with 14 forecasts coming true… However, it issued eight false warnings and missed one earthquake." Beyond the fact that n is very small, I don’t think 14/23 is 70%…


My guess:

There were 20 real earthquakes and the AI correctly predicted 14 of them: 20 × 70/100 = 14.

Of course the problem with that is that it’s always a balance between precision and recall (or specificity vs sensitivity).

Here they show 70% recall which sounds good, but the precision is 14/(14+8)=64% which is decidedly less impressive.

Those terms are defined in Wikipedia here: https://en.m.wikipedia.org/wiki/Precision_and_recall
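
Under that guess the two numbers work out like this (a quick sanity check in plain Python, not the paper's actual evaluation code):

```python
# The guess above: 20 real earthquakes, 14 correctly warned, 8 false warnings
tp, fp, real_quakes = 14, 8, 20

recall = tp / real_quakes   # fraction of real quakes that got a warning
precision = tp / (tp + fp)  # fraction of warnings that were real

print(f"recall    = {recall:.0%}")     # 70%
print(f"precision = {precision:.0%}")  # 64%
```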


Everyone knows that AI gets at least three mulligans.


"A week in advance" is overstating it. It forecast the week of the earthquake, counting anything within a 200-mile radius as a hit. That is probably exciting for the researchers, but of no current practical value.


70% of the time it works all the time


"Neither the USGS nor any other scientists have ever predicted a major earthquake. We do not know how, and we do not expect to know how any time in the foreseeable future. USGS scientists can only calculate the probability that a significant earthquake will occur in a specific area within a certain number of years."

https://www.usgs.gov/faqs/can-you-predict-earthquakes


How exactly does 14 "correct" predictions, 8 false positives, and 1 false negative equate to 70% accuracy?


The AI told them that % and nobody questioned it :)


Easy. 14÷(14+1+8) = 0.608695652 Oops.

Maybe they didn't include the missed prediction.

14÷(14+8) = 0.636363636

Maybe they rounded up. A lot.


It's a simple matter of cascading your rounding operations. 0.636 rounds up to 0.64, of course. That four is almost a 5, so round that up to 0.65. Everybody knows it's fair to round up from a 5, so that's how they got 0.7, but these amateurs forgot to subsequently round 0.7 up to 1. Shameful, really.


The article is so information-lite that I wouldn't rule out this possibility: of 20 earthquakes in the trial, it made predictions for 14 that were close enough to count as successes, made predictions for 5 that were substantially off in location and/or magnitude but not complete misses, missed 1 entirely, and predicted 8 more earthquakes that never occurred.


~61% is what that sounds like.

The F1 score actually comes out higher (~76%), though, since the recall is so high...


I think it sounds more like 93% (14 predicted out of the 14 + 1 = 15 total earthquakes).

False positives aren't failures to predict an earthquake, they're a different error with different consequences.

Of course, I'm not sure why I would trust that the absolute numbers reported are any more accurate than the percentages reported...


Sounds good. In that case, I just made a 100% accurate model by always predicting that every location on Earth is experiencing an earthquake all the time.


Should be 14/15 = 93.3%

Because: “of earthquakes”


The article seems to conflate accuracy and recall:

> AI-powered earthquake forecasting scores 70% accuracy

> The AI accurately predicted 70% of earthquakes a week in advance, with 14 forecasts coming true within 200 miles of their estimated locations and matching their anticipated magnitudes. However, it issued eight false warnings and missed one earthquake.

The precision was 14/(14+8) = 64% and the recall was 14/(14+1) = 93%, which makes the F1 score roughly 0.76. The accuracy was 14/(14+8+1) = 61%. I'm not sure where they got the 70% from, perhaps a different F metric. In any event, it's clear the author is confused about the terminology.
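
For anyone checking the arithmetic, here's a plain-Python sketch of the standard metric definitions applied to the article's counts:

```python
tp, fp, fn = 14, 8, 1  # hits, false warnings, missed quake

precision = tp / (tp + fp)                          # 14/22
recall = tp / (tp + fn)                             # 14/15
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
accuracy = tp / (tp + fp + fn)                      # no true negatives reported

print(f"precision={precision:.3f} recall={recall:.3f} f1={f1:.3f} accuracy={accuracy:.3f}")
# precision=0.636 recall=0.933 f1=0.757 accuracy=0.609
```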


Where is the link to the original paper? Was this article written by AI?


How does this statistic compare to experts?


There isn't even a number. It's been generally regarded as just flatly impossible using current technologies. Even a partial step towards reliable earthquake prediction would be a massive advancement.

See https://en.wikipedia.org/wiki/Earthquake_prediction#Difficul...


Experts don't make predictions like this.


> The AI accurately predicted 70% of earthquakes a week in advance, with 14 forecasts coming true within 200 miles of their estimated locations and matching their anticipated magnitudes. However, it issued eight false warnings and missed one earthquake.

How does this work out to 70%? It made 23 predictions, 14 were right and 9 were wrong. 14 of 23 is 60.8%

I also wonder what counts as a success. If it predicts an earthquake and one occurs, but too weak to hurt anybody, does that count as a success or a false warning? What do you do with an earthquake warning anyway, pre-position yourself under a door frame? I guess humanitarian organizations could use this to pre-stage emergency supplies, but if false warnings are common and true warnings don't seem actionable, most people will probably ignore it.


I had a whole comment written questioning the usefulness of a week-ahead prediction. But a prediction like that would save thousands of lives in my area. It is enough time to evacuate unreinforced masonry buildings. It is enough time to evacuate the stretches of coast swept by tsunami. It would require plans to deal with evacuees.

I still wonder about the accuracy: is it for a specific time window? It would also need to be reliable enough to justify evacuating thousands of people. Finally, how well does it work on different kinds of earthquakes?


If it technically predicts earthquakes but can't tell you which of the earthquakes will be strong enough to warrant evacuation, then will people heed the warning for the big one after they've had a few evacuation notices for imperceptible earthquakes? It's a "boy who cried wolf" problem.

The vast majority of earthquakes are very weak, so predicting the time and area of earthquakes seems worthless if they can't accurately predict the magnitude as well, with very few false positives.


actual paper? https://pubs.geoscienceworld.org/ssa/bssa/article-abstract/d...

the abstract:

> The proposed algorithm is trained using the available data from 2016 to 2020 and evaluated using real‐time data during 2021. As a result, the testing accuracy reaches 70%, whereas the precision, recall, and F1‐score are 63.63%, 93.33%, and 75.66%, respectively.
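
One speculative way to reconcile that 70% accuracy with the other three numbers (the abstract doesn't say this; it's just arithmetic): if the evaluation also counted true-negative periods, i.e. windows where no quake was predicted and none occurred, then exactly 7 of them would make everything consistent:

```python
tp, fp, fn = 14, 8, 1  # counts reported in the article
tn = 7                 # hypothesized true negatives (not stated anywhere)

# Standard accuracy including true negatives
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(accuracy)  # 21/30 = 0.7, matching the abstract exactly
```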


from UT's press office, https://news.utexas.edu/2023/10/05/ai-driven-earthquake-fore...

> The researchers said that their method had succeeded by following a relatively simple machine learning approach. The AI was given a set of statistical features based on the team’s knowledge of earthquake physics, then told to train itself on a five-year database of seismic recordings.

One common mistake with time-series forecasting is leaking the evaluation set into training. It would be relatively easy to accidentally bake the future into a DNN.
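
The standard guard against that, which the abstract's setup suggests they used, is a strictly chronological split. A minimal sketch (the yearly-bucket structure is made up for illustration):

```python
# Hypothetical yearly buckets of seismic data
years = list(range(2016, 2022))  # 2016..2021

# Chronological split: train on 2016-2020, evaluate only on data that
# comes strictly after the training window, so nothing the model sees
# during training can encode the test period.
train_years = [y for y in years if y <= 2020]
test_years = [y for y in years if y > 2020]

assert max(train_years) < min(test_years)  # no future data leaks into training
```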


> The AI developed by the University of Texas took first place among 600 other designs in an international competition in China

600 dice rolls. I don't think the null hypothesis can be rejected.
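
The multiple-comparisons worry is easy to make concrete. Assuming, purely for illustration, that each entry has a 1% chance of posting a score this good by luck:

```python
n_entries, p_lucky = 600, 0.01  # the 1% per-entry luck rate is an assumed figure

# Chance that at least one of 600 independent entries gets lucky
p_at_least_one = 1 - (1 - p_lucky) ** n_entries
print(f"{p_at_least_one:.3f}")  # 0.998
```

So a single winner out of 600 tries tells you very little on its own.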


Here's a program that will predict 100% of earthquakes a full day in advance:

    while True:
        print("there will be an earthquake in 24 hrs")
I really wish publishers would be a bit more careful with how they frame numerical claims.


How does this compare to equivalent human guesses at predicting earthquakes?



