Statisticians think it can be gotten down to 9.51 seconds ( https://www.sciencedaily.com/releases/2009/08/090806080343.h... ), though, as with most things statistical, knowledge of the underlying sampling process (biology/physiology & physics in this case) may shed more light. { Similarly, knowledge (& tweaking!) of OS schedulers may reduce measurement noise more than pure statistics can in a software-timing project like https://github.com/c-blake/bu/blob/main/doc/tim.md .}
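For a sense of the flavor of that kind of "ultimate record" estimate (not the specific estimator in the linked paper), here is a minimal peaks-over-threshold sketch with Python/scipy: fit a generalized Pareto tail to the fastest times and, if the fitted shape is negative, read off a finite lower endpoint. The times are synthetic, drawn from a distribution whose true minimum is 9.40 s, purely to show the mechanics.

    import numpy as np
    from scipy.stats import genpareto

    rng = np.random.default_rng(0)
    true_min = 9.40                                  # known only because the data are synthetic
    times = true_min + 0.8 * rng.weibull(2.0, 5000)  # stand-in "race times", not real results

    u = np.quantile(times, 0.02)     # threshold: the fastest 2% of times
    exc = u - times[times < u]       # how far each fast time undercuts the threshold
    shape, _, scale = genpareto.fit(exc, floc=0)

    if shape < 0:                    # negative shape => the fitted tail has a finite endpoint
        est_min = u - scale / abs(shape)
        print(f"estimated ultimate time: {est_min:.2f} s (true synthetic minimum {true_min} s)")
    else:
        print("fitted tail suggests no finite lower endpoint")

The linked research uses a different endpoint estimator on real race data; the point here is only that the "ultimate record" falls out of a tail fit, with all the fragility that implies.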



That research was published before Bolt hacked another ~0.1 seconds off the record, cutting it from 9.69 to 9.58 in a single race.

I don’t think the research stands up.


They would certainly have to add that new data point to their analysis. :-)

I expect it moves the estimate down by less than the ~100 ms the record improved, though. That's how converging toward a true minimum tends to work (at least in the purely statistical modeling sense which, as I said before, is never quite as strong as a detailed micro-model).


The problem is, the stats ignore the evolving track surfaces and shoe tech.


Unsure how either of my comments could be read as disagreeing with your point - both mention physics/micro-models being stronger than pure stats - but maybe, despite your "The problem is" phrasing, you only meant to amplify, or only read quickly. That said, I can perhaps respond with something that amplifies & clarifies our shared skepticism of the pure-statistics approach relative to something "more detailed" that might someday somehow help someone.

There are probably a half dozen micro-model effects even non-experts could rattle off that have "trended" over the decades - from your shoes & surfaces to various aspects of population diet, early-in-life talent identification, more-optimized maturation conditions, and on & on. Statisticians call this a "non-stationary sampling process", meaning the independent & identically distributed (IID) assumption is at best a weak approximation and at worst totally misleading.
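As a concrete illustration of what "non-stationary" means operationally, a simple rank-correlation trend test on annual best times would flag it. The numbers below are invented (a small downward drift plus noise), purely to show the check:

    import numpy as np
    from scipy.stats import kendalltau

    rng = np.random.default_rng(1)
    years = np.arange(1968, 2024)
    # invented annual bests: slow downward drift plus noise, NOT real data
    annual_best = 10.2 - 0.005 * (years - 1968) + rng.normal(0, 0.05, years.size)

    tau, p = kendalltau(years, annual_best)
    print(f"Kendall tau = {tau:.2f}, p = {p:.2g}")
    # A clearly negative tau with a tiny p-value says "identically distributed over
    # the decades" is a poor description; any IID-based endpoint estimate inherits
    # that problem unless the trend is modeled or the sample window is restricted.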

Ways to measure how much evidence there is that IID / other distributional assumptions are failing do exist { such as some of the ones here: https://github.com/c-blake/fitl/blob/main/fitl/gof.nim and referenced at the bottom of https://github.com/c-blake/bu/blob/main/doc/edplot.md (at the stage where one plots / pools multiple data points into some kind of "sample" with a "distribution") }. Sadly, few test such assumptions (which are rarely truly comprehensive anyway), and even small departures from modeling assumptions may lead to relatively large errors in estimates. E.g., the Einmahl 2009/2010 research linked above states this as an assumption needed to apply its ideas, but then shows no test of that assumption on the data it used.
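Not the linked gof.nim tests, but just to show how cheap a check can be with stock scipy: fit a generalized Pareto tail (as in the earlier sketch) and then actually test the fitted model against the exceedances instead of assuming it. The exceedances here are stand-ins, not real data:

    import numpy as np
    from scipy.stats import genpareto, kstest

    rng = np.random.default_rng(2)
    exc = rng.beta(2.0, 5.0, 150)    # stand-in exceedances with a bounded tail, not real data
    c, loc, scale = genpareto.fit(exc, floc=0)
    stat, p = kstest(exc, genpareto.cdf, args=(c, loc, scale))
    print(f"KS stat {stat:.3f}, p ~ {p:.2f}")
    # Caveat: fitting the parameters to the same sample makes this p-value optimistic;
    # still, even a rough check like this is more than many applications bother with.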


How do you know Bolt isn't close to the maximum? He is definitely an outlier.



