
> uses barely less CPU% in a 2020 computer than in a 2011 one and it is extremely frustrating.

But if that's the case, what's the point of all that extra complexity in the CPU if, in the end, the benefits seem to be minuscule?




> But if that's the case, what's the point of all that extra complexity in the CPU if, in the end, the benefits seem to be minuscule?

They aren't minuscule. 10 years ago, single-thread performance was achieved by upping the core frequency. That trend died when it hit physical limitations, and we've been stuck at 4-5 GHz ever since. In order to get more performance, all these tricks (caches, speculative execution, data parallelism, etc.) had to be employed in addition to more cores.

In audio processing this means that a modern laptop can process more effects and tracks than a beefy workstation could in 2011. Sure, each single effect still taxes the CPU pretty badly; but in contrast to 2011 you can easily run dozens in parallel without breaking a sweat or endlessly fiddling with buffer sizes to keep latency and processing capability in balance.


All this "extra complexity" (branch prediction, pipelines, multiple levels of cache, speculative execution) has mostly been there since the late 80s and 90s of CPU design; the Pentium Pro already had all of it to some degree. The last decade was in large part about more SIMD and more cores, and it's been a real PITA when your workload does not benefit much from them because the state at t depends on the state at t-1. But the improvement of these features is definitely not negligible; when the Spectre / Meltdown / ... mitigations first landed, the performance loss was in the double digits (%) in some cases.


This isn't really true. Micro-op caches are fairly new, branch predictors are massively improved, caches have gone from 1 level to 3, and lots of operations have gotten way more efficient (64-bit division, for example, has gone from around 60 cycles to 30 between 2012 and now). Out-of-order execution has also massively improved, which allows for major speed increases.


L3 caches have been in consumer Intel CPUs since 2008, and uop caches were already there in the Pentium 4 (released in 2000, almost 22 years ago :-)). Hardly new. Of course there are interesting iterative improvements, but nothing earth-shattering.


You might note that neither 2008 nor 2000 is the 1980s, which is the era you previously referred to.


I double-checked, and L3 was actually also there in the P4 in 2003; and the P4 itself had been in the works since 1998. For me that's closer to the late 90s (which is also what I referred to) than to today; that's almost as many years as there were between the two world wars...




