The Node Is Nonsense: Better ways to measure progress than Moore's law [pdf] (gwern.net)
71 points by saadalem on Aug 29, 2020 | 7 comments



There’s plenty of room at the Top: What will drive computer performance after Moore’s law? (2020)

PDF: http://gaznevada.iq.usp.br/wp-content/uploads/2020/06/thomps...


I vaguely recall reading somewhere that, for some nontrivial software (I forget what exactly), the speedup from hardware advances between the Apple II and ~2000 was roughly equivalent to the speedup from running the most modern iteration of the algorithms involved on the original machine.

I've terribly butchered this since the details completely escape me, but you get what I mean. It feels like it could be true, which is... neat. This sort of thing certainly happened several times with gaming consoles, where developers manage to squeeze every ounce of performance from the hardware at the very end of its generation.
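
For a rough sense of scale (my own back-of-envelope numbers, not from the parent or the paper): compounding a 2x hardware improvement every 1.5-2 years over the ~23 years between the Apple II (1977) and 2000 lands somewhere in the thousands to tens of thousands, the same ballpark as the algorithmic figures discussed in the replies.

    /* Back-of-envelope sketch: how much speedup pure Moore's-law-style
     * doubling would compound to between 1977 and 2000, under an assumed
     * doubling period of 1.5 or 2 years. */
    #include <stdio.h>
    #include <math.h>

    int main(void) {
        double years = 2000 - 1977;              /* ~23 years */
        double periods[] = {1.5, 2.0};           /* assumed doubling times, in years */
        for (int i = 0; i < 2; i++) {
            double doublings = years / periods[i];
            printf("doubling every %.1f years: 2^%.1f = ~%.0fx\n",
                   periods[i], doublings, pow(2.0, doublings));
        }
        return 0;
    }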


Trends in algorithmic progress https://aiimpacts.org/trends-in-algorithmic-progress/ is the best work on this topic I am aware of.

2000x is believable, but that doesn't mean the latest algorithm would run on an Apple II. Algorithmic speedup is often hardware-relative: for example, better cache locality matters much less on older hardware.
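
As a concrete illustration of a hardware-relative speedup (a hypothetical sketch, not anything from the linked paper): the two loops below do identical arithmetic, but on a modern cached CPU the row-major one is typically several times faster, while on a cacheless machine like the Apple II they would perform essentially the same.

    /* Row-major vs. column-major traversal of a large array: same work,
     * very different cache behaviour on modern hardware. */
    #include <stdio.h>

    #define N 4096

    int main(void) {
        static int a[N][N];                      /* ~64 MB, zero-initialized */
        long sum_row = 0, sum_col = 0;

        /* Row-major: walks memory sequentially, cache- and prefetch-friendly. */
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                sum_row += a[i][j];

        /* Column-major: jumps N*sizeof(int) bytes per access, so on a modern
         * machine nearly every access misses the cache. */
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                sum_col += a[i][j];

        printf("%ld %ld\n", sum_row, sum_col);
        return 0;
    }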


I find that the main takeaway is that the time/space/complexity tradeoff space becomes immense with even modest Moore's-law scaling improvements.

On an actual back-in-the-day Apple II at the time of release, you probably did not get a configuration that maxed out RAM and storage, because it just cost too darn much. But as you got into the '80s, prices came down, and the Apple could expand to fit, with more memory and multiple disks, without architectural changes.

As such, a lot of early microcomputer software assumed RAM and storage starvation, and the algorithms had to be slow and simple to make the most of memory (a sketch of that tradeoff follows below). But Apple developers could push the machine hard when they ignored the market forces that demanded downsizing; it was a solid workstation platform. When the 16-bit micros came around, the baseline memory configuration expanded remarkably, so completely different algorithms became feasible for all software, which made for a substantial generation gap.

By the '90s, the "free lunch" scenario was in full swing and caching became normalized, so everything was about faster runtimes; but at system scale, with layers of cache, it's often hard to pinpoint the true bottleneck.
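
A minimal sketch of that time/space tradeoff (hypothetical code, my own choice of example): counting set bits either through a precomputed 64 KB lookup table, which is fast per query but uses more RAM than a stock Apple II shipped with, or through bit-by-bit recomputation, which needs almost no memory but many more cycles per call.

    /* Time/space tradeoff: 64 KB lookup table vs. per-call recomputation. */
    #include <stdint.h>
    #include <stdio.h>

    static uint8_t table[1 << 16];               /* 64 KB of precomputed answers */

    static void build_table(void) {
        for (uint32_t v = 0; v < (1u << 16); v++) {
            uint8_t bits = 0;
            for (uint32_t x = v; x; x >>= 1)
                bits += x & 1;
            table[v] = bits;
        }
    }

    /* Fast path: two table reads per query, at the cost of 64 KB of state. */
    static int popcount_table(uint32_t v) {
        return table[v & 0xFFFF] + table[v >> 16];
    }

    /* Slow path: no table, up to 32 loop iterations per query. */
    static int popcount_loop(uint32_t v) {
        int bits = 0;
        for (; v; v >>= 1)
            bits += v & 1;
        return bits;
    }

    int main(void) {
        build_table();
        uint32_t x = 0xDEADBEEF;
        printf("table: %d  loop: %d\n", popcount_table(x), popcount_loop(x));
        return 0;
    }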


This metric is predicated on the assumption that chips must be manufactured by printing transistors on 2D silicon. I'm sure that once 3D silicon printing becomes possible we will have to generalize Moore's law to 3D.


I sometimes wonder if it is just me.

No one, and no site I have read, whether tabloid-style (like Wccftech) or professional, ever claims that the end of Moore's law equals the end of transistor improvement. (It just means we won't be getting the improvements as quickly or as large.)

And yet somehow, somewhere, some marketing department reckons the masses think Moore's law means transistor improvement. So Moore's law is now being "redefined" as transistor improvement, and therefore it is not dead, as shown at the recent Intel conference.

Which sort of makes any discussion of the topic pointless.


Is there an HTML version? The PDF version is impossible to read on a mobile device.




