Perhaps Moore's law is partly a consequence of our improving ability to design algorithms. Hardware design draws on similar resources and concepts: circuit routing, simulation, benchmarking, design re-use, shared code bases ( http://en.wikipedia.org/wiki/Hardware_description_language )...
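To make the parallel concrete, here is a toy Python sketch (not a real HDL like Verilog or VHDL, and all names here are just for illustration) of how the same software ideas - small reusable components, composition, simulation, and testing - show up in hardware design:

    # Toy gate-level "simulation" of an adder circuit, built from
    # reusable components, then checked against a reference model
    # the way a testbench would.

    def half_adder(a: int, b: int) -> tuple[int, int]:
        """Return (sum, carry) for two 1-bit inputs."""
        return a ^ b, a & b

    def full_adder(a: int, b: int, cin: int) -> tuple[int, int]:
        """Compose two half-adders and an OR gate (design re-use)."""
        s1, c1 = half_adder(a, b)
        s2, c2 = half_adder(s1, cin)
        return s2, c1 | c2

    def ripple_carry_add(x: int, y: int, width: int = 8) -> int:
        """Simulate an n-bit ripple-carry adder bit by bit."""
        carry, result = 0, 0
        for i in range(width):
            s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
            result |= s << i
        return result & ((1 << width) - 1)

    # Verification step: exhaustively compare the simulated circuit
    # against ordinary integer addition.
    assert all(
        ripple_carry_add(x, y) == (x + y) % 256
        for x in range(256) for y in range(256)
    )

The point isn't the adder itself, but that the workflow (compose, simulate, verify, reuse) is the same one software people use every day.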
This is a gratifying idea to think about. It implies that the "underoptimized software" woes are not really a matter of faster hardware causing laziness - they're a result of combined hardware and algorithmic advances making problems _possible_ to solve that would previously have been too hard. That those problems could also be solved with tighter code is ultimately less important.
Self-driving cars offer, perhaps, a counterexample:
It was a fundamental problem. In the mid-'90s, microchips weren't fast enough to process all the potential options, especially not at 55 miles per hour. In 1996, Dickmanns proclaimed that real-world autonomous driving could "only be realized with the increase in computer performance … With Moore's law still valid, this means a time period of more than one decade." He was right, and everyone knew it. Research funding dried up, programs shut down, and autonomous driving receded back to the future.
It seems this was something of a self-fulfilling prophecy; we'll never know how much closer we might be to self-driving cars today if the funding had not dried up.
We have self-driving cars right now; see, for example, DARPA's Urban Challenge. We only need to increase reliability and decrease cost to make them suitable for the market. There are also some legal problems (e.g. who is responsible if the robot car kills a kitten), but technologically, self-driving cars are a reality.
That is perhaps a bad example because of its reliance on other kinds of technology, like LIDAR. Innovation in sensors is a confounding variable, in addition to algorithms and processor speeds.
Which is not surprising.