Fifteen or so years ago I read someone mentioning that the ratio of processor state transitions per bit of program state was growing almost exponentially.
Anecdote: Starting in the early eighties I began using CAD programs, and they were painfully slow. Yet around 1995 they suddenly weren't 'slow'. That was huge. Since then, improvements in processor speed and memory haven't provided nearly as much functional improvement.
What I think is we're in a world where current processor designs are overkill for much of the programming tasks people need done. What people want is not more speed but better efficiency. That points towards using excess silicon to implement task-specific coprocessors, not more unneeded GPU performance gains.
This leaves Intel in a bad spot, especially since everyone and their dog is loath to design in Intel parts if they can help it.
It was a bit of an eye-opener when I saw an old NT 4.0 server again. I think it had around 500 MHz and 128 MB of RAM, i.e. quite a beast for its time. Every click had an immediate on-screen result; it was a real pleasure to navigate and felt like a 90 FPS game. The same was true even for something like the iPhone 3GS on iOS 3. Nowadays processors are so fast that software doesn't need to be optimized much anymore, and this is what we got. The upside is that we can soon have true cross-platform GUI apps that are smooth everywhere, until the next big platform like AR comes along.