
That threat has been looming over me ever since I had my Commodore 64 in the 1980s. I'm sure it will hit some theoretical limit eventually, but I wouldn't go as far as predicting that for next year.

Of course, we have been approaching the nanometer scale for a while now: my 2012 Mac managed about 70% of the build speed of my 2017 model, which in turn is about on par with the cheap Linux laptop I picked up a few months ago. GPUs are one area where things are still improving rapidly, because you can increase performance simply by adding more cores.

My cheap laptop is impressive in the sense that it does what it does without thermal throttling or even heating up much. Not bad for a 700 Euro i5 laptop. My 2017 MacBook Pro struggled to keep things cool.

The next leap is going to be a much larger number of CPU cores. We've been stuck at roughly 4-16 for the last decade, and there's no reason for that other than legacy compilers, languages, and CPU architectures: leveraging concurrency is just hard. GPUs, meanwhile, kept increasing their core counts and have been doubling performance for the same job far more reliably.
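To illustrate why it's hard: even a trivially parallel job like summing an array takes explicit chunking, goroutines, and synchronization when you write it by hand on a CPU. A rough sketch in Go (just an illustration I made up, not from any particular codebase):

    package main

    import (
        "fmt"
        "runtime"
        "sync"
    )

    // parallelSum splits the slice into one chunk per CPU core and
    // sums the chunks concurrently. All of the coordination (chunking,
    // goroutine launch, synchronization) has to be done manually.
    func parallelSum(data []int) int {
        workers := runtime.NumCPU()
        chunk := (len(data) + workers - 1) / workers // ceil division
        partial := make([]int, workers)              // one slot per worker, no shared writes

        var wg sync.WaitGroup
        for w := 0; w < workers; w++ {
            lo := w * chunk
            if lo >= len(data) {
                break
            }
            hi := lo + chunk
            if hi > len(data) {
                hi = len(data)
            }
            wg.Add(1)
            go func(w, lo, hi int) {
                defer wg.Done()
                for _, v := range data[lo:hi] {
                    partial[w] += v
                }
            }(w, lo, hi)
        }
        wg.Wait()

        total := 0
        for _, p := range partial {
            total += p
        }
        return total
    }

    func main() {
        data := make([]int, 1_000_000)
        for i := range data {
            data[i] = i
        }
        fmt.Println(parallelSum(data)) // 499999500000
    }

Libraries like OpenMP or Rust's rayon hide most of this boilerplate, but the point stands: on CPUs you have to opt in to parallelism, while GPU programming models assume it from the start.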

All of this is, of course, amazingly quick compared to my trusty old C64.



