
CPU manufacturers have more chip real estate than they know what to do with: the processor cores must be kept physically small to keep them fast. Multicore has predictably hit the software scaling wall at 2-4 cores[1][2]. Caches also have diminishing returns (every doubling of cache size buys roughly a 5-10% performance boost). A rough illustration of that cache claim is sketched below.
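A back-of-the-envelope sketch of the "doubling the cache buys ~5-10%" claim, assuming the empirical square-root rule (miss rate scales roughly with 1/sqrt(cache size)). The baseline MPKI, miss penalty, and base CPI below are illustrative assumptions, not measurements of any real chip:

    # Sketch: diminishing returns from doubling the last-level cache,
    # assuming miss rate ~ 1/sqrt(cache size). Numbers are illustrative.

    def cpi(cache_mb, base_mpki=2.0, base_cpi=1.0, miss_penalty=200):
        """Cycles per instruction for a last-level cache of `cache_mb` MiB."""
        mpki = base_mpki * (1.0 / cache_mb) ** 0.5   # misses per 1000 instructions
        return base_cpi + (mpki / 1000.0) * miss_penalty

    prev = None
    for size in (1, 2, 4, 8, 16):
        c = cpi(size)
        note = f"+{(prev / c - 1) * 100:.1f}% vs. {size // 2} MiB" if prev else "baseline"
        print(f"{size:2d} MiB LLC: CPI {c:.3f}  ({note})")
        prev = c

Under these assumed numbers, each doubling yields roughly 9%, 7%, 5%, 4% — diminishing returns in line with the figure above.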

Intel has been adding features left and right where they can, and apparently the on-chip memory controller is still simple enough that more features can be added to it without a performance hit. This is helped by memory being glacially slow compared to what happens inside the CPU.

The biggest slice of Intel's x86 profits comes from data centers (and that slice of the pie is growing), and a major bottleneck for server workloads moving to "the cloud" is the lack of sound technical means to keep your data safe.

[1] Servers can use more cores, but they can also ditch the on-chip GPU, which eats 4-8 cores' worth of area in the latest designs.

[2] Even in games, despite the obsession with performance, heroic optimization efforts, and pressure to squeeze the most out of the ~8 cores in consoles: https://www.rockpapershotgun.com/2015/03/05/quadcore-gaming/




Interesting that they haven't been able to conjure up a desktop or mobile CPU which supports ECC RAM.


Most modern AMD processors support ECC. It's a purely artificial limitation: Intel wants businesses to buy more expensive processors from the Xeon line.


> CPU manufacturers have too much chip real estate to know what to do with:

Agreed. Ultimately, if Intel continues down this 'closed IP' route, their processor monopoly is going to be replaced by generic, dynamically programmable logic (FPGA-like processors).



