I think AMD has always had the talent, but made a bad bet with Bulldozer's architecture of two cores sharing a front-end and FPU[1]. I also think desktop software (specifically games) moved faster than some anticipated when it came to taking advantage of multiple cores, causing those chips to seem resource-starved.

[1] At least that's how I think they work. One decoder and FPU per pair of integer cores, right?
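
A rough, untested sketch of how you could probe that shared FPU yourself: pin two FP-heavy threads either to the two "cores" of one module or to cores in different modules and compare wall-clock time. The CPU numbers below (0/1 as module siblings, 2 in another module) are an assumption; check /sys/devices/system/cpu/cpu*/topology on the actual box.

    /* gcc -O2 -pthread fpu_contention.c -o fpu_contention */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>
    #include <time.h>

    #define ITERS 200000000UL

    static void *fp_worker(void *arg) {
        int cpu = *(int *)arg;
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(cpu, &set);
        pthread_setaffinity_np(pthread_self(), sizeof(set), &set);

        volatile double x = 1.000000001;       /* volatile keeps the loop from being optimized out */
        for (unsigned long i = 0; i < ITERS; i++)
            x = x * 1.000000001 + 0.000000001; /* one FP multiply-add per iteration */
        return NULL;
    }

    static double run_pair(int cpu_a, int cpu_b) {
        pthread_t ta, tb;
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        pthread_create(&ta, NULL, fp_worker, &cpu_a);
        pthread_create(&tb, NULL, fp_worker, &cpu_b);
        pthread_join(ta, NULL);
        pthread_join(tb, NULL);
        clock_gettime(CLOCK_MONOTONIC, &t1);
        return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    }

    int main(void) {
        /* assumed layout: CPUs 0 and 1 share a module, CPU 2 sits in another module */
        printf("same module     : %.2f s\n", run_pair(0, 1));
        printf("different module: %.2f s\n", run_pair(0, 2));
        return 0;
    }

If the same-module run is markedly slower for FP work but not for an equivalent pure-integer loop, the shared FPU is the bottleneck.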




Bulldozer's failure was really one of market positioning. It would have been much better received if the CMT modules had been presented as a single, dual-threaded core, positioned against an Intel HTed core, rather than as two separate cores.


While I generally agree that CMT threads are not a real "core" (and have never been presented as such by any other company that has explored CMT), the real problem with Bulldozer was a frequency-optimized, deeply pipelined design. AMD literally repeated the exact mistake Intel made with NetBurst. It would never have been good no matter how you positioned it.
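
The usual back-of-the-envelope argument against that kind of design (illustrative numbers only, not measured Bulldozer or NetBurst figures): a deeper pipeline buys clock speed but pays a longer branch-misprediction flush, and the clock gain often doesn't cover the extra stalls.

    /* illustrative arithmetic only; every input here is assumed, not measured */
    #include <stdio.h>

    int main(void) {
        double penalty_short = 12, ghz_short = 3.0;  /* shorter pipeline, lower clock */
        double penalty_deep  = 30, ghz_deep  = 3.6;  /* deeper pipeline, ~20% higher clock */

        double base_cpi   = 1.0;    /* ideal cycles per instruction */
        double branches   = 0.20;   /* fraction of instructions that branch */
        double mispredict = 0.08;   /* branch misprediction rate */

        double cpi_short = base_cpi + branches * mispredict * penalty_short;
        double cpi_deep  = base_cpi + branches * mispredict * penalty_deep;

        /* time per instruction in nanoseconds = CPI / frequency in GHz */
        printf("short pipeline: %.3f ns/insn\n", cpi_short / ghz_short);
        printf("deep pipeline : %.3f ns/insn\n", cpi_deep  / ghz_deep);
        return 0;
    }

With those made-up numbers the deep design ends up slightly slower per instruction despite a 20% clock advantage, which is the NetBurst trap in miniature.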


And Apple, with their low-clocked, super-brainiac cell phone chips, has been achieving surprisingly good performance for their power envelope.


Even if Bulldozer had been marketed as quad-core, it was still losing to Intel quad-cores, and it had a larger die.


Never mind the power consumption. I had an FX-9590, which was a 4.7 GHz 8-core at 220 W.

220 watts!!!


And my R7 1700 pulls up to 230 W at 4.1 GHz all-core. Some things never change, except vastly more performance per watt.


I'm typing this on a Bulldozer system; it works fine for compiling software, which is what I mostly do.



