Hacker News

That’s exactly what people said about AMD before. Then Bulldozer and Threadripper showed up.



Right, and Zen (I'm assuming you mean Zen) was great, but it succeeded only because Intel did nothing for years and put themselves in a position to fail. If Intel had tried to improve their products instead of firing their senior engineers and spending the R&D money on stock buybacks, it wouldn't have worked.

We can see this in action: RDNA has delivered Zen-level improvements (actually, more) to AMD's GPUs for several years and generations now. It's been a great turnaround technically, but it hasn't helped, because Nvidia isn't resting on its laurels and has posted bigger improvements every generation. That's what makes the situation difficult. There's nothing AMD can do to catch up unless Nvidia starts making mistakes.


They already are. The artificial limits on VRAM have significantly crippled pretty much the entire generation (on the consumer side).

On the AI side, ROCm is rapidly catching up, though it's nowhere near parity, and I suspect Apple may take the consumer performance lead in this area for a while.

Intel is… trying. They tried to enter as the value supplier but also wanted too much for what they were selling. The software stack has improved dramatically, however, and Battlemage might make them a true value offering. With any luck, they'll hold AMD's and Nvidia's feet to the fire and the consumer will win.

Because the entire 4xxx generation has been an incredible disappointment, and AMD's pricing is still whack. Though the 7800 XT is the first reasonably priced card to come out since the 1080, and it has enough VRAM to have decent staying power and handle the average model.


I keep hearing conflicting accounts of ROCm. Either it is deprecated or abandoned, or it is going to be (maybe, someday) the thing that lets AMD compete with CUDA. Yet the current hardware to buy if you're training LLMs or running diffusion-based models is Nvidia hardware with CUDA cores or tensor hardware. Very little of the LLM software out in the wild runs on anything other than CUDA, though some is now targeting Metal (Apple Silicon).

Is ROCm abandonware? Is it AMD's platform to compete? I'm rooting for AMD, and I'm buying their CPUs, but I'm pairing them with Nvidia GPUs for ML work.
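For what it's worth, the software gap is narrower than it looks at the API level: ROCm builds of PyTorch expose AMD GPUs through the same torch.cuda interface (HIP underneath), so a lot of "CUDA-only" code runs unmodified; the real gaps are in kernels and libraries. A minimal sketch of backend selection (the helper name pick_device is made up for illustration, and it assumes nothing beyond an optional PyTorch install):

```python
def pick_device():
    """Return the best available accelerator backend name.

    On ROCm builds of PyTorch, AMD GPUs answer to torch.cuda.is_available(),
    which is why most CUDA-targeting code runs on them without changes.
    """
    try:
        import torch
    except ImportError:
        return "cpu"  # no PyTorch installed; fall back to CPU
    if torch.cuda.is_available():
        # True on both Nvidia (CUDA) and AMD (ROCm/HIP) builds.
        return "cuda"
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        return "mps"  # Apple Silicon (Metal)
    return "cpu"

print(pick_device())
```

On an AMD box with a ROCm build this prints "cuda", which is exactly the compatibility shim the conflicting accounts tend to gloss over.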


They released an SDK with some Windows support a month ago. As far as I understand, it's still being developed. A bit slow, but it's not abandoned.


Bulldozer was the thing that almost killed AMD.



