
A bit surprised that they're using HBM2e, which is what Nvidia A100 (80GB) used back in 2020. But Intel is using 8 stacks here, so Gaudi 3 achieves comparable total bandwidth (3.7TB/s) to H100 (3.4TB/s) which uses 5 stacks of HBM3. Hopefully the older HBM has better supply - HBM3 is hard to get right now!
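
For a rough sense of the per-stack numbers, here's a back-of-the-envelope sketch in Python using the headline totals above (the exact figures depend on which HBM speed bins each vendor actually ships, so treat these as approximations):

    # Per-stack bandwidth implied by the quoted totals
    gaudi3_total_tbs, gaudi3_stacks = 3.7, 8   # Gaudi 3: 8 stacks of HBM2e
    h100_total_tbs, h100_stacks = 3.4, 5       # H100: 5 stacks of HBM3

    print(f"Gaudi 3: ~{gaudi3_total_tbs / gaudi3_stacks:.2f} TB/s per HBM2e stack")
    print(f"H100:    ~{h100_total_tbs / h100_stacks:.2f} TB/s per HBM3 stack")

So each HBM3 stack delivers roughly half again the bandwidth of an HBM2e stack (~0.68 vs ~0.46 TB/s), and Intel makes up the difference by simply adding three more stacks.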

The Gaudi 3 multi-chip package also looks interesting. I see 2 central compute dies, 8 HBM die stacks, and then 6 small dies interleaved between the HBM stacks - curious to know whether those are also functional, or just structural elements for mechanical support.




> A bit surprised that they're using HBM2e, which is what Nvidia A100 (80GB) used back in 2020.

This is one of the secret recipes of Intel. They can use older tech and push it a little further to catch/surpass current gen tech until current gen becomes easier/cheaper to produce/acquire/integrate.

They did this with their first quad-core processors by merging two dual-core dies (the Q6xxx series), and by shipping absurdly clocked single-core processors aimed at very niche market segments.

We hadn't seen it again until now, because they were asleep at the wheel and then knocked unconscious by AMD.


> This is one of the secret recipes of Intel

Any other examples of this? I remember the secret sauce being a process advantage over the competition, exactly the opposite of making old tech outperform the state of the art.


Intel's surprisingly fast 14nm processors come to mind. Born of necessity, as they couldn't get their 10nm and later 7nm processes working for years. Despite that, Intel managed to keep up in single-core performance with newer 7nm AMD chips, although at a much higher power draw.


That's because CPU performance cares less about transistor density and more about transistor performance, and 14nm drive strength was excellent


For about half of the Intel 14nm era, there was no competition in any CPU market segment. Intel was able to keep refining their 14nm process and getting better at branch prediction; moving things into hardware implementations is what kept performance improving.

This isn't the same as getting more out of the same over and over again.


Or today with Alder Lake and Raptor Lake (Refresh), where their CPUs made on Intel 7 (10nm) are on par with, if not slightly better than, AMD's offerings made on TSMC 5nm.


Back in the day, Intel was great for overclocking because all of their chips could run at significantly higher speeds and voltages than what was on the tin. This was because they basically just targeted the higher specs and sold the underperforming silicon as lower-tier products.

Don't know if this counts, but feels directionally similar.


Interesting.

Would you say this means Intel is "back," or just not completely dead?


No, this means Intel has woken up and is trying. There's no guarantee of anything. I'm more of an AMD person, but I want to see fierce competition, not monopoly, even if it's "my team's monopoly".


Well, the only reason AMD is doing well at CPUs is because Intel is sleeping. Otherwise it would be Nvidia vs AMD (with fewer steroids, though).


EPYC is actually pretty good. It’s true that Intel was sleeping, but AMD’s new architecture is a beast. It has better memory support, more PCIe lanes, and better overall system latency and throughput.

Intel’s TDP problems and AVX clock issues leave a bitter taste in the mouth.


Oh dear, Q6600 was so bad, I regret ever owning it


Q6600 was quite good but E8400 was the best.


Q6600 is the spiritual successor to the ABIT BP6 Dual Celeron option: https://en.wikipedia.org/wiki/ABIT_BP6


ABIT was a legend in motherboards. I used their AN-7 Ultra and AN-8 Ultra. No newer board has given me the flexibility and capabilities of those series.

My latest ASUS was good enough, but I haven't built (and probably won't build) any newer systems, so ABIT keeps the crown.


The ABit BP6 bought me so much "cred" at LAN Parties back in the day - the only dual socket motherboard in the building, and paired with two Creative Voodoo 2 GPUs in SLI mode, that thing was a beast (for the late nineties).

I seem to recall that only Quake 2 or 3 was capable of actually using that second processor during a game, but that wasn't the point ;)


E8400 was actually good, yes


What? It was outstanding for the time, great price performance, and very tunable for clock / voltage IIRC.


Overclocked, I don't know, but out-of-the-box single-core performance completely sucked. And in 2007 not enough applications were multithreaded to make up for it with the extra cores.

It was fun to play with, but you'd also expect a higher-end desktop to, e.g., handle x264 video, which was not the case (search for q6600 on the VideoLAN forum). And, depressingly, many cheaper CPUs of the time did it easily.


I owned one; it was a performant little chip. I developed my first multi-core stuff with it.

I loved it, to be honest.


65nm tolerated a lot of voltage. Fun thing to overclock.


Really? I never owned one, but even I remember the famous SLACR. I thought they were the hot item back then.


It was "hot" but using one as a main desktop in 2007 was depressing due to abysmal single-core performance.


I was just about to comment on this. Apparently all production capacity for HBM is tapped out until early 2026.



