Hacker News

Hm? L2 is per core. Always has been in a three+ layer architecture.

Zen has 512K L2 per core and 8M L3 per CCX (two CCX per die). L3 is a victim cache IIRC, unlike previous generations where the L3 was inclusive.

Intel usually went with a similar scheme in the last few years, where the L3 is partitioned into slices assigned to cores; accessing the local slice is faster than a non-local slice. Skylake-SP deviates from this (significantly), for better ... or worse.
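The sliced arrangement can be sketched as a toy model: each core owns one L3 slice, an address's home slice is picked by a hash, and hitting a non-local slice costs extra hops on the interconnect. Everything here (slice count, latencies, the modulo hash) is made up for illustration; the real slice hash on Intel parts is undocumented.

```python
# Toy model of a sliced LLC: an address's home slice is chosen by a
# hash, and reaching a non-local slice costs extra ring hops.
# All numbers are illustrative, not measured.
NUM_SLICES = 8
LOCAL_LAT = 34   # cycles to the local slice (made up)
HOP_LAT = 2      # extra cycles per ring hop (made up)

def home_slice(addr: int) -> int:
    # Real hardware hashes many address bits; plain modulo over
    # the cache-line index stands in for that here.
    return (addr >> 6) % NUM_SLICES

def l3_latency(core: int, addr: int) -> int:
    s = home_slice(addr)
    # Shortest distance around the ring, in either direction.
    hops = min((s - core) % NUM_SLICES, (core - s) % NUM_SLICES)
    return LOCAL_LAT + HOP_LAT * hops

print(l3_latency(0, 0x0000))  # home slice 0: local, 34 cycles
print(l3_latency(0, 0x00C0))  # home slice 3: 3 hops, 40 cycles
```

The point of the model is just the asymmetry: same cache level, different latency depending on which slice the hash lands you in.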




L3 is a victim cache? Intel's L4 is the victim cache for their inclusive L3. How does that work? And their L2 is basically a buffer between their L1 and L3.

That doesn't make sense to me. I can't find any good info on this.
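The distinction being asked about can be sketched in a few lines of Python. In an inclusive L3, a line is copied into L3 when it is filled into L2; in a victim (exclusive) L3, the line only lands in L3 when L2 evicts it. The tiny LRU cache below is a toy with made-up sizes, purely to show the victim fill path:

```python
from collections import OrderedDict

class Cache:
    """Tiny fully-associative LRU cache; capacity counted in lines."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.lines = OrderedDict()  # oldest first

    def insert(self, addr: int):
        """Insert a line, returning the evicted address (or None)."""
        if addr in self.lines:
            self.lines.move_to_end(addr)
            return None
        evicted = None
        if len(self.lines) >= self.capacity:
            evicted, _ = self.lines.popitem(last=False)
        self.lines[addr] = True
        return evicted

l2 = Cache(2)        # toy sizes, not Zen's real geometry
l3_victim = Cache(4)

# Victim policy: a line reaches L3 only when L2 evicts it.
for addr in [0xA, 0xB, 0xC]:
    victim = l2.insert(addr)
    if victim is not None:
        l3_victim.insert(victim)

print(sorted(l2.lines))         # [11, 12]: 0xB and 0xC still in L2
print(sorted(l3_victim.lines))  # [10]: only the evicted 0xA is in L3
```

With an inclusive policy, 0xB and 0xC would also have copies in L3, duplicating L2's contents; the victim scheme avoids that duplication, which is part of why it pairs well with Zen's comparatively large per-core L2.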


Check AnandTech's Zen review and their recent Skylake-X review for the latest details.


Sorry, I meant L3 without leaving the CCX.




