Intel’s e-DRAM Shows Up in the Wild (chipworksrealchips.blogspot.com)
97 points by jacquesm on Dec 24, 2014 | 17 comments



Note: this is from February. Those chips are in lots of places right now.


This is Crystalwell, right? Which is shipping in some laptops to power the embedded graphics.


What does this look like from software? Is there just 128 MB of RAM at the start of memory that's unusually fast, or something else? Do kernels know about it? What do they do with it?
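Not an authoritative answer, but one way to poke at this on Linux: the kernel exposes the cache levels it knows about through sysfs. Whether the 128 MB eDRAM shows up there as an extra level depends on how the platform enumerates it, since it's managed by the hardware rather than by the OS. A rough Python sketch, assuming the usual sysfs layout:

    import glob, os

    def read(cache_dir, name):
        # Read a single sysfs attribute, or "?" if the platform doesn't expose it.
        path = os.path.join(cache_dir, name)
        return open(path).read().strip() if os.path.exists(path) else "?"

    # List the cache levels reported for CPU 0.
    for cache in sorted(glob.glob("/sys/devices/system/cpu/cpu0/cache/index*")):
        print("L%s %-12s %s" % (read(cache, "level"), read(cache, "type"), read(cache, "size")))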


I wonder if in the future we'll have motherboards with two CPU-like sockets, except the second holds an eDRAM-only chip. I assume the performance is better than regular DRAM.


The point of eDRAM is to gain performance by moving it onto the same module or die, so I'm not sure it buys you anything to put it in a different socket; you still have to go over the bus to get to it. eDRAM's structure doesn't usually make it perform better than any other DRAM; it just makes it possible to put it on die or on package.


I am not sure why you are being downvoted. This is exactly right. Stacked memory is the future, enabling extremely parallel, ultra-low-latency access to memory.

Putting it over a bus to somewhere else on the motherboard defeats the purpose. Even in graphics cards, where manufacturers have complete control and can run a ton of traces to the memory, they are moving towards stacked memory. I believe Pascal, Nvidia's next GPU, will feature it unless the roadmap recently changed.

NUMA is about to get a lot more NUMA.
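As a concrete illustration of what "more NUMA" would look like: the Linux kernel already exposes an inter-node distance matrix in sysfs, and an on-package memory pool would just be another node with a smaller distance. A minimal sketch, assuming Linux with the usual sysfs layout:

    import glob, os

    # Print each NUMA node's relative distance to every other node, as reported
    # by the kernel. More heterogeneous memory (on-package eDRAM/HBM vs.
    # off-package DIMMs) would show up as more varied numbers in this matrix.
    for node in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
        with open(os.path.join(node, "distance")) as f:
            print(os.path.basename(node), f.read().strip())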


e = embedded.

My understanding is that performance is better mostly because the memory is so close to the CPU, just like traditional CPU cache levels.


Right. Individual DRAM cells are much slower than the SRAM cells used in the rest of a chip's cache hierarchy, but they're also much denser, which is why they're used for main memory. IBM found that for large last-level caches, the shorter wires enabled by eDRAM's size savings were enough to offset the slower cells, leading to a faster cache overall.
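A crude back-of-envelope of that trade-off; every number below is an illustrative assumption, not a measurement, but it shows how a denser (physically smaller) array with slower cells can still win on total access time:

    # Toy model: access time = cell access time + wire delay, with wire delay
    # scaling with the linear dimension of the cache array. All figures are
    # made-up illustrative assumptions.
    def access_ns(cell_ns, area_mm2, wire_ns_per_mm=1.0):
        return cell_ns + wire_ns_per_mm * area_mm2 ** 0.5

    # Assume eDRAM is ~3-4x denser than SRAM for the same capacity.
    print("SRAM LLC : %.1f ns" % access_ns(cell_ns=1.0, area_mm2=90.0))
    print("eDRAM LLC: %.1f ns" % access_ns(cell_ns=3.0, area_mm2=25.0))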


The future is stacked memory with DRAM stacked on top of the CPU die. No sockets to be seen anywhere.


Interesting heat-dissipation challenges with vertically stacked CPUs. I wonder if there would be advantages to stacking alternating layers of CPU core, DRAM, CPU, DRAM...

For cooling purposes, is it feasible to integrate a solid-state Peltier cooler onto the die? The first thing that comes to mind is the energy cost, but I would think it would be less than sending current through a fan motor.
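Rough numbers on the energy question, with purely illustrative figures (Peltier efficiency depends heavily on the temperature difference asked of it):

    # A thermoelectric (Peltier) cooler has to pump the CPU's heat and then
    # dissipate its own input power on top of that. Its coefficient of
    # performance (COP, heat moved per watt of input) is the key parameter.
    # All figures below are assumptions for illustration.
    cpu_heat_w = 50.0    # heat to move off the die (assumed)
    peltier_cop = 0.5    # assumed COP for a modest temperature delta
    fan_power_w = 2.0    # a small CPU fan (assumed)

    peltier_in_w = cpu_heat_w / peltier_cop
    print("Peltier input: %.0f W (total heat to reject: %.0f W)"
          % (peltier_in_w, peltier_in_w + cpu_heat_w))
    print("Fan input:     %.0f W" % fan_power_w)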


The future is a uniform SoC where some features are disabled depending on price.


By disabled, do you mean the same processor design being binned into chips of varying quality? In that case, it seems much more cost-effective, and environmentally friendly, to sell chips with bad sections disabled at a lower price rather than to chuck them out.


More like disabled by using a laser to score a large 'X' across the parts of the die containing the features to be disabled.


Yes, but only for those chips that would be binned in the high end and go unsold.

And you can just design in a couple of on-chip fuses that can be blown in early-stage testing. No need for lasers.


I've been playing with the Renesas RZ/A1 line, which comes with three sizes of eDRAM on board: 3, 5, and 10 MiB. The price differences between them are so large it sure seems like they're binning the parts.


You obviously are not familiar with the semiconductor industry. Binning has nothing to do with price differences.


20-30 years ago, x87 math coprocessors were on a different chip. L2 SRAM caches were also on a different chip.

7 years ago memory controllers were on a separate chip.

Maybe 20 years from now, a CPU will need just power and optical links: some for inter-CPU communication, others going to USB 5.0 peripherals.



