Samsung Starts Mass Producing Industry’s First 3D Vertical NAND Flash (samsung.com)
57 points by jsnk on Aug 6, 2013 | 22 comments



Heat dissipation and manufacturing concerns aside, a 3D stacked architecture seems like the most obvious way to increase clock speeds. Light travels only about 30cm per nanosecond: components clocked at 4GHz can't be more than 7.5cm apart from each other (I think -- and probably have to be quite a bit closer for practical reasons). Packing everything vertically makes it easier to keep these propagation delays low and drive up clock speeds.
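The back-of-the-envelope limit in that comment can be sketched as follows. This is a rough check only: it assumes signals travel at the speed of light in vacuum, whereas real on-chip signals are considerably slower, so practical limits are tighter.

```python
# Rough propagation-delay limit: how far apart can two components be
# if a signal must cross between them within one clock period?
C = 3e8  # speed of light in vacuum, m/s (on-chip signals are slower)

def max_distance_cm(clock_hz):
    """Distance light covers in one clock period, in centimetres."""
    period_s = 1.0 / clock_hz
    return C * period_s * 100  # metres -> centimetres

print(max_distance_cm(4e9))  # 4 GHz clock -> 7.5 cm
```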

We seem to be in a race to the smallest transistor physically possible, to increase density on 2D chips. Tiny transistors generally leak more current due to quantum tunnelling, wasting power as heat, right? So what if the industry moved toward much larger, much more efficient transistors, layered hundreds or thousands of layers thick in a large 3D architecture? I think the only reason this isn't happening is that it's not really possible to manufacture that sort of chip without it taking a thousand times as long and being a thousand times as likely to fail QA -- but maybe this is a step in that direction.


Well, there is quantum tunnelling, but there is also Dennard scaling, so it cuts both ways. Transistors need to be small but also non-leaky, which is why everyone is toying around with exotic materials and structures.


Not being a chip engineer, I wonder why 3D chip layer stacking isn't in wide use. For high-powered chips it is understandable because of heat dissipation requirements, whereas for lower-powered chips it seems that additional layers of heat-transfer material between the working layers would do the trick. For CPUs, 3D stacking would decrease the distances between various parts, for example, and would allow something like a cluster (of low-powered chips) in a cube.


[Disclaimer, the last time I stepped into a chip fab it was 1998]

There were (and may still be) several reasons that 3D was not a regular technology in use. The first was that connecting transistors "across" the chip was done using a metal layer (aluminium at first, later copper), and laying down a metal layer forced a distance between it and the next silicon layer that you would end up bridging with poly-silicon (N/P-doped silicon, so it conducts electricity, but not as well as metal does). So all of the cross-layer connects that weren't metal were slow.

Heat dissipation was another issue for transistors between a 'top' and 'bottom' layer.

Varying feature height was another: if transistors were 3um thick and resistors were 1um thick, then when you put a transistor above a resistor, its drain and source layers would be 1um up from the plane in which the other transistors sit.

Process contamination: building a transistor on top of a surface of SiO2 was one thing, but building it on top of another transistor meant that your ion-implanting step needed really accurate control of depth.

The MEMS revolution gave people a lot more tools that were accurate in depth (vs X & Y position) and regular feature patterns (as opposed to general purpose logic) lend themselves to better 3D planning.

One of Intel's big breakthroughs in 22nm chips was its tri-gate vertical transistor design [1], which allowed the gate area of the transistor to be vertical and thus have enough silicon to hold a reasonable number of electrons.

[1] http://newsroom.intel.com/docs/DOC-2032


> Not being a chip engineer, I wonder why 3D chip layer stacking isn't in wide use.

Short version: The process to make a 2D layer of transistors requires something like 20-30 masking steps, and many of those steps require significant heating. Adding a second plane of 2D transistors on top subjects all of the transistors underneath to more heating, so it is hard to build that structure on top without making the transistors underneath nonfunctional.


For low power chips, what advantages would there be for stacking? My thought is that low power chips probably aren't going for performance (since they are low power), so stacking for performance wouldn't really be buying them anything necessary. Maybe you could just get smaller form-factor low power chips.


Half the motivation for 3D, as I see it, is the model of the brain -- not the actual geometry of the brain, which is roughly 2D+ (I don't remember the brain's fractal dimension value), but a logical model of it, where computational power comes from having a number of connections an order of magnitude larger than the number of connected neurons. Our current electronics has it backwards: the number of connections is orders of magnitude less than the number of connected elements. 3D seems like a way to build computational devices with an increased number of connections (though that requires some work on the software side as well to benefit from it).


Stacking is pretty expensive (especially through-silicon vias, TSVs), so it's kind of a last resort for when cost is no object.


I guess it's just the complicated production method? How does lithography work with this vertical stacking?


Pretty amazing hack to get around the size limitations of NAND flash feature size.


The article mentions 10nm process size, whereas I thought the current state-of-the-art was 20nm. Is flash memory an exception? And if so, does that also apply to DRAM? Or am I just confused?


Flash and DRAM have highly regular structures, which means they tend to be ahead of everything else in terms of feature size (and also, feature size isn't an exact measurement anyway).


The press release said 10nm _class_, with a big asterisk next to it. I would be willing to bet that means something like 19nm.


>128 gigabit (Gb) [per chip]

This is the first time I think I've seen chip size measured in gigabits rather than gigabytes, of which there'd be 16 -- which is still impressive, but on Amazon I can buy a 32GB SD card for $20.
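The conversion the parent is doing, spelled out (trivially: 8 bits per byte):

```python
# 128 gigabits per die, 8 bits per byte -> gigabytes per die.
def gbit_to_gbyte(gbits):
    return gbits / 8

print(gbit_to_gbyte(128))  # -> 16.0
```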


Flash chips are typically spec'd in terms of bits in the industry.

Things like SD cards are really small consumer electronics devices and are spec'd in terms of bytes.


This is incredible and may have far reaching implications for computing.

Hopefully, the cost ratios haven't disproportionately increased as well.


I wonder how long it'll take for Apple to violate Samsung's patents, when product bans will be proposed, and when those bans get overturned.


That's not funny. This isn't the market where Apple and Samsung compete, it's the market where Apple is a huge Samsung customer, where a few years ago Apple was responsible for an outright majority of Samsung's sales.


Samsung is a gigantic company, a South Korean Siemens. They are active in any number of markets, and Apple certainly didn't provide for a majority of their sales.


Read my comment again. I wasn't making any statements about the Samsung conglomerate as a whole. I was only discussing the flash memory market, and in that market, Apple was indeed responsible at one time for purchasing more than 50% of Samsung's production of flash memory (this was during the heyday of the iPod Nano and prior to mainstream SATA SSDs).


Rumour has it Apple have been trying to move away from Samsung for components, though.


I doubt the patents for this are going to end up in a standard that forces Samsung to license them under FRAND terms.



