Hacker News
I.B.M. Reports Nanotube Chip Breakthrough (nytimes.com)
204 points by sew on Oct 28, 2012 | hide | past | favorite | 30 comments



Articles like this used to excite me. But now, they are just sort of depressing.

Would it be good to have denser transistor counts? Sure. There are lots of benefits for hardware. Denser transistors can mean reduced power consumption from smaller chips. Manufacturing costs go down because you get more chips from the same materials. Device production costs go down because functions that used to take two chips can be combined into one. Overall devices can be smaller.

But from a software perspective, it's a limited blessing. To be sure, there are benefits for cloud computing and other "embarrassingly parallel" applications. These chips might be great in data centers, which are power-sensitive. And being able to do more things at once can be good, even if it's at the same speed - Google could maybe crawl the web more frequently, and video games might get improved graphics.

But the thing is, we already have nearly unlimited computing power at our disposal. On a whim, I could spin up a hundred thousand computers on Amazon Web Services to do some desired computation (which would cost a mere $700/hr at current spot prices). My laptop has a 192-core GPU which, by itself, is more powerful than most supercomputers that existed up to the mid-1990s.
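For the curious, the arithmetic behind that figure (the per-instance spot price is my rough assumption, not a quoted number):

    instances = 100000
    spot_price = 0.007              # USD per instance-hour -- assumed
    print(instances * spot_price)   # => 700.0 USD/hour, the figure above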

What we really need are software advances to exploit this hardware. For decades, there has been essentially no progress whatsoever in software that can take advantage of parallel computing. The techniques we have are either brittle (threads) or forfeit most of the hardware's power (multiple processes combined with relatively slow inter-process communication). Software is in such a backward state that people are amazed when the home-screen icons on an iPhone scroll smoothly from page to page.
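To make the trade-off concrete, here's a minimal Python sketch (my own illustration, nothing standard): threads share memory but live or die by manual locking, while processes avoid shared state and instead pay to serialize every message:

    import threading
    from multiprocessing import Pool

    counter = 0
    lock = threading.Lock()

    def bump(n):
        # Threads share memory; correctness hinges on the manual lock.
        global counter
        for _ in range(n):
            with lock:  # drop this lock and the count silently corrupts
                counter += 1

    def square(x):
        # Processes don't share memory; every argument and result is
        # pickled and shipped over a pipe (the "slow IPC" above).
        return x * x

    if __name__ == "__main__":
        threads = [threading.Thread(target=bump, args=(100000,)) for _ in range(4)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        print(counter)        # 400000, but only thanks to the lock

        with Pool(4) as p:    # four worker processes
            print(sum(p.map(square, range(1000))))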


“we already have nearly unlimited computing power at our disposal”

I have to disagree. To tackle hard problems I would be glad to have a billion times more computational power. We are doing semantic processing of texts. For one customer we have a million texts, and processing one takes about one second.

To have the computer automatically generate rules (Genetic Programming), we would like this loop: the computer generates a rule, processes the million texts, and checks whether that rule improved the overall understanding. Currently one million processing seconds are required per rule. If we could run this on one million cores, we could evaluate one rule per second. The thing is: an evolutionary system will generate millions and millions of rules. So even if we had a million times more computational power, a run would still take several months, and renting such capacity from Amazon & Co. would easily cost $5-10 million. With a billion times more computational power, however, this would fall into the range of perhaps a few thousand dollars.
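Spelled out as a sketch (assuming 5 million rules per run, and borrowing the $0.007/core-hour spot rate from the parent's AWS figure):

    texts = 10**6             # one million texts
    sec_per_text = 1.0        # about one second each
    rules = 5 * 10**6         # "millions and millions" -- assumed 5M here

    core_seconds = texts * sec_per_text * rules   # 5e12 seconds of work
    days = core_seconds / 10**6 / 86400           # on a million cores
    print(days)                                   # ~58 days of wall time

    spot = 0.007                                  # USD per core-hour, assumed
    print(core_seconds / 3600 * spot)             # ~$9.7M to rent the capacity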

Also, it would be useful if mobile phones featured that much computational power, so that strong AI software with above-human intelligence could run on them. Such software could technically run on a C64 if enough memory were available, but answers would take quite a long time.

So, I really hope that we will see maaany trillion times more computational power in this century.


I used to think this way, but it's better to look at it from another perspective. IBM (or their research group) specializes in hardware, and they are making good progress. They can't do both at the same time.

Someone else has to do the software hacking. But if no one is doing it (or at least not to your expectations), that doesn't mean IBM should stop or change course.

Any progress we make is a win for us and for all of humanity.


What you say may be true for consumer or even "web scale" technology, but not for high performance computing (of the science type).

In HPC, people can always use more computing power to improve their simulations or numerical solvers, and we are nowhere close to hitting a limit in usefulness (think e.g. faster/more detailed simulation of large-scale structures, say airflow around aircraft, n-body systems, simulation of bigger quantum systems in physics ...). While massive parallelization is of course used wherever possible, it's sometimes difficult to come up with algorithms that scale, and sometimes you simply have too much dependency between your data to parallelize beyond a certain point. This is the place where this stuff could be used first, before it's ready to "trickle down" to consumer electronics.


Ever larger problems can be solved in the same time period. Processing is getting wider while staying at a similar speed... except IO, which has gone from 40MB/s to 1000MB/s on fairly cheap hardware, and the network, which has sped up in a major way, with terabits/s now possible.

Branch prediction allows us to follow multiple possible serial paths, and the wider our processors, the more branches we can follow. Software advances have made it trivial for even beginner programmers to process many events in parallel. Automatic vectorizing compilers have been available for over a decade now. Transactional memory is becoming mainstream, with compiler support and even hardware support. Massive libraries of highly optimized maths and scientific routines are being developed by thousands of people at once over the internet and made available for free.

I don't think it's hard to use up all that power. Even on the cutting edge, gaining a 10x or 100x speedup is not unheard of. Problems are becoming more and more about the functionality required, and the vision, rather than being limited by the processing speed and power available.
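As one example of those free, highly optimized libraries (my illustration): a single NumPy call buys you decades of low-level tuning:

    import numpy as np

    # One call into a free, heavily optimized library (BLAS underneath)
    # replaces a hand-written triple loop and picks up SIMD for free.
    a = np.random.rand(2000, 2000)
    b = np.random.rand(2000, 2000)
    c = a.dot(b)           # vectorized matrix multiply
    print(c.shape)         # (2000, 2000)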

tl;dr: we are OK, and we're going to be OK.


The density numbers in the article are only interesting because they mean that nanotube transistors are practical. A 90nm process node is what we were using for Athlon 64s and Pentium 4s; it won't let you put more cores on a chip than you have now. Maybe someday nanotube processes will shrink that far, but that wasn't what the article was about.

The amazing thing about nanotubes is that they have an electron mobility of 100,000 cm2/(V·s) at room temperature, the highest of anything you could think about making a transistor out of. By contrast, silicon has an electron mobility of only 1,400 cm2/(V·s). Since the drive current coming out of a transistor is roughly proportional to electron mobility (or equally high hole mobility), you'd expect a nanotube-based transistor to be able to switch about 70 times as fast as a silicon-based one. Or a quarter of a terahertz, in other words. Now, speed-of-light delays at 90nm will probably be pretty significant at that point, but we should still expect moving to nanotubes to be a huge win for single-threaded performance at the expense of the number of threads available.
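The arithmetic, for anyone who wants to check it (the ~3.5 GHz silicon baseline is an illustrative assumption of mine, and the linear scaling is only the rough argument above):

    mu_cnt = 100000.0        # nanotube electron mobility, cm2/(V*s)
    mu_si = 1400.0           # silicon electron mobility, cm2/(V*s)

    speedup = mu_cnt / mu_si
    print(round(speedup))    # ~71x

    si_clock_ghz = 3.5       # assumed silicon-class clock, for scale
    print(speedup * si_clock_ghz)   # ~250 GHz: a quarter terahertz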


> ... a nanotube-based transistor would be able to switch about 70 times as fast as a silicon-based one. Or a quarter of a terahertz, in other words.

A transistor switching at 250GHz would not be 70 times faster than a silicon transistor. In fact, it would only be a little over 2x faster. Our best production quality silicon transistors are well over 100GHz now.

The clock frequency of a CPU is not the speed at which a single transistor in that CPU switches. It's the speed at which the longest path of transistors inside a single pipeline stage switches. In contemporary CPUs, these paths are ~20 FO4¹ long.

(¹ since the speed of a transistor depends on the load put on it, you need to normalize for that. http://en.wikipedia.org/wiki/FO4 )
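Rough numbers, to make the point concrete (the 15 ps FO4 delay is an assumed ballpark for a 90nm-class process, not a measured figure):

    fo4_ps = 15.0                 # assumed FO4 delay of the process, ps
    depth = 20                    # FO4 per pipeline stage, from above

    period_ps = depth * fo4_ps    # critical-path delay per cycle
    print(1000.0 / period_ps)     # ~3.3 GHz: far below any single
                                  # transistor's switching speed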


You're entirely right that I misspoke there. I meant that the clock speed of similarly architected devices would go up to a quarter terahertz, but I managed to imply something entirely different.


No, it really wouldn't. Clock speed is as much restricted by speed-of-light delays as it is by transistor switching speed.


Ok, I'm going to disagree with you here. Speed-of-light delays cause latency when you're talking about large structures like caches, but they're not even the biggest factor in L3 latency with current processes. If we were clocking things as fast as nanotubes allow, then we probably would find speed-of-light delays dominating latencies, but increased latencies only eat a proportion of the gains from higher clock speed, and that's what we have branch predictors, prefetchers, and deep OoO engines for.

Unless you were talking about clock skew between the exit points of your h-tree?


IBM is fantastic at using their research for PR, but it usually takes a very long time for these breakthrough announcements to show up in products. For a completely new technology like this, the challenges on the road to competing with established tech are formidable.

An announcement like this is great, but it is no reason to get overly excited just yet. Many other technologies have been proposed and have either disappeared or found employment in niches (GaAs, for instance); for now, silicon still holds a formidable edge when it comes to the most important measure of all: economy.


This particular breakthrough probably wouldn't start appearing in shipping chips for at least 8 or 10 years, though. It's fine to get excited! Just don't think we're getting nanotubes in our phones next year. Changing materials is a huuuge step.


Right. I expect people to switch over to other materials only after improvements using silicon stop.


Even then it will have to make economic sense to switch. After all, the equation is not simply x > y, but whether $n of tech x buys more computing power than $n of tech y.



Materials scientist here. I've worked on carbon nanotubes (CNTs if you want to sound cool) for a few years. Looking at them under the electron microscope, I've always wondered how in the hell we'd ever line them up without resorting to nano-tweezers (which of course is impractical if you want to scale to a few billion switches). It looks like IBM finally cracked a decent technique (though I suppose the device yield isn't that great), and hopefully it will be a path toward production.

What would be really cool is if we could find economically viable applications for carbon nanotubes outside of semiconductors. The biggest problem right now is their cost (~$400/gram for good purity [for reference: 20X the price of gold]) and the fact that platinum is used as a destructive catalyst to make them.


Let me be nitpicky: the price of gold has gone up recently; it's now over $50/g.

http://www.wolframalpha.com/input/?i=%28400+%24%2Fg%29+%2F%2...


Ah, good catch! I remember a while back comparing CNT prices per gram versus gold, and it was something like $20 a gram for gold. Must have been at a low then. Thanks for catching it.


Well, right now the yield is only 70% at the transistor level, which works out to effectively 0% at the chip level. Improving that is probably the biggest research breakthrough still remaining.
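To see why (toy numbers; real chips have millions to billions of transistors, which only makes it worse):

    transistor_yield = 0.70
    transistors = 100                 # tiny by any real chip's standards

    print(transistor_yield ** transistors)   # ~3e-16: effectively zero
                                              # working chips per wafer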


TL;DR: IBM found a way to place nanotubes on silicon chips with some precision. Not yet perfect, but it's the best we've got so far.


Anyone know what the theoretical frequency of a CPU made of nanotubes would be?


The article claims that the transistors were 5x faster than existing transistors. And that is not the upper limit of what's possible.


If this ever turns into a product, I hope they choose a more efficient design than x64. I'm assuming that ARM would be a good choice.


They have the POWER series of chips, so it would be really weird if they were to choose x86 (even x86_64).


It looks like Warren Buffett's first major investment in a technology company is shaping up nicely.


They have long since shrunk the switches to less than a wavelength of light

Snort.


What? Transistor sizes hit UV wavelengths around 1995, during the Pentium era. That's a reasonable statement to be making to provide context.


And there are constructive ways to provide that context, like how you did. "Snort" does not qualify.


I think "snort" was a joke so that you wouldn't get angry :-)


So "visible light" would have been more accurate, but probably assumed by most readers.



