How will memristors change everything? (highscalability.com)
158 points by zackham on May 5, 2010 | 55 comments



My main concern with the hype around memristors is how quickly the technology will scale. Yes, they have great promise. But if technical problems make them 4 times bigger and 8 times slower than existing technology, nobody will adopt them. And without volume adoption, they won't have enough investment dollars to be able to exceed Moore's Law. Which means that we won't care about them until well after Moore's Law runs out of steam.

This is not a theoretical failure mode. It has happened before in the computer industry. Multiple times.

For example, it is why Transmeta died. Their goal was a chip so simple and fast that it could emulate the x86 faster than the x86 itself could run. They failed. However, one of the design goals was less heat (because heat was a major scaling barrier), which translated into an emulated x86 chip with much lower power consumption. Given that they had a simpler architecture and had already solved the heat problems that were killing Intel and AMD, they hoped to iterate faster and eventually win. But the investment asymmetry was so huge that they couldn't execute on the plan. And Intel was able to reduce their power enough to undercut Transmeta in the niche they had found, and Transmeta couldn't survive. (Intel was aggressive because they understood the strategy and the potential. Transmeta was always going to be something that either wiped out the existing industry or died with a whimper, and Intel knew which outcome they'd prefer.)


HP has already fabricated chips with memristors that are smaller, faster, and lower power than Flash. The problem I see is write endurance. They match Flash in this area as well, but that's not good enough. If memristors are to be applied to anything but storage (e.g. DRAM, SRAM, logic, neural nets) they will need to be many orders of magnitude more durable.


If that were the case, then HP should already be selling memristor memories to compete with flash memories. Flash is already a huge market, HP does not have to wait until memristors can compete with RAM.


Give them a little time; the paper was just published last month! Commercial availability is expected in 2013 according to http://www.technologyreview.com/computing/25018/


These things take time. There are years and years between a single proof-of-concept working circuit and a packaged product at Newegg, if it makes it there at all.


I think you mistakenly believe that memristors must replace transistors completely for us to harness their benefits. Memristors will start off like any other technology: in the special application areas where their cost/benefit makes the most sense.


Do you think their natural fit with FPGAs might be enough of a thin wedge to get them used at least somewhere? Or is the investment involved to get them performing better, even in a field where they have a 10x or more advantage, just too large?


One of the reasons that people are so excited about memristors is that they're supposed to be able to scale better than Flash. When you shrink Flash down too far, you run into problems like having the electrons tunnel out through defects in the insulator. The limit is supposed to be about the 20 nm node. And apparently memristors are supposed to be practical to manufacture quite a bit smaller than that. And if we get the manufacturing techniques right, we'll be able to layer memristors in a fairly straightforward way.


This is an excellent article. I agree that memristors will change everything.

I'm a bit perplexed about the timeframe, though. I'd like to think 5 years, but my gut says it's more like 12-15 years before it's all different. And it will be very different.


I agree about the timeframe. I think the issue is not when the technology will be ready, but when the money will be ready to embrace the disruption. So the tech will be ready in 5 years, but the products and paradigms won't make their way into the wild (outside of academia) until 6 or 7 years after that.


Yes, it's not a tech issue; the issue goes much deeper than that. It will mean a major change in architectures and languages, and that's going to require some killer applications to gain adoption. I imagine the tech will show up and languish -- while the rest of us try to figure out how to get our heads around it.

Even after adoption, there will be major friction from the corporate and governmental sectors -- lots of money has been invested in doing things the old way. Early adopter consumers in 12-15 years. Rest of the world? Perhaps a good bit longer.


There's a presentation by R. Stanley Williams linked to in the article, and it is well worth watching.

http://www.youtube.com/watch?v=bKGhvKyjgLY#

It has some tidbits I didn't expect. Teaser: "one of the guys in my group has...built a compiler to compile C code using implication logic rather than NAND logic and, interestingly enough, when we play with that the compiled code we get is always more condensed...by about a factor of 3"
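
For anyone who hasn't seen implication logic: material implication (p IMP q = (NOT p) OR q) plus a FALSE constant is functionally complete, which is why a compiler can target it instead of NAND. A toy truth-table sketch in Python (my own illustration of the idea; the real thing operates on memristor states in place, not on Python ints):

  # Material implication as a logic primitive: p IMP q == (NOT p) OR q.
  # Memristor "stateful" logic overwrites the target bit: q <- p IMP q.
  def imp(p, q):
      return (1 - p) | q

  # NAND from two IMP steps plus one working bit cleared to FALSE.
  def nand(a, b):
      s = 0           # clear the working bit
      s = imp(b, s)   # s = NOT b
      s = imp(a, s)   # s = (NOT a) OR (NOT b) = NAND(a, b)
      return s

  # Sanity check against the usual NAND truth table.
  for a in (0, 1):
      for b in (0, 1):
          assert nand(a, b) == 1 - (a & b)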


So the basic circuit components are R, L, C, and now M. I wonder, how can one build a transistor out of R, L, C?


You can't just drop a memristor chip or RAM module into an existing system and have it work. It will take a system redesign.

If these truly are a viable storage system, building an interface that's reasonably adaptable to the current computing paradigm shouldn't be too difficult. Traditional HDDs and SSDs both play nice with SATA, for instance, despite using wildly disparate methods of storing bits.


Better yet, configure a small portion of the memristors as IMP CPU compute clusters to handle the work needed to adapt it.


With memristors you can decide if you want some block to be memory, a switching network, or logic. Williams claims that dynamically changing memristors between memory and logic operations constitutes a new computing paradigm enabling calculations to be performed in the same chips where data is stored, rather than in a specialized central processing unit. Quite a different picture than the Tower of Babel memory hierarchy that exists today.

That part is mind-blowing.

And I'm wondering, if this all works out, will the whole multi-core thing and all the trouble that comes with it (from a software standpoint) be pushed back for another decade?

And what implications would that have for programming languages? Seems like it would mean JavaScript wouldn't be so much worse than Erlang after all (cf. http://news.ycombinator.com/item?id=1304599 ).

(Yeah, I know, it's a little bit of a stretch.)


Actually, it's the other way around; it sounds like they are talking about having lots of processors literally physically spread amongst the data. That's sort of more Erlang-y, although it isn't actually Erlang-y either. Erlang pretty deeply assumes homogeneity of the computational units it is accessing. (It does allow you to start a process on a given node, and that's a start, but the VM's strategies for memory management and copying on a given node would have to change quite substantially to cope better. Or map one node to each processor-in-the-memory in which case I'd expect too much copying between nodes to occur without a lot of care taken by the programmer, sort of fighting the language the whole time. Erlang lets you control where processes run, but not directly control where data lives beyond that.) Basically this future seems to be NUMA writ large, and while an "Erlang-inspired" language may be the way forward, Erlang itself would not make this transition without substantial changes. (Might be feasible, though.)

But JavaScript as it stands today is even worse off, as are most languages. We don't really have much that could cope with this right now in a clean way. (Anyone know a language that really handles NUMA well? And I do mean a language, not a library for C or something. Something slick, not something that merely "permits" working with it.)


> Anyone know a language that really handles NUMA well?

Depending on what you mean by well, any language that offers fork() as a primitive does work. The trick is that this allows you to create 2 copies of a process that share little enough that a scheduler can safely move them from one machine to another. By contrast with threading you have the problem that the scheduler cannot know when it is safe to schedule two threads on distant CPUs.

Of course this only lets you scale embarrassingly parallel problems.
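
A minimal sketch of that fork() pattern in Python (os.fork is POSIX-only; the slicing here is just a toy example of an embarrassingly parallel split):

  import os, sys

  data = list(range(1000))
  n_workers = 4
  chunk = len(data) // n_workers

  for w in range(n_workers):
      if os.fork() == 0:              # child: copy-on-write image, no shared state
          part = data[w * chunk:(w + 1) * chunk]
          print("worker %d: sum = %d" % (w, sum(part)))
          sys.exit(0)                 # child exits instead of continuing the loop

  for _ in range(n_workers):
      os.wait()                       # parent reaps all children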

For another approach that could be made to work reasonably well, try Go. Its central idea is that you pass messages to the processing job, and not vice versa. Figuring out how to schedule threads is a complex task, but run-time scheduling heuristics should do a reasonable job of that for most problems. (Writing algorithms that reliably avoid having bottlenecks will be an interesting challenge.)

And a third approach to watch is Parallel Haskell. Because of the guarantees it offers, the opportunities to rewrite the program at run-time based on what is appropriate are extremely interesting.



I agree that there's no technical reason for artificially limiting yourself to a single (but really fast) "computational unit".

But I wonder whether it will be worth the trouble (at least to the majority) if you can basically get away with only worrying about one processor, and that one processor is no longer limited in its clock rate by the current-leaking, massively heat-dissipating MOSFETs we have today.


It's even more different than that.

It's a collection of nodes, any of which can be either CPU transistors, bitkeepers, or both, at any given time - and change with the problem space.

While we may be able to target it with Erlang or JavaScript or whathaveyou eventually, it's going to need a completely different kind of instruction set than what any CPU currently has.


> That part is mind-blowing.

It sounds like version 0.1 of computronium.


It's exciting, for sure. Although I do have to remain skeptical that it will happen as soon as the author hopes. These sorts of changes require so many companies, organizations, and people to change their tools across the entire spectrum of computing: low-level hardware, compilers, and possibly programming languages. Then there will be tools to cross-compile existing software onto these new architectures, and those tools will undoubtedly have a host of bugs of their own.

Nevertheless, if and when this happens, it's going to be FUCKING AWESOME.


I've just created a Google group to discuss implementing this and other game-changing techs. Because if you are going to start from scratch with computers again, you want to correct some of the other problems we have with current systems (e.g., security architectures).

http://groups.google.co.uk/group/fca-t

Spread the word (I was kinda waiting until I had some more content before I publicised, but this seemed like a good opportunity).


Too bad it can't happen the way revolutionary technology used to get started: in a garage. Leaving this goal to corporations with only a quarterly focus will not bode well for the task.


> And what implications would that have for programming languages?
Probably not that many, as programming languages abstract data manipulation and this new technology only requires/allows different ways to implement data representation and manipulation.

Now these differences in representation and manipulation are large, as you don't need to 'get' and 'store' data from the memory to the CPU anymore, as the memory is the CPU. Calculations could basically be implemented as moving data from one part of the memory to another.


so, like an FPGA?


I think more like what FPGAs could have been. FPGAs today are generally used as a cheap way to implement an application-specific circuit. Once they are programmed, they stay in a fixed configuration. It is theoretically possible for the configuration to change on the fly, but it is almost never done.

Memristors would implement physically what FPGAs could technically emulate using transistors. With this come the benefits of lower power consumption and the ability to keep state when powered down (which FPGAs can't do).

I think cellular automata like the Game of Life are a better mental model for memristor circuits (but I am probably hugely wrong here).


Glad to see I'm not the only one thinking about FPGAs. The thing I like the most is that this will likely provide an easy way to load a circuit/CPU schematic onto a bare memristor-based component. If the computing paradigm doesn't change completely in the meanwhile (unlikely), being able to dynamically load/clone some x86 cores, GPU-like cores, etc. directly from a schematic description and use them (old-style system emulation) is, in my opinion, quite cool.


Positronic? (The brains of Asimov's robots.)

CPU with data is a bit like objects - except asynchronous, with true message passing. Like Smalltalk or Erlang (or web services, for that matter).

Brain images, with parts lighting up depending on how active they are, suggest that much of our brains isn't being used most of the time, but comes online as needed. It's as if one part calls another part, except the "call" doesn't block. I'm not being very coherent here.


I hereby suggest that we pronounce these things as memri-stors rather than mem-ristors.


It may sound better, but you're slaughtering the root of the word, which is "resistor." Memristors are useful as a storage element, but that's only one possible use, not the fundamental nature of the device itself.


Absolutely. The comment was very much tongue-in-cheek. :)


Slaughtered or not, it's an awkward word to say "properly", and awkward words get changed.

I find kilometer much easier to say the American way, "kill-ah-meter," rather than the correct way of "kill-Oh-meter." I suspect people will pronounce these things in a way that isn't painful to the tongue, and memri-stor is that way.


Wow. When I asked this question, I wish I had been as thorough and critical in my thinking as this guy.

It's so hard to guess what this means, but I wonder if I should start writing a memristor VM just to see what could be done.

Even if they are here in 5 years, I bet it'll be longer than that before we really, truly know what to do with them.


Yes! A VM is exactly what we need right now for hackers to start wrapping their heads around the memristor paradigm. Even if it's only practical for very lightweight proofs-of-concept, it would be a major step for mindshare.


The theoretical density they give neglects some overhead. If you divide the area of a chip by the size of a transistor, you should get 100 billion transistors on a chip, but in fact you can only get about 1 billion. The rest is overhead: wiring, power, isolation, etc. Probably a similar overhead will apply to memristors.
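
Rough numbers, in case anyone wants to check (the feature size below is an assumed round figure for a circa-2010 process, not a measured one):

  # Back-of-the-envelope: ideal vs. achieved transistor counts.
  chip_area_nm2 = 1e7 * 1e7          # a 1 cm x 1 cm die, in nm^2
  feature_nm = 32                    # assumed process node
  ideal = chip_area_nm2 / feature_nm ** 2
  print("ideal:    %.0e" % ideal)    # ~1e11, i.e. about 100 billion
  achieved = 1e9                     # roughly what shipping chips manage
  print("overhead: ~%.0fx" % (ideal / achieved))  # ~100x to wiring, power, isolation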

Also, in the 5+ years it takes to make them practical and reliable, transistors will make progress too. So it's unfair to compare the theoretical density of a new technology to the achieved density of an existing technology.


For one thing, orthogonal persistence will become the norm. A computer that needs to boot up will become a quaint thing of the past. Capability systems might become widespread because of this, resulting in finer grained security throughout the entire computing world. Small, cheap, but voluminous and low power memory stores will allow for greatly increased capabilities for computerized tags of physical objects. Vernor Vinge's localizers or Bruce Schneier's Internet of Things could come about because of this technology.


I've lost the link, but someone here posted a PDF on propagation networks a while ago. I can see that being married to this technology somehow and becoming the ideal computing fabric.

I'll dig around a bit to see if I can find the link.


Yeah, this is true functional programming writ large.

The data, which is usually huge in comparison to the code, moves around slowly and transparently. The programming goes to the data. I'm willing to bet the next step would be embedded memristor blocks in a human body, providing multiple petabytes of both storage and processing capability (and perhaps with some sort of primitive neural interface).


> I'm willing to bet the next step would be embedded memristor blocks in a human body,

I think that sort of enhancement is more than just a little while off, if it ever comes to pass.

Ok, finally found it, posted it here: http://news.ycombinator.com/item?id=1322135


Well we're already embedding RFIDs in people, so chips with limited radio potential and some rudimentary processing power are a done deal.

The question becomes how these embedded processors will change over time. I don't think it's too unreasonable to imagine the memristor tech being used and adoption rates going up. The neural interface, of course, was extremely speculative. But the rest of it seems trivial. At least to me.


My thoughts would be that heat would become an issue. RFID chips are much lower power.


Don't know.

Should be easy enough to do the math.

I would assume some sort of bio-friendly heat sink, perhaps a silver lace spread through the abdomen or something.

Fortunately getting rid of heat is already well understood.


Would it be worth buying HP stock? They could own the next generation of computers.


> This allows multiple petabits of memory (1 petabit = 178TB) to be addressed in one square centimeter of space.

Given the sugar-cube reference earlier, and that nothing is truly 2D (much less tons of stacked circuits), I assume they mean one cubic centimeter. Taking that assumption, I point out a problem with that storage claim:

Heat. Good luck dissipating that little cube. Heat's one of the biggest reasons we don't have way higher-power machines now; you just can't continually stack things together.

* goes back to reading * very interesting article, though. I'll have to watch the video too. Homework first, though :|


Memristors are completely powered off when not accessed, making heat scale entirely with access speed, not chip size.

Also, memristors are not stacked, they are deposited on a surface a layer at a time. This is a meaningful difference -- stacked chips are much harder to manufacture, and have much worse thermal conductivity than what is essentially a solid chunk of TiO2.


Even if things don't pan out as Stanley Williams says, there will be enough innovation atop the technology to make many things a reality.

I have known about the memristor from the beginning--before all the "hype"--as I interned at HP Labs, and I drool at the thought of their vision for intelligent systems with in-memory processing. I mentioned this in another post, but I predict memristors will make computer vision possible. This means autonomous vehicles, better airport security, etc. Similarly, anything involving sensors and machine learning will see unimaginable progress.

Nanotech is real, folks. The question isn't "if?", it's "when?" It will go through many iterations, but it's now real.


Could someone explain the author's conversion of petabits to terabytes?

(1 petabit = 178TB)

http://www.wolframalpha.com/input/?i=petabits+to+terabytes


It is clearly an error.

However it should be noted that there is confusion floating around about those terms. The problem is that 2^10 is approximately 1000. Therefore sometimes people use factors of 1000 and other times people use 1024 in very similar contexts. This can lead to confusion and odd ratios.

What looks like it happened in this case is that someone consistently used factors of 1024 rather than 1000, found that a petabit is 128 terabytes (which technically should be called tebibytes), and then miscopied 128 as 178.
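
Spelled out (a one-liner sketch, using 1024-based units consistently):

  bits = 2 ** 50            # one petabit, binary-prefix style
  tib = bits / 8.0 / 2 ** 40
  print(tib)                # 128.0 tebibytes -- so "178" reads like a typo for 128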


Wild guess, 1 petabit = 0.125 petabytes * 1024 = 128 TB, with a typo making the 2 a 7.


Typo in title (memristors)


Oops, thanks for the heads up, posted in a hurry this morning.

Nice recap of details I had read a bit of a few years ago... also, highscalability is a great blog that is likely of interest to many HNers.


Can anyone explain at a high level how these things work? Even the wikipedia article loses me fairly quickly. From what I understand, they were predicted to exist based on a theory about the relationship between capacitors, inductors, and resistors. What variables would be of interest in this relationship?
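
The usual statement of that symmetry, as far as I can tell (Chua's 1971 argument): the four basic circuit variables are voltage v, current i, charge q (the time integral of i), and flux linkage phi (the time integral of v). Three of the pairings were already taken by known elements, which left a fourth:

  \begin{aligned}
  dv       &= R\,di  &&\text{(resistor)} \\
  dq       &= C\,dv  &&\text{(capacitor)} \\
  d\varphi &= L\,di  &&\text{(inductor)} \\
  d\varphi &= M\,dq  &&\text{(memristor)}
  \end{aligned}

Since q is the integral of current, M makes the device's resistance depend on the history of the current through it -- that history-dependence is the memory.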


Imagine the security rift this will create between old-style CPUs and new. Memristors would tear through current encryption like a knife through butter.



