IBM scientists say radical new ‘in-memory’ architecture will speed up computers (kurzweilai.net)
299 points by breck on Oct 27, 2017 | 133 comments



This is the promise of memristors. Innumerable articles have been written about neuromorphic architectures as if they'll be something miraculous, but this ability to change from functioning as a bit of memory to being a bunch of functional logic on the fly, at the speed of a memory read? That's going to be crazy. It will open up possibilities that we probably can't even imagine right now.

I've never understood why people don't get more excited about memristors. They could replace basically everything. Assuming someone can master their manufacture, they should be more successful than transistors. Of course, I'm still waiting to be able to buy a 2000 ppi display like the one IBM's R&D announced back in the late 1990s or so... so I guess I'd best not hold my breath.


> I've never understood why people don't get more excited about memristors.

personally, mostly because i've been seeing articles about how they're just about to totally upend computing for the last ten years


I've noticed that overly excited articles tend to be nonsense until they're finally true.

I know that sounds like tautological nonsense, but the trick comes down to predicting when these things will become real rather than if they become real. From my experience, if you want a good indication, look at price/performance trends for a particular technology. If that technology has no price/performance data, look at the price/performance data of its components or related technologies.

If you can't find price/performance data, then don't get too excited yet.


> I've noticed that overly excited articles tend to be nonsense until they're finally true.

The funny thing is that if you reread Wired's article about Push, which they've been made fun of for ever since writing it, the only things that haven't come true are the technologies they predicted would power it. But other than that it's a perfect description of Reddit, Facebook, YouTube, etc.

https://www.wired.com/1997/03/ff-push/


Funny, but I was thinking pretty much the same thing about push a day or so back myself.


10 years? I've been seeing them hyped as the "next big thing" since the 90s. HP keeps pumping out press releases every so often.


How? The first memristor was created in 2008.


The concept goes back to at least '71

http://ieeexplore.ieee.org/document/1083337/


Correct on concept, but up until 2008 nobody produced one reliably.

You can find references and hints to them all the way back to Maxwell's day. They tended to be regarded as anomalous circuits back then, however.


The fact that nobody has produced a fusion generator does not mean there are any fewer articles claiming that fusion is 'just around the corner'.


Well, if they were 10 years away in 2008, that should mean the first 90% is done this year, then it'll be another 9 years for the remaining 10%? So 2026? ;-)


It took at least that long to go from the theorized device to the first prototype.


..."hyped as"...


Pretty sure I first read about them in a PC World magazine I got at the airport on a family vacation in like 2003


It's been a dream since the term was invented in 1971. HP Lab's big announcement that they had cracked memristors and they were right around the corner was in 2008 (which is quite nearly a decade ago now).


As far as I know I submitted the first ever article about memristors to HN, which was this in 2008:

https://news.ycombinator.com/item?id=177865


HN has only been around since 2007


Ambiguous language. I think what was being implied is that it was the first occurrence of that type of article on HN, not that the article was the first ever written about memristors and happened to be posted to HN.


Not really that relevant to the ongoing point of the age of memristor articles then.


There was a world of media before HN, you know. Just saying.


I don't think he was implying otherwise.


Doubtful; the first purpose-built memristor existed in 2008. Though to be honest, we've seen them since the 1800s as oddities in certain electrical circuits.

They've been theorized mathematically since 1971, but until fairly recently they haven't been a thing.

It is the fourth fundamental circuit element; give it a while before discounting it. And be happy you're alive when the fourth fundamental circuit element was discovered and made real.


Feynman also ranted in the '80s about how much processing within the memory would speed up computing. He challenged those "computer scientists" to figure it out :P

.... this lecture: http://cse.unl.edu/~seth/434/Web%20Links%20and%20Docs/Feynma...


Very insightful post. Thanks for that.

I skimmed the paper and enjoyed the illustrations, and the photos at the end :).

So Memristors are about reversible gates instead of non-reversible NAND gates?


I think what he implied was that more reliable calculations would come from reversible gates, though they'd ultimately be slower; the speed was one of the main reasons they weren't used in processor design. One of the solutions to that problem was a parallel design that compensates for the slowdown of individual calculations by running many of them at the same time, and of course by performing the calculations right within the memory, avoiding the added latency of fetching and storing every value and performing every calculation synchronously.

So I think it's more that memristors would be a solution that allows the use of reversible gates making for more reliable processors. If anybody else has any better insight, fire away.


I think maybe the relevant bit - kind of a job to find in 16 pages:

> Another plan is to distribute very large numbers of very simple central processors all over the memory. Each one deals with just a small part of the memory and there is an elaborate system of interconnections between them. An example of such a machine is the Connection Machine made at M.I.T. It has 64,000 processors and a system of routing in which every 16 can talk to any other 16 and thus 4000 routing connection possibilities.
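Presumably the arithmetic behind the 4,000 figure (my reading, not spelled out further in the lecture) is just:

  $\frac{64{,}000\ \text{processors}}{16\ \text{processors per routing node}} = 4{,}000\ \text{routing nodes}$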


That's how I've heard them referred to since at least the early '90s: processor-in-memory. I didn't know memristor referred to the same concept.


I still remember the revolution of clockless circuits published in SciAm something like 25 years ago...


Try 45 years.


I can't speak for other people, but I personally don't try to get over-excited about anything unless I see a working proof of concept where the advantages are clear. It's kinda unfortunate, because there's great work being done in research that I should probably feel much more excited about, but I'm continuously bombarded with new discoveries that I never hear about again.


I personally don't try to get over-excited over anything unless I see a working proof of concept where the advantages are clear.

Those are the two things you need to look for. No proof of concept? Well, talk is cheap! Have a proof of concept, but the advantages aren't clear? Again, talk is cheap. Refine it until the advantages are demonstrable!


That's fine, but it is a very conservative approach. I also can't speak for other people; however, I personally get excited about new things that don't have a working proof yet and where the advantages aren't clear. It makes for a lot of disappointment and isn't for the faint of heart. I'd rather see the future as it happens, false starts and all.

So the bleeding edge is where I want to be.


Transistors provide something that resistors, capacitors, inductors and memristors don't, and thus they can't be replaced. That thing is gain, i.e., amplification (of either voltage or current). All the others are lossy, so if you imagine some building block, then cascade N of them in a row, eventually the signal will become too weak (at some N) to be useful.
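As a rough sketch of the cascade argument (generic attenuation factor, numbers mine):

  $V_N = a^N V_0,\quad a < 1 \;\Rightarrow\; V_N \to 0 \ \text{as}\ N \to \infty$

A stage with gain $g \ge 1$ breaks that decay, which is why every practical logic family is built around a gain element that restores signal levels at each stage.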


Do you remember when IBM sold off their hard drive research division?

I was convinced this meant they knew memristors were going to kill hard drives soon and they were profit taking before killing them off.

That’s been more than ten years now. I am still waiting, but made other plans.


In all fairness, the hard drive industry as a whole has shrunk considerably since. Same for servers and PCs. I think they made the right decision, but I'm not sure I'd go as far as to say they had a good strategy for what they would do once they got rid of these product lines.


They sold it when hard drives became a commodity item - generally commodity items don't get major R&D attached to them.


There's still plenty of R&D being done with storage: https://www.anandtech.com/show/11925/western-digital-stuns-s...


Well, first off, their endurance is currently not even good enough to replace DRAM, let alone replacing critical path transistors. They also take significantly more power to switch and are much slower. They also tend to be highly unreliable (they even have variable write time!) and hard to manufacture. Really, the only useful advantage they do have is their theoretically incredible density, but even most of this comes from the possibility of 3D stacking (also possible with transistors) and from the possibility of a crossbar memory, but crossbars have epic sneak path issues.


>I've never understood why people don't get more excited about memristors

For the same reason they aren't excited about flying cars?

Because they wait until something is actually delivered in a form they can use?


> I'm still waiting to be able to buy a 2000 ppi display like IBM's R&D announced creating back in the late 1990s or so...

You're not thinking of this and mistakenly adding an extra 0, are you? https://en.wikipedia.org/wiki/IBM_T220/T221_LCD_monitors#His...


They were right. Retina displays aren’t even the high PPI ones anymore. They are the equivalent of second generation laser printers.


> They were right.

Who was right about what?

> They are the equivalent of second generation laser printers.

What does this even mean?


Roentgen and Retina displays are in the neighborhood of 200 ppi, which is about when laser printers became 'clear' and legible, which was the thesis behind Roentgen.

But now you've got smart phones pushing in the region of 400 PPI. It's smoother, and makes Apple displays look like laggards. But they're beyond the point where most eyes notice the difference without magnification.


The PPI required for the "Retina" effect is largely dependent on viewing distance, which is why Apple vary the PPI according to the device; they aren't a blanket ~200 PPI as you suggest.

The Apple Watch and most iPhones (including the original Retina display on the iPhone 4) are around 300-400 PPI, with the X being 458. Only Apple's "Retina" laptop/iMac displays have been around 220, which normally looks great given the increase in the viewing distance.

Wikipedia has a pretty nice breakdown of the PPI of all Retina displays Apple has shipped:

https://en.wikipedia.org/wiki/Retina_Display
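Roughly, assuming the common one-arcminute acuity rule of thumb (my assumption; Apple doesn't publish a formula):

  $\text{PPI}_{\text{required}} \approx \frac{1}{d \cdot \tan(1/60^{\circ})} \approx \frac{3438}{d\ \text{(inches)}}$

which works out to roughly 286 PPI at a 12-inch phone viewing distance and roughly 191 PPI at 18 inches for a laptop, in line with the figures above.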


Higher DPI may be required for those kids these days who hold their phones a few centimeters/inches from their eyes. I call it sera rubens myopia, Latin for glowing myopia.


But now you've got smart phones pushing in the region of 400 PPI. It's smoother, and makes Apple displays look like laggards.

I think Apple displays look completely fine (much better than average, in fact, since they focus so much on color accuracy).

The ridiculous Android PPI arms race is one reason why iPhones are way ahead in all performance benchmarks. What's the point of all those extra pixels you can't even see if the GPU can barely manage to fill them?


There is a lot more to screen quality than PPI: color fidelity, refresh rates, etc. Retina was great when other displays weren't; now that most devices have Retina-level displays, going beyond that level isn't adding as much as making sure your color reproduction is accurate and your device can update the screen at a rate fast enough that animations don't judder.


I will say I'm perfectly happy with the PPI on Apple devices. If I seemed to be condoning the people chasing numbers to increase sales, that wasn't my intention.

I do joke though that some day Apple will introduce iGlasses, which will be AR glasses with electronically adjustable refractive indexes. When it detects a smartphone in front of your field of view it will zoom in 2x and do image stabilization so you can see your 500 ppi screen.

Like a high tech version of the computers in Brazil...


1G started in 1976 with the IBM 3800 at 240 dpi.

HP’s first 2G laser (LaserJet w/ Canon CX engine) was 300 dpi in 1984.

Also I had a 600 dpi LJ 4 which was released in 1992.


Thanks for clarifying the laser printer thing. The connection was super unclear. Now who was right, and what were they right about?


this ability to change from functioning as a bit of memory to being a bunch of functional logic on the fly at the speed of a memory read? That's going to be crazy. It will open up possibilities that we probably can't even imagine right now.

There might be clever ways to make a particular circuit dual purpose or something (Storage element that can do math on itself? A math pipeline that doesn't need pipeline flops?)

But arbitrary reprogrammability is already here. It's called an FPGA, and it sees only niche use, not least because developing a bitstream to program it with is a huge chunk of work. So the days of a chip that constantly morphs like a chameleon into unrecognizable new forms are probably a long way in the future.

I like to remind myself the industry is called VLSI - very large scale integration. In other words, the job is not about transistors. More than anything else, it's about managing the complexity of billions of transistors. Digital CMOS is a great example; transistors can already do so much more than CMOS, but it's a massively simplifying design scheme.


FPGA reprogramming is slow. It's like saying we can make more oil because all we have to do is wait a few hundred million years for all our plant matter to rot. And correct me if I'm wrong, but can you even reprogram an FPGA while it is actively being used? I don't believe so. Much less reprogram it from the gates which are active.


Obviously an FPGA is not as slick and fast as a memristor chip could maybe be, but if the economics of a morphic processor made sense, wouldn't you expect to see FPGA coprocessors and a library of utility bitstreams dynamically loaded as needed?

Of course, the reason we don't see that is the economics aren't great, the performance boost isn't either, and developing those utility bitstreams is a herculean task by itself.

The real-world incarnation of all this, I would argue, is the SoC. Silicon is cheap enough that you don't make one mighty morphic chip; you make several dedicated IP blocks and put them all on one chip.


> And correct me if I'm wrong, but can you even reprogram an FPGA while it is actively being used?

You're wrong, this is called partial reconfiguration.


Can you reprogram the part of an FPGA that is actively being used?


The answer is no, but the sad part is, there's no real reason why not.

An FPGA isn't really a "gate array," but a huge ensemble of small lookup tables that are made with what amounts to very fast static RAM. If major FPGA manufacturers didn't guard their bitfile formats with their lives, it's possible that entirely new fields of research would have emerged to find new ways to take advantage of the existing hardware in these chips, much like what GPGPU researchers have done with graphics cards.
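A toy model of the "lookup tables made of SRAM" idea, purely illustrative and not any vendor's actual bitfile format (function names are mine):

  # A 2-input LUT is just 4 bits of SRAM indexed by the inputs; which 4 bits
  # you load decides which gate the cell behaves as.
  def make_lut2(truth_table):
      def lut(a, b):
          return truth_table[(a << 1) | b]
      return lut

  AND = make_lut2([0, 0, 0, 1])   # "reprogramming" = rewriting those 4 bits
  XOR = make_lut2([0, 1, 1, 0])

  assert AND(1, 1) == 1 and XOR(1, 0) == 1

Partial reconfiguration is, at heart, rewriting some of those bits while the rest of the fabric keeps running; the closed bitfile formats are what stand between third parties and doing that freely.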


Hmmmm.

I wonder how expensive it would be to make a really really tiny FPGA with a few tens of cells in it.

As in, I realize chip design is expensive and crazy complicated, but small runs are only around $5k or so. I'm wondering if FPGA manufacturing has similar costs - or at least similar-ish enough to be potentially interesting.

If you could go from zero to a tiny prototype FPGA for 5 or 6 figures, you might just be able to pull some VC money out of thin air to kickstart with. Or you might not even need to go in the VC direction; there's bound to be a group of people out there with collectively enough between them to kickstart something like this. I'm uncertain whether Kickstarter would be a horrible disaster.

With prototypes made, you'd just need to send a few of the prototypes out to some smart PhDs and hardware hackers, get them to show what's possible, and that could be a viable way to get further investment and complete the bootstrap cycle.

Obviously the point of all of this is to make a chip that's as open as reasonably possible. Physical chip manufacturing is likely to be NDA'd (for example the SRAM would probably be supplied as a reference design from the foundry) but the "how to poke this with software" could be super open. Would be fun.


Here are a couple of articles about a homebrew FPGA (only one logic cell, but it's a step in your direction).

https://hackaday.com/2012/09/20/homebrew-fpgas/

http://blog.notdot.net/2012/10/Build-your-own-FPGA


ooo, interesting. Thanks for this!


> If major FPGA manufacturers didn't guard their bitfile formats with their lives[...]

Why do they?


I should say they don't have real reasons, but they have plenty of weak ones. They see their toolchains as part of their competitive advantage, which is weird because FPGA toolchains are some of the most annoying software applications around. The market as a whole could only benefit from additional third party involvement.

Especially at the high end, the manufacturers commonly introduce new hardware constructs -- high-speed transceivers, clock generators, memory controllers, that kind of thing -- and they don't really want to document and disclose the inner workings of these components at a level that would allow third parties to write tools for them. It could potentially take a non-trivial amount of documentation and customer support work. Meanwhile they want to preserve their trade secrets, and they also don't want to attract infringement suits from either patent trolls or legitimate competitors. Disclosure of bitfile formats would open those particular kimonos a bit too wide for comfort. This is largely why Linux users can only get GPU support via binary blobs.

At the low end, some good reverse-engineering progress has been made by people like Clifford Wolf on the Lattice parts. This work is certainly important, but as long as there's reasonably rapid technological progress on the hardware side, reverse engineering is never going to be enough to sustain a healthy third-party tools industry.

It's a real shame. Imagine how far we'd (not) have come if Intel and Motorola had walled off their instruction sets...


Well with most claims that generalize computing power, such as the title of this, it's mostly aimed at something more specialized.

"The researchers believe this new prototype technology will enable ultra-dense, low-power, and massively parallel computing systems that are especially useful for AI applications."

So it seems that would benefit AI/Machine Learning more than GP processing, from that statement.


I'm not so sure. CPUs are stupidly fast, now - the bottleneck is virtually always the time it takes to move data in and out of memory. What if, instead of trying to bring the data to the calculation, you brought the calculation to the data? You could speed up everything.

Fun to think about: in this scenario, the main job of an "optimizing compiler" would be to organize things spatially so that, as a calculation moved around in memory, it always found itself right next to the data it needed, when it needed it.


Computational memory can't speed up 'everything' because most software doesn't benefit from single operations that span a large chunk of data (AKA SIMD ops, as on GPUs). This technique will always improve only a small select fraction of present and future software, like games/graphics, video and voice processing, and general purpose vector operations (like tensor ops in deep learning).


Or programs are not written to use computational memory because there isn't any for them to use.

Besides, computational memory is not restricted to SIMD. It's limited to local memory, but not to synchronized execution.


How many programs can even manage to use two full-featured cores in parallel, with everything made as easy as possible? How many even try to do calculations on a GPU?

I'm not holding my breath for more than 1% of programs to use this. Or if it gets built into GPUs, then 1% when possibly excluding rendering code.


How many programs need to? The ten year old laptop I'm writing this comment on is fast enough for everything I want to do. Even my Raspberry Pi is fast enough for almost everything I want to do.


There is no reason whatsoever that this would be limited to SIMD operations. They will be easy to do because you just spam a certain set of gates out in an array, but you could be laying down completely novel, or programmatically determined, gates instead.


This idea has come around many times, all the way back to the ILLIAC IV, Thinking Machines, and the NCube. The only commercial success is GPUs, which really do have elements with both compute and memory crunching away in parallel. Other pure-crunch applications have been found for GPUs and things like them. There are now GPU-like compute units for servers, although in many cases they're still GPUs, with frame buffers and video hardware that will never generate an image.


I think one could make a case for NUMA systems being a dilution of this concept. But my recollection of the ILLIAC IV is that it was pretty much what I'd call a NUMA system.


Brings to mind images of Conway's Game of Life, with computations moving about a grid of memory as they create and destroy new bits of data.


And that brings to mind http://www.conwaylife.com/w/index.php?title=HashLife, an example (of many) of an algorithmic improvement that gives a way higher speedup than parallelizing and throwing hardware at a problem.


Hashlife is a brilliant algorithm. For anyone not familiar (and vaguely interested in Conway's GOL), highly recommend reading up about it. I found it mildly mindblowing :)



It makes perfect sense, because this is already how good programmers look at the task, especially considering the penalties for memory access on secondary devices.


AI stuff would be easy to implement on it, but general computation will almost certainly benefit just as much once research is done. It's a fundamentally different architecture when your assembly language can include instructions that essentially re-wire the FPGA which is your processor/storage. That makes possible entirely new classes of algorithms.


The researchers are focused on AI applications, and as such I expect them to advertise that. This technology has far more potential than just that, right down to simple stuff like realtime waveform or spectrographic analysis. Being able to store and process on the exact same silicon, without having to run out a few inches to retrieve data, will boost latency-sensitive applications.


That's why. Memristor articles were all over the news for a bit, but have died down as either companies realized they're too hard to manufacture, or companies have quietly plodded toward making them competitive.


"this ability to change from functioning as a bit of memory to being a bunch of functional logic on the fly at the speed of a memory read?"

My excitement is tempered by considering the areal demands on the silicon for the putative "smart memory". Suppose, just for the sake of argument, you want your smart memory to be able to take a 4K block of 64-bit integers and add them together. It happens incredibly quickly, sure, though you'd have to get an expert to tell me what the fanin can be to guess how many cycles this will take. But you're now looking at massively more silicon to arrange all that. And adding blocks of numbers isn't really that interesting; we really want to speed up matrix math. Assuming hard-coded matrix sizes, that's a whackload of silicon per operation you want to support; it's at least another logarithmic whackload factor to support a selection of matrix sizes and yet more again to support arbitrary sizes. In general, it's a whackload of silicon for anything you'd want to do, and the more flexible you make your "active memory" the more whackloads of silicon you're putting in there.
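For scale, assuming a plain binary adder tree (my assumption, nothing from the article):

  $N = 4096:\quad \text{levels} = \lceil \log_2 N \rceil = 12, \qquad \text{adders} \approx N - 1 = 4095$

So the latency is tiny, but the silicon cost is roughly one adder per pair of words, which is the whackload in question.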

It may sound weird to describe a single addition set of gates as a "whackload", but remember you need to lay at least one down for each thing you want to support. If you want to be able to do these operations from arbitrary locations in RAM, it means every single RAM cell is going to need its own gates that implement every single "smart" feature you want to offer. Even just control silicon for the neural nets is going to add up. (It may not be doing digital stuff and it may be easier to mentally overlook, but it's certainly going to be some sort of silicon-per-cell.)

Even if you were somehow handed a tech stack that worked this way already, you'd find yourself pressured to head back to the architecture we already have, because the first thing you'd want to do is take all that custom silicon and isolate it to a single computation unit on the chip that the rest of the chip feeds, because then you've got a lot more space for RAM, and you can implement a lot more functionality without paying per memory cell. And with that, you come perilously close to what is already a GPU today. All that silicon dedicated to the tasks you aren't doing right now is silicon you're going to want to be memory instead, and anyone making the smart memory is going to have a hard time competing with people who offer an order of magnitude or three more RAM for the same price and tell you to just throw more CPUs/GPUs at the problem.

RAM that persists without power is much more interesting than smart memory.


Yeah, this seems like the exact same problems that we already have with FPGAs.

1. They're really hard to program for in a way that is easy to understand and scale.

2. They eat a ton of power. Heat = power = max speed. If you can't make it better than today's, existing ASICs are still going to be used.

The intersection of ASIC + reconfigurable serial processes (aka sequential programming) strikes a really sweet spot between power and flexibility that I think is going to be hard to unseat.

I think FPGAs are incredible but if this were true I think we'd already see them taking over the world.


> 2.

so you don't see the potential combination of this and that?


No, you don't need more gates in each memory cell to support each possible operation. The memory cell turns into gates. They are the same thing, just in a different configuration. And yes, it would be a ton of silicon, likely more than you're imagining. It would subsume all CPU silicon, all cache, all main memory, all mass storage. Into one fabric which at any point can either store data, or compute. And yes, it persists without power too.


You've jumped to the idea that somehow this research makes microarchitecture disappear. It won't.


Maybe for the same reason as people don't get more excited about quantum computers -- they can replace basically everything as well, yet we don't see them around.


Quantum computers require both very specific configuration for each problem and cryogenic cooling. Both of those issues mean there's no reason to suspect they will find more use than as a daughtercard any time soon. Memristors, on the other hand, could replace CPUs, RAM, SSDs, bulk storage, and FPGAs with one consistent lattice, a totally unified memory architecture, and an economy of scale that I don't believe has ever existed before.

And that's BEFORE you do any of the exotic research of moving the computing elements around during operation. Just imagine if there was no memory hierarchy, where accessing a byte in mass storage is as fast and as easy as accessing a register. Quantum computers can't replace commodity hardware and there's no reason to think they will. Memristors, on the other hand, are one of the fundamental circuit elements and are destined to become as widespread as resistors, capacitors, and transistors.


The speed of light dictates that there will always be a memory hierarchy. A nanosecond is just a couple of inches of wire [1]. Communication overhead and heat issues are what ultimately limit computational power [2].

[1] http://www.youtube.com/watch?v=JEpsKnWZrJ8

[2] https://en.wikipedia.org/wiki/Limits_of_computation
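The back-of-the-envelope number behind [1]:

  $c \cdot t = (3 \times 10^8\ \text{m/s}) \times (10^{-9}\ \text{s}) \approx 0.3\ \text{m} \approx 11.8\ \text{inches (in free space)}$

Signals in real interconnect propagate a good deal slower than c, so "a couple of inches per nanosecond" is a fair working figure on a board or chip.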


If you open the box their wave state collapses and they are either excited or bored. :-)

(ok that was a long way to make a Schrödinger joke) There are a lot of people who are very excited about Quantum computers and working on them, as they get more 'real' people on the fence move over, etc. I watched this on a smaller scale with 3D printers, as only a few people were excited about them and then as they demonstrated more and more interesting things the wave of excited people expanded outward from that core.


And of course, people do get excited about quantum computers...millions and millions of $ a year excited.

But yeah, their mostly non-existence is exactly why people aren't more excited.


I've never understood why people don't get more excited about memristors.

Probably because they have been promising them too long. I remember watching a video lecture in 2008 about how IBM was on the verge of revolutionizing everything with them.

Meanwhile in the same timespan GPU/DNN have upended the computing landscape.


I thought memristors were RAM that doesn't lose state when losing power.

Is this article about memristors? That wasn't clear. Can memristors perform computations?


A memristor is any device whose resistance changes, in a predictable and reversible way, in response to the voltage that's gone across it. The phase-change memory they describe is one type. Memristors can form any logical gate. They are essentially resistors whose value can be changed dynamically and persistently. Rather than having them store data directly (which you can do, sure, and we'll probably see devices doing that soon), they can form logic gates in much the same way transistors do - except they can be changed in configuration while running. They can form a set of gates that function as a traditional memory latch one nanosecond, then the next be part of an adder. It's as if re-programming an FPGA were so profoundly free as to be exactly as burdensome as performing an AND.


That sounds like the optimization problems with a VLIW architecture taken to a whole new level.


I'm not an expert on this, but I think you might be thinking of nvram/non volatile RAM, "memristor" is a portmanteau of memory and transistor, so I'd expect that they have properties of both.


Not quite. Memristors are resistors with a memory.

Specifically, as current is run across it, it builds up more and more resistance. Then, as the current is reversed (the plus and minus ends of the resistor are switched), this resistance will go back down.

Thus it can be used as non-volatile memory in the sense that certain tiers of resistance can be designated as a bit.

Note that, as it is fundamentally an analog component, one can potentially also use this property to emulate a neuron.
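For reference, the textbook charge-controlled form from Chua's 1971 paper linked upthread:

  $v(t) = M(q(t))\, i(t), \qquad M(q) = \frac{d\varphi}{dq}, \qquad \frac{dq}{dt} = i(t)$

The memristance M depends on how much charge has flowed through the device, which is exactly the "resistor with a memory" behavior described above.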


Thanks very much for the correction! Removing my foot from my mouth, now.

That's incredibly interesting, especially the idea that it could be used to emulate neurons. I expect that it'd be hard to fabricate them with a very high branching factor, but as I've amply demonstrated, I know very little about this topic.


Currency? Is it a crypto-currency that runs across it? :)


bleeping butterfingers...

Fixed.


Actually, it's a portmanteau of "memory" and "resistor". That's important since it's basically a variable-resistance resistor that "remembers" its resistance.


Ah ha, thanks for the correction!


I was not technologically inclined in high school, but I distinctly remember a peer talking about how cool they were and how much they could be used for. This article is the second time I've ever heard the word. So... I guess the fact that it's been 7 years and they're still just "going to do awesome things" is probably a good reason why.


The best thing about memristors is that this is actually how the brain really works (as an electronic equivalent). There is no von Neumann architecture in our cortex; rather, there are networks of neurons which both compute and store memories.


Neurons are routers that apply weights to their inputs.


I was talking about the bigger picture: neurons IN NETWORKS.


"The result of the computation is also stored in the memory devices, and in this sense the concept is loosely inspired by how the brain computes."

For anyone who is interested in a simple model of how the brain does this, check out "associative memories". The basic idea is that networks of neurons both store memory (in their synapses) and perform the computations to retrieve or recall those memories.

A simple example is the Hopfield network, a single layer of neurons that are recurrently connected with a specific update function: https://en.wikipedia.org/wiki/Hopfield_network

Another is two layers of neurons that are reciprocally connected called a Bidirectional Associative Memory (BAM): https://en.wikipedia.org/wiki/Bidirectional_associative_memo...
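A minimal Hopfield sketch in Python, just to make the "synapses store the memory, the update rule recalls it" idea concrete (toy code, not taken from either linked page; function names and example patterns are mine):

  import numpy as np

  def train(patterns):                      # patterns: list of +/-1 vectors
      p = np.asarray(patterns, dtype=float)
      w = p.T @ p / len(p)                  # Hebbian outer-product rule
      np.fill_diagonal(w, 0)                # no self-connections
      return w

  def recall(w, cue, steps=20):
      s = np.array(cue, dtype=float)
      for _ in range(steps):
          for i in np.random.permutation(len(s)):   # asynchronous updates
              s[i] = 1.0 if w[i] @ s >= 0 else -1.0
      return s

  memories = [[1, -1, 1, -1, 1, -1],
              [1, 1, 1, -1, -1, -1]]
  w = train(memories)
  print(recall(w, [1, -1, 1, -1, 1, 1]))    # a noisy cue settles onto a stored pattern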

edit: grammar


Really fascinating, the low energy aspect is intriguing.


Weren't HP working on a similar idea? What happened to it?

UPDATE: Looks like memristor production didn't work out for them: https://www.extremetech.com/extreme/207897-hp-kills-the-mach...


https://arstechnica.com/science/2017/10/who-needs-a-cpu-phas...

I found this article very useful in understanding what the work was about..

They are using phase change memory to store data and perform computations.


Didn't HP design and I think prototype something like this with memristors, calling it 'The Machine'?

edit: So HP built one with 160 TB of memory. I remember it being proposed with memristors but haven't been able to check if the prototype used them... Does anyone know what is different about IBM's that lets them claim this as a first, though?


As shmerl noted, The Machine turned out to be vaporware and they released something significantly different under the same name. HP couldn't mass-produce memristors. :-(

https://www.extremetech.com/extreme/207897-hp-kills-the-mach...


The prototype didn't even have non-volatile memory, because NV-DIMMs weren't available.


Sounds very much like Content Addressable Parallel Processors such as the one that powered the Staran air traffic control system.

Caxton Foster's book is the major text I know on the subject.

https://en.wikipedia.org/wiki/Content_Addressable_Parallel_P...


What they describe sounds like a pipeline architecture or a systolic array, or a network of interconnected computers. None of these are new concepts from an architectural point of view, but the actual dimensioning could be new.


> None of these are new concepts from an architectural point of view, but the actual dimensioning could be new.

Depends what you mean by "architecture". I disagree that a network of computers is comparable to colocating a computing unit with its memory. There are orders of magnitude in difference in communication costs and failure modes, so at some point you just have to acknowledge that the models are fundamentally different, and should be treated as such.

Certainly they are Turing equivalent, so they aren't more "powerful" in a computability sense, but what's more interesting is the tradeoffs in computational complexity.


Do these benchmarks seem unusual to anyone else? Things like changing the color of all the black pixels in a bitmap simultaneously, and performing correlations on historical rainfall data. Is it because this technology is more suitable for certain types of computations?


I think this is most analogous to SIMD in terms of issuing a single computation over multiple words of data, which would work well for both zeroing data and operating on scalar arrays.
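To make the analogy concrete, here's roughly what that looks like in software, with NumPy standing in for the "one operation across many words" style (fabricated data, purely illustrative):

  import numpy as np

  pixels = np.random.randint(0, 256, size=1_000_000, dtype=np.uint8)
  pixels[pixels == 0] = 255        # recolor every "black" pixel in one sweep

  rain = np.random.rand(50, 365)   # 50 years of daily rainfall, made up
  corr = np.corrcoef(rain)         # all pairwise year-to-year correlations at once

In-memory computing would do the same kind of broadcast operation, but inside the memory array itself instead of marching the data through the CPU.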


I got a little confused by the

> scientists have developed the first “in-memory computing”

as your normal GPUs have register and cache memory mixed in with the processors. I think the novel feature is that they are mixing the processors with non-volatile, flash-like memory rather than with RAM. Which I guess is interesting, but the "will speed up computers by 200 times" presumably refers to an old-school architecture rather than something like the 15/125 TFLOP Volta GPU, which I'd imagine is faster than their thing. (https://news.ycombinator.com/item?id=14309756).


Regarding the application of memristors to AI, here's a bit of a dissenting opinion:

  Unfortunately for neuromorphics, just about everyone else in the semiconductor
  industry—including big players like Intel and Nvidia—also wants in on the
  deep-learning market. And that market might turn out to be one of the rare cases
  in which the incumbents, rather than the innovators, have the strategic
  advantage. That’s because deep learning, arguably the most advanced software on
  the planet, generally runs on extremely simple hardware.

  Karl Freund, an analyst with Moor Insights & Strategy who specializes in deep
  learning, said the key bit of computation involved in running a deep-learning
  system—known as matrix multiplication—can easily be handled with 16-bit and even
  8-bit CPU components, as opposed to the 32- and 64-bit circuits of an advanced
  desktop processor. In fact, most deep-learning systems use traditional silicon,
  especially the graphics coprocessors found in the video cards best known for
  powering video games. Graphics coprocessors can have thousands of cores, all
  working in tandem, and the more cores there are, the more efficient the
  deep-learning network.

From:

https://spectrum.ieee.org/semiconductors/design/neuromorphic...


Micron has had eval RAM modules with built-in cellular automata for, what, 10 years now?

http://www.micronautomata.com/research


For god's sake. I've been pitching this for years .. I can't sigh hard enough.


That's how I feel. I used FPGAs in the late 90s and wanted to try making a parallel chip with say 1024 cores and a few K of RAM per core and then program it with something like Erlang. Then the dot bomb happened, then the housing bomb, the Great Recession, and so on and so forth. The big players got more entrenched so everything was evolutionary instead of revolutionary and I'd say computer engineering (my major, never used) got set back 10-15 years.

But that said, I'm excited that 90s technology is finally being adopted by the AI world. I'm also hopeful that all these languages like Elixir, Go and MATLAB/Octave will let us do concurrent programming in ways that are less painful than say OpenCL/CUDA.


TBH, seeing Chuck Moore talk about his GreenArrays Forth silicon gave me the inspiration. Since all cores are ALU+RAM and can send data/code to neighbors, you get a "smart buffer". I found it so damn exciting I can't stop dreaming about this.

Even a subset of the idea, having blit in RAM (swap, row/col zeroing, etc.), could reduce pressure on the memory bus.


Have you read about the J-machine? An older research machine that was sort of like that. Trouble is that we need to change software and hardware together to really reap the advantage.


So true. Also, the transputer. The tech is catching up. FPGAs, cores on graphics cards. Different implementations, same patterns as late 90s tech.


There was a talk about this at this year's DebConf:

Delivering Software for Memory Driven Computing

https://debconf17.debconf.org/talks/206/


The concept of "Smart Memory" has been around for a while...from 2000 at least: https://dl.acm.org/citation.cfm?id=339673


Watched the video, read the article, but I'm not entirely clear how 'in memory' components differ in principle from just having a CPU with very large register sets?


This will catch up to Moore’s law when it does come out one day


Would this be classified as MIMD?

https://en.wikipedia.org/wiki/MIMD


Didn't early Palm Pilots have something they called run in place memory?


So sorry to find this thread on HN while intoxicated. I need to write a blog post about it in a couple of days. In short, these devices are more capable and realizable than most can imagine.

Disclaimer: Neuromorphic computing with PCRAM devices is my MSc and future PhD thesis topic.


If only there was room in the margin.


Serverless applications

Computerless computing!


Feels like IBM comes up with a new architecture every week and a half or so, and then you never hear from them again. It's like Reddit and its bi-monthly cures for AIDS and cancer.



