A Brain Built from Atomic Switches Can Learn (quantamagazine.org)
124 points by burkaman on Sept 20, 2017 | 20 comments



I think there's a chance we'll build an AGI within the next few decades. But I doubt we'll understand much better how it works than we understand today's human brain.

It will be an enormous advance, of course. It'll probably perform better than humans, it'll probably live indefinitely, there's a chance we'll figure out how to clone it, and we'll have much more ethical latitude to experiment with it. But I'm not sure we'll be able to explain how we built it; rather, the best we'll be able to say is that we helped it happen.


Agreed. The more I think about it, the more it seems there may be nothing meaningful to "understand" past a certain point (not to say we are at that point now). If understanding means reducing complexity, but the brain turns out to be a huge network, there may be an irreducible complexity to it.


Perhaps the AGI will itself be what helps us pierce that veil of complexity, like an ELI5 that can dumb down its own mechanism for mere humans.

Just imagine where we'd be today if we'd had a clone of Richard Feynman in every high-school math and physics class for the past 40 years.


Isn't that what Stephen Wolfram talks about in his "A New Kind of Science" book?


Be that blind watchmaker you want to see in the world!


I'm oddly reminded of Asimov's "I, Robot", where the positronic brain was basically described as a silvery (platinum and iridium) device that Just Worked. And nobody knew why.

https://en.wikipedia.org/wiki/Positronic_brain


This is IMO a strong indication of "emergent complexity" and that consciousness isn't a thing but rather a whole system with no specific center of origin.

The interesting question is whether any system can become aware, as long as it involves some kind of pattern-recognizing feedback loop with memory and reaches sufficient complexity.

I am aware that this isn't proving anything about consciousness, but it's in line with some of my own pocket-philosophical thoughts.


The experiment is thought-provoking. Along somewhat similar lines I have a (completely unsubstantiated) hunch that strong AI ultimately will arise from something like this architecture rather than anything based on the current wave of machine learning. Among several interesting quotes this one was particularly striking:

[O]ne of the world’s largest and fastest supercomputers, the K computer in Kobe, Japan, consumes as much as 9.89 megawatts of energy — an amount roughly equivalent to the power usage of 10,000 households. Yet in 2013, even with that much power, it took the machine 40 minutes to simulate just a single second’s worth of 1 percent of human brain activity.
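Back-of-the-envelope, the gap is staggering. A rough calculation using the quoted figures (the ~20 W estimate for the whole human brain is a commonly cited number, not from the article):

```python
# Rough energy comparison based on the figures quoted above.
# The 20 W whole-brain figure is a common estimate, not from the article.

k_power_w = 9.89e6          # K computer power draw, watts
sim_time_s = 40 * 60        # 40 minutes of wall-clock time to simulate...
real_time_s = 1.0           # ...one second of activity...
brain_fraction = 0.01       # ...of 1% of the brain

k_energy_j = k_power_w * sim_time_s                            # ~2.4e10 J
brain_energy_j = 20.0 * brain_fraction * real_time_s           # ~0.2 J

print(f"K computer:          {k_energy_j:.2e} J")
print(f"Brain (1% for 1 s):  {brain_energy_j:.2e} J")
print(f"Ratio:               {k_energy_j / brain_energy_j:.1e}")  # ~1e11
```

By this estimate the simulation used on the order of 10^11 times more energy than the biology it was imitating.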


For consciousness you need embodiment and goals. Goals shape attention, which shapes perception, which generates consciousness. The environment and its dynamics are essential in creating representations and learning behavior. So I think that in order to have consciousness you need a loop made of environment + agent + reward, not just a recurrent neural net with memory.
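For what it's worth, that loop is essentially the standard reinforcement-learning setup. A minimal sketch of the structure being described (the environment, agent, and reward here are toy placeholders, not anything from the article):

```python
# Minimal environment + agent + reward loop, as the parent describes.
# Everything here is a toy stand-in to show the shape of the loop.

class Environment:
    def __init__(self):
        self.state = 0

    def step(self, action):
        self.state += action  # toy dynamics
        reward = 1.0 if self.state == 0 else -abs(self.state)
        return self.state, reward

class Agent:
    def act(self, state):
        # A real agent would learn a policy; this one just reacts.
        return -1 if state > 0 else 1

env, agent = Environment(), Agent()
state = env.state
for t in range(10):
    action = agent.act(state)         # perception shapes action...
    state, reward = env.step(action)  # ...the environment pushes back...
    # ...and the reward would drive learning here, closing the loop.
```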


Yet here we are, a product of the blind watchmaker. If we can develop embodiment and goals, why shouldn't it be possible for other systems?


Embodiment and goals sort of preceded intelligence, not the other way around. A lizard has them. A dragonfly has them. A flatworm has them.


Exactly, we are just aware of ours.


Sadly, not always.


This was one of the most interesting quotes in the article:

"The way to do this is by training the device: by running a task hundreds or perhaps thousands of times, first with one type of input and then with another, and comparing which output best solves a task. “We don’t program the device but we select the best way to encode the information such that the [network behaves] in an interesting and useful manner,” Gimzewski said."

Since the "weights"/connections in their network are not easy to modify, they must instead figure out what kind of machine they have made by feeding it data and looking at the results. This does seem limiting in that the encoding might have to be arbitrarily complex to get the desired output.


So ... a large part of the intelligence is actually coming from the user's decision of what encoding to use.


>Applying voltage to the devices pushes positively charged silver ions out of the silver sulfide and toward the silver cathode layer, where they are reduced to metallic silver. Atom-wide filaments of silver grow, eventually closing the gap between the metallic silver sides. As a result, the switch is on and current can flow. Reversing the current flow has the opposite effect: The silver bridges shrink, and the switch turns off.

makes me wonder if there is any potential as a non-volatile memory chip.


I think that this is the same basic concept as a memristor; current flowing one way probes the state, enough current flowing the other way changes it.
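As a toy model of that read/write asymmetry (the threshold and units are made up for illustration):

```python
class AtomicSwitch:
    """Toy model: small currents read the state; large ones rewrite it."""

    WRITE_THRESHOLD = 1.0  # arbitrary units; real thresholds are device-specific

    def __init__(self):
        self.closed = False  # whether the silver filament bridges the gap

    def apply(self, current):
        if current > self.WRITE_THRESHOLD:
            self.closed = True    # filament grows; switch turns on
        elif current < -self.WRITE_THRESHOLD:
            self.closed = False   # filament shrinks; switch turns off
        # Sub-threshold currents leave the state intact: a non-destructive read.
        return current if self.closed else 0.0  # conducts only when closed
```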


That's the operating principle for CBRAM memristors.


>The silver network “emerged spontaneously,” said Todd Hylton, the former manager of the Defense Advanced Research Projects Agency program that supported early stages of the project.

I'm torn as to whether the article was intentionally editorialized to be mildly disconcerting, or if it actually is.

Setting aside the seemingly perfect Skynet quote for a moment: this is a substrate that organizes itself in response to electrical input, displays emergent behavior much like the human brain, and does so while consuming an amount of power far closer to the brain's than a modern integrated circuit computing the same task.

I have a couple of questions:

1. The research sounds like it's been ongoing for quite a long time. What's preventing the creation of larger-scale silver nanowire meshes? Or networking many 2 mm × 2 mm meshes together at ridiculous scale?

2. What's the latency like between the artificial synapses? As far as I'm aware, synaptic latency in the human brain is on the order of milliseconds. Meanwhile latency in traditional ICs is on the order of nanoseconds if not picoseconds.


My guess is that even if they built a bigger mesh, they wouldn't know what to do with it. Having a smaller mesh lets you explore applications a bit better.



