Bad article: lots of hyperbole, no real detail on what the computational units are like or why they are cool.
Brains are a non-von-Neumann architecture, yes, and brains can do cool stuff. But there's a huge complexity challenge in using a particular new architecture to do the things brains do.
Even if we had an architecture that perfectly physically modelled the low level components of the brain, there'd be a huge gulf in understanding to be crossed before we could connect the components properly to do useful things.
Imagine it's 1700, and a traveller from the future gives the gift of a set of FPGAs - easy to program with whatever logic gates people want.
How long would it be before people build the first desktop computing environment?
There isn't even any Boolean algebra around at that time; people wouldn't know where to start wiring the logic blocks together to do useful things; all of computer science would still need to be discovered.
So when an article claims that new architecture X is like the brain and will hence do brain-like things Y, we should read it sceptically: there's a whole new science, in that gulf of understanding, that needs to be crossed to get from X to Y.
Lots of buzzwords in the article and in the video, lots of "completely new", "revolutionary", etc., yet it's hard to see what exactly is new about their approach. It's certainly interesting to investigate computers with a neural-network-like architecture, but it's hardly a new concept; in fact, the first neural networks were implemented in hardware. Strangely, I have trouble finding English-language resources about that, but even if you don't speak German you can have a look at Frank Rosenblatt and his original Perceptron here:
From reading (not viewing the videos), it appears to be an [artificial] neural net implemented in hardware. So it's a novel, brain-like, multi-core architecture, programmed in the same way as neural nets (i.e. 'trained').
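The sense in which such hardware is 'trained' rather than programmed can be illustrated with Rosenblatt's original perceptron learning rule - a toy sketch in Python, not anything IBM actually uses: weights are nudged toward the desired outputs until the behaviour emerges, with no explicit instructions written.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of (inputs, target) pairs with target in {0, 1}.
    Nudges weights toward correct outputs (Rosenblatt's rule)."""
    n = len(samples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - out
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# "Train" the AND function instead of wiring it as a gate.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
preds = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
         for x, _ in data]
print(preds)  # → [0, 0, 0, 1]
```

The point is that nothing in the "program" states the AND truth table; it falls out of repeated weight adjustments, which is the mode of programming these chips reportedly require.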
I have doubts about [artificial] neural nets, since they never really went anywhere, and I don't think scale or hardware implementation will automatically change that.
But playing with novel hardware can give novel perspectives, leading to new insights - and IBM is quite right that our present architectures seem to have stalled. That is, apart from GPGPUs.
While these might be IBM's first working chips, chips like this have been around for a decade now. I remember in 2009, when this project started, that IBM created quite a buzz by claiming they had made a cat-brain computer chip feasible and had simulated a cat brain. Great to see these claims realized.
In 2006 we had the snail-neuron chips, and around 2000 Hugo de Garis was all the rage with his brain-building chips.
To me, the hunt for simulating human intelligence is more of a marketing game than a real scientific pursuit. AI already exists; making it seem human seems like a silly endeavour.
In 2009 the IBM project drew a very harsh critique from Henry Markram, leader of the http://bluebrain.epfl.ch/ Blue Brain project. Extraordinary claims without proof can hurt the field of AI, or at least the public perception of it.
I still think part of the 2009 critique applies, though egos likely also play a role:
You told me you would string this guy up by the toes the last time Mohda made his stupid statement about simulating the mouse's brain.

I thought that having gone through Blue Brain so carefully, journalists would be able to recognize that what IBM reported is a scam - nowhere near a cat-scale brain simulation, but somehow they are totally deceived by these incredible statements.

I am absolutely shocked at this announcement. Not because it is any kind of technical feat, but because of the mass deception of the public.

1. These are point neurons (missing 99.999% of the brain; no branches; no detailed ion channels; the simplest possible equation you can imagine to simulate a neuron; totally trivial synapses; and using the STDP learning rule I discovered in this way is also a joke).

2. All these kinds of simulations are trivial and have been around for decades - simply called artificial neural network (ANN) simulations. We even stooped to doing these kinds of simulations as benchmark tests 4 years ago with tens of millions of such points before we bought the Blue Gene/L. If we (or anyone else) wanted to, we could easily do this for a billion "points", but we would certainly not call it a cat-scale simulation. It is really no big deal to simulate a billion points interacting if you have a big enough computer. The only step here is that they have at their disposal a big computer. For a grown-up "researcher" to get excited because one can simulate billions of points interacting is ludicrous.

3. It is not even an innovation in simulation technology. You don't need any special "C2 simulator"; this is just a hoax and a PR stunt. Most neural network simulators for parallel machines can do this today. NEST, pNeuron, SPIKE, CSIM, etc. - all of them can do this! We could do the same simulation immediately, this very second, by just loading up some network of points on such a machine, but it would just be a complete waste of time - and again, I would consider it shameful and unethical to call it a cat simulation.

4. This is light years away from a cat brain, not even close to an ant's brain in complexity. It is highly unethical of Mohda to mislead the public into believing they have actually simulated a cat's brain. Absolutely shocking.

5. There is no qualified neuroscientist on the planet who would agree that this is even close to a cat's brain. I see he did not stop making such stupid statements after they claimed they simulated a mouse's brain.

6. You should also ask Mohda where he got the notion of "reverse engineering" from, when he does not even know what it means - look at the models - this has nothing to do with reverse engineering. And mouse, rat, cat, primate, human - ask him where he took that from? Simply a PR stunt here to ride on Blue Brain.

That IBM and DARPA would support such deceptive announcements is even more shocking. That the Bell prize would be awarded for such nonsense is beyond belief. I never realized that such trivial and unethical behavior would actually be rewarded. I would have expected an ethics committee to string this guy up by the toes.

I suppose it is up to me to let the "cat out of the bag" about this outright deception of the public. Competition is great, but this is a disgrace and extremely harmful to the field. Obviously Mohda would like to claim he simulated the human brain next - I really hope someone does some scientific and ethical checking up on this guy.
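For context, the pairwise spike-timing-dependent plasticity (STDP) rule Markram mentions is, in its simplest textbook form, just an exponential window over spike-time differences. A simplified sketch (illustrative parameter values, not the exact rule used by either group):

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pairwise STDP: weight change as a function of the spike-time
    difference dt = t_post - t_pre (in milliseconds).

    Pre-before-post (dt > 0) strengthens the synapse; post-before-pre
    (dt < 0) weakens it; the effect decays exponentially with |dt|.
    """
    if dt > 0:
        return a_plus * math.exp(-dt / tau)    # potentiation
    elif dt < 0:
        return -a_minus * math.exp(dt / tau)   # depression
    return 0.0

print(stdp_dw(10.0) > 0)   # pre fires 10 ms before post: strengthen
print(stdp_dw(-10.0) < 0)  # post fires 10 ms before pre: weaken
```

Part of the complaint is that applying this simple two-spike rule to point neurons, stripped of all dendritic and channel detail, is a caricature of the biology it came from.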
I'm not in the field of AI, but as best I can tell imitating the brain's setup is not about "seeming more human". Passing the Turing test is not about having more compute power.
Rather, it's an exploration into a fundamentally different form of computing, right up there with dreams of transistors with more than 2 distinct states, or light transistors, or biological transistors... It's only different in that it is a foray into alternate architecture, rather than alternate physics.
I was alluding to a popular notion of AI that: It isn't AI if it doesn't feel like a human to us. I agree there is valuable knowledge to be gained in brain simulation and DNA computing.
AI press coverage has a tendency to "humanize" all AI research. (So your neural network is on par with a rat brain? Can we add such chips to our brain and create androids?)
I understand why this happens - to make it more accessible, get more PR, and win more research grants - but I do think it is silly. It makes people ask "When will we have AI?" when AI is already here; they just don't accept it because it lacks human emotions. By taking it "too far", some make human intelligence a prerequisite for AI, when true AI could just as well be alien. Some AI researchers have no qualms about playing along with this notion.
Does this mean that humanity understands how the most complex thing in the universe (the human brain) works?
I highly doubt it; it smells like crappy marketing.
Here's the rationale for the Blue Brain Project and Human Brain Project (and I guess for this, but I'm not familiar with this work):
We understand a lot about how the brain works at the low-level (neuroscience, molecular biology, epigenetics, etc.), and a lot about how it works at the high-level (psychology, cognitive science, social sciences, etc.). But there's a big void in between where we know precious little, and don't really have the research methods to investigate well.
So the purpose of these projects is to bridge the gap using simulation. The hypothesis is that if you simulate the low-level parts in exquisite detail, the high-level phenomena should appear as emergent properties. If there's any difference, the hypothesis continues, then that implies that your knowledge of the low-level parts is incomplete.
The Blue Brain Project has had some success in this regard; here's one example off the top of my head. A couple of years ago, when they wired up the first neocortical column (a cylinder of brain tissue about 0.5mm x 2mm, containing something like 10k neurons, that acts as a unit in the mammalian neocortex), they simulated it over several seconds of real time. It showed a pulsing behaviour that bears a striking resemblance (in frequency and shape) to the alpha waves visible on an old-fashioned EEG. Is it a coincidence? Possibly, but it's also very tantalizing.
I don't see how this is different from having a huge mainframe to simulate an ANN with billions of neurons. First of all, they are using integrate-and-fire (or Izhikevich) models, which are very simplistic and have been studied since the 60s. Real neurons are not like that at all, and one needs a much higher level of detail to successfully reproduce their output. The idea here is that if you cram enough IF neurons together in a way that resembles the connections in the human brain, you will get some thinking machine. I don't think so.
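To see just how simplistic a point-neuron model is: a leaky integrate-and-fire neuron is essentially one line of state update plus a threshold check. A generic textbook sketch (parameter values here are illustrative, not anyone's published settings):

```python
def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: the membrane voltage leaks toward rest,
    integrates input current, and emits a spike on crossing threshold.
    Returns the list of time steps at which the neuron spiked."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        v += dt * ((v_rest - v) / tau + i_in)  # leak + integration
        if v >= v_thresh:
            spikes.append(step)
            v = v_reset  # fire and reset: that is the entire model
    return spikes

# A constant suprathreshold input makes the neuron fire periodically.
print(simulate_lif([0.15] * 50))  # → [7, 15, 23, 31, 39, 47]
```

Everything a real neuron does with dendritic trees, ion channels, and rich synaptic dynamics is collapsed into that single scalar `v`, which is why calling billions of these a "cat brain" draws such criticism.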
I think Markram's response to the "cat brain" project is still valid to a large degree. He has instead proposed the "Human Brain Project", which will attempt to simulate a brain bio-realistically - the analogue of an "LHC for neuroscience" - which, imho, is more important:
This is a great article! I really think this can be an exciting frontier for computer tech and for human tech. I want these integrated into my brain one day!
Yes, another way to give you extra brain power. It will receive sensory input from say, your eyes, ears, touch, taste, smell, and then it will output signals to surrounding neurons.
Great way to keep up with Strong AI. Depending on how powerful the chips are, you could potentially have the equivalent of hundreds or thousands of brains in your head. It would give us answers to problems the same way we already receive them: the answers would just pop into our consciousness. In fact, it could be made to replace most of your neocortex. That way you would be able to upload those chips into a computer and voila! Instant immortality.
The era of the uploadable human brain is here (OK, probably not for one, two, or three hundred years). The chip essentially takes over your human brain, recording all experiences. It is implanted at birth. When you die, the chip can be implanted into a humanoid robot or a clone, or you can live in virtual reality. Of course, there would be a thousand and one problems to solve, but it sounds like one way to create uploadable humans. And to prevent death, you are constantly backed up, Cylon-style (Battlestar Galactica).
Wow, this could definitely work!
A nice twist would be if a virtual heaven were created. And you could only get in if you were "good" (whatever definition they decide to use). This heaven would be necessary because there are not enough resources to keep running all the servers necessary for this virtual heaven. So at some point in history somebody decided that to control virtual population only those that were "good" would be able to get in. Those that were bad would receive eternal death.
After several thousand years a two layer system evolves. Virtual Reality and Virtual Heaven. The ones in virtual heaven hold all the strings and choose new members from Virtual Reality to join them if they have been "good". The ones in virtual reality have forgotten about their initial history and don't realize they are in a virtual reality. Hence many don't believe in this "heaven" and consequently very few actually make it through. Entire religions are created to worship this heaven and to try to get in.
Well, you get the idea. Sort of got side tracked there. Sounds like an idea for a book.
This is a common misconception. Advances in neuronal simulation do not at all map to traditional computing. The mere fact of running on a computer does not mean the simulated brain gains the features we intuitively expect of a computer (like rapid maths, or the ability to copy knowledge).
Sure, but I like to dream about developments that I might actually get to see. Full human level AI? Making your brain smarter? 6th senses? I don't see these happening ANY time soon. But a little coprocessor with dedicated functionality seems like an almost-realistic enough possibility to enjoy thinking about.
Yes, baby steps. Reminds me of the stories of people attempting to fly with bird-like wings. Eventually somebody came along and figured out the right way to fly, and it was not with flapping wings. Same here: many attempts may seem stupid, laughable, or even impossible, but eventually somebody will figure it out.