Hacker News
And everyone gets a robot pony (scienceblogs.com)
69 points by rimantas on July 15, 2012 | hide | past | favorite | 46 comments



I like this part of the OP:

"If singularitarians were 19th century engineers, they’d be the ones talking about our glorious future of transportation by proposing to hack up horses and replace their muscles with hydraulics. Yes, that’s the future: steam-powered robot horses. And if we shovel more coal into their bellies, they’ll go faster!"

This is very true. Having been peripherally involved in a project to slice up and image just a cubic millimeter of ferret brain, I think I can safely say that we are not going to be able to accomplish anything remotely like reading out the state of a human brain anytime in the foreseeable future. Just reconstructing the 3D geometry of the neurons and synapses in that 1 mm cube turns out to be a gargantuan feat, much less recording all the other stuff that you would need to record.

Sure, things like this are fun to think about, and I have nothing against fun. But any thought of actually being able to do this falls into the realm of Science Fiction. There's no point in putting serious effort into figuring out how we might accomplish this goal now; by the time we are actually able to do it, as the OP pointed out, our knowledge will be much different. And any ideas we have now--like dated Science Fiction--will come to seem terribly quaint.


In addition to his confusion over the clock frequency speedup argument (as pointed out by cultureulterior), PZ Myers doesn't seem to argue from a concrete idea of how detailed a scan will have to be to capture the important brain functions. The feasibility of brain scans is strongly sensitive to the (currently disputed) necessary level of detail. It doesn't make sense for him to say

> We can’t even record the complete state of a single cell;

unless he's just using it as a general statement about our technology. It's entirely possible that recording the state of a cell with molecular resolution remains beyond our reach even when we can scan brains to the resolution necessary to simulate human cognition.

Also, when I heard that

> With the most elaborate and careful procedures, they report excellent fixation within 5 microns of the surface, and disruption of the tissue by ice crystal formation within 20 microns.

my estimate of brain scan feasibility went up. Of course it's true that we don't yet "have a method to lock down the state of a 3kg brain"; we're discussing the far future!

Pre-WWII, I think someone could have made very similar arguments about computers by pointing out how fast they would have to be to do something crazy like a 3D simulation. By golly, they'd need billions of transistors. Yes, computers are different than brain scanning, so the scalability of one need not imply the scalability of the other. But the point is that you can't just argue "look how hard this is now; it will continue to be hard in the future". You have to argue about why progress in brain scanning will be slow (compared to computers).


I think what the article is pointing out is that:

1) People who have never tried to scan a brain say "oh it's totally doable, why haven't we done this with simple organisms?"

2) People who are actually trying to scan brains say "Um, have you even read any of our papers? We know it would be useful, but it's hard."


> People who have never tried to scan a brain say "oh it's totally doable, why haven't we done this with simple organisms?"

Who are these people? Certainly not Chris Hallquist in the post linked by Myers.

And where does Myers say this? I don't see him attributing to anyone the claim that the scanning of simple organisms should already have happened.


You will pretty much have to have molecular resolution!

There are thousands of (and maybe many orders of magnitude more) different types of ion-channels - the basic unit of electric transmission in neurons. Unless you can identify what types of ion-channels are located throughout the neuron, you will have no hope of reproducing its function.


Hi, you should know about http://channelpedia.epfl.ch/


How do you propose to translate channelpedia into a complete functional description of an observed brain?

Or is the idea to somehow make a working model of the "average" brain inferred from a patchwork of bits of high-level aggregate data mined from various papers and originally gathered in many different contexts?


Yes, that's a good start. Also, nothing beats actually algorithmically reconstructing data from image slices (http://3scan.com/) or setting up NEURON simulations.


Below is some background reading on whole brain emulation from the Future of Humanity Institute. It isn't hard to come to a better understanding of the present state of research and plausible future goals than is demonstrated by the author of this piece.

http://www.fhi.ox.ac.uk/Reports/2008-3.pdf

"As this review shows, WBE on the neuronal/synaptic level requires relatively modest increases in microscopy resolution, a less trivial development of automation for scanning and image processing, a research push at the problem of inferring functional properties of neurons and synapses, and relatively business‐as‐usual development of computational neuroscience models and computer hardware.

"This assumes that this is the appropriate level of description of the brain, and that we find ways of accurately simulating the subsystems that occurs on this level. Conversely, pursuing this research agenda will also help detect whether there are low‐level effects that have significant influence on higher level systems, requiring an increase in simulation and scanning resolution.

"There do not appear to exist any obstacles to attempting to emulate an invertebrate organism today. We are still largely ignorant of the networks that make up the brains of even modestly complex organisms. Obtaining detailed anatomical information of a small brain appears entirely feasible and useful to neuroscience, and would be a critical first step towards WBE. Such a project would serve as both a proof of concept and test bed for further development.

"If WBE is pursued successfully, at present it looks like the need for raw computing power for real‐time simulation and funding for building large‐scale automated scanning/processing facilities are the factors most likely to hold back large‐scale simulations."

---

And some further, easier background reading:

http://www.fightaging.org/archives/2012/06/mind-uploading-at...

http://www.fightaging.org/archives/2009/02/the-age-of-artifi...


On a more flippant level, isn't it pretty obvious that robot ponies are going to be a thing that is possible to have some time in the next century or two? We've already got something that's kind of moving in that direction, and this was a few years ago: http://www.youtube.com/watch?v=W1czBcnX1Ww


Excellent sources, thank you for finding and abstracting those.


Recently I've been thinking this kind of thing is more likely to be a gradual process, and not likely to take the literal form that's been discussed.

But as we use systems that expand our awareness, are adaptive to us as individuals, and interface with us more tightly, we'll slowly become a sort of hybrid consciousness. And over time, we'll become more and more "online", until some day the machine portion of that consciousness will persist in a meaningful way after the biological part has come to an end.

In other words, we might well come to a point where we "upload" by shaping the systems we interact with over time. But it won't be the same as the bio version of our consciousness, it will be something else.

Whether we will come to a point where that distinction doesn't matter to our consciousnesses or not is an open topic.


I'd guess what one could realistically try with current technology is linking two mice [via Ed Boyden-like devices] and observing whether that produces an effect of extended awareness between the mice [pain stimuli, etc.].


This problem is hard. Extremely hard. But that does not mean that it is not theoretically possible (if very, very, very unlikely).

A lot of the brain-uploading people seem to look at this through destructive brain scanning (correct me if I'm wrong). That sounds like a terrible idea - why does it have to be all in one go, and why not reversible?

What if - for example (I'm spitballing) - you move into the brain slowly, replacing each live neuron with an artificial neuron (far-fetched, I know - nanotech is ridiculously hard), instead of going slice by slice on a frozen brain?

There is no reason that this slow "viral" method couldn't be done (or reversed - replace artificial neuron with neuron). This is akin to how we deploy distributed systems - create a compiled slug and push it out node by node via bittorrent. The change is tested on each node, and slowly moves out. If anything goes wrong - just roll back to previous slug.

Once you have full conversion - upload away (as I presume getting states of artificial neurons is relatively easy vs. organic cells). Have no doubts about it - this is a super hard problem. But it is not impossible.
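The rolling-deployment analogy above can be sketched as a toy loop. This is purely illustrative (all names and the equivalence test are made up; real neural replacement is nothing like this), but it shows the point about one-node-at-a-time conversion with per-node rollback:

```python
import random

def behaves_identically(node):
    # Stand-in for a real equivalence test between the original and the
    # replacement; here we just flip a biased coin.
    return random.random() > 0.05

def rolling_replace(nodes):
    """Replace each 'biological' node with an 'artificial' one, one at a time.

    If a replacement fails its equivalence test, roll back just that node
    and keep going -- the system as a whole never has to go down."""
    for node in nodes:
        original = node["impl"]
        node["impl"] = "artificial"
        if not behaves_identically(node):
            node["impl"] = original  # roll back this single node
    return sum(n["impl"] == "artificial" for n in nodes)

nodes = [{"id": i, "impl": "biological"} for i in range(100)]
converted = rolling_replace(nodes)
print(converted, "of", len(nodes), "nodes converted")
```

The design point is the same as in the slug-push analogy: at every moment the system is a working mix of old and new nodes, and failure is local and reversible rather than all-or-nothing.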



He completely misunderstands the clock frequency speedup argument.


Just so it's clear to everyone, he says

> And then going on to make more ludicrous statements…

> > Axons carry spike signals at 75 meters per second or less (Kandel et al. 2000). That speed is a fixed consequence of our physiology. In contrast, software minds could be ported to faster hardware, and could therefore process information more rapidly.

> You’re just going to increase the speed of the computations — how are you going to do that without disrupting the interactions between all of the subunits? You’ve assumed you’ve got this gigantic database of every cell and synapse in the brain, and you’re going to just tweak the clock speed…how? You’ve got varying length constants in different axons, different kinds of processing, different kinds of synaptic outputs and receptor responses, and you’re just going to wave your hand and say, “Make them go faster!” Jebus. As if timing and hysteresis and fatigue and timing-based potentiation don’t play any role in brain function; as if sensory processing wasn’t dependent on timing. We’ve got cells that respond to phase differences in the activity of inputs, and oh, yeah, we just have a dial that we’ll turn up to 11 to make it go faster.

The only explanation I can think of is that he thinks the proposal is to scan this brain so that it can be duplicated and run as another fleshy brain. But obviously, the idea is to simulate the entire brain on a computer, so that simulating the brain faster than real life is just a matter of having a fast computer. If he missed this, it makes me a bit skeptical of his other criticisms.


I think what he means is that the brain is multi-threaded with optimistic locking and hardcoded timing constants.

Try running old MS-DOS games inside an emulator. Many of them will act quite funny when you turn up the clock-speed.


That's not exactly turning up the speed, that's emulating a faster CPU.

Try taking a game boy emulator and putting it on fast forward. Works perfectly. If you have full emulation of a system you can go arbitrarily fast.
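The "full emulation" point can be made concrete with a toy fixed-timestep loop (a sketch in Python, not a real emulator; all names are invented): if the simulated clock is just state inside the simulation, the host's speed changes only wall-clock time, never the sequence of states.

```python
import time

def simulate(steps, step_fn, state, sim_dt=0.001):
    """Advance a simulation by fixed steps of *simulated* time.

    The simulated clock (sim_t) is a counter the step function reads;
    how fast the host executes this loop affects wall-clock duration
    only, not the computed trajectory."""
    sim_t = 0.0
    for _ in range(steps):
        state = step_fn(state, sim_t, sim_dt)
        sim_t += sim_dt
    return state

# A toy "timing-sensitive" update: decay against simulated time.
def decay(value, sim_t, sim_dt):
    return value * (1 - 0.1 * sim_dt)

fast = simulate(1000, decay, 1.0)   # run as fast as the host allows

def throttled(value, sim_t, sim_dt):
    time.sleep(0.0001)              # pretend to run in "real time"
    return decay(value, sim_t, sim_dt)

slow = simulate(1000, throttled, 1.0)
assert fast == slow                 # identical states at any host speed
```

This is the Game Boy fast-forward case: because every timing dependency lives inside the simulated state, speeding up the host is invisible to the simulated system.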


Yes, it was a flawed analogy (aren't they all...).

However, these DOS-games usually fail in sped up emulations because they make assumptions about external inputs such as the Real Time Clock.

I think the point of OP was that we can't reasonably speed up the "RTC" in a brain emulation if you want it to interact with the real world, because that would break all sorts of hardwired assumptions.

For a simple example, if you ran your brain at 4x speed then it would perceive everything in super-slow-motion. At that speed it would already have difficulty understanding you when you speak to it (at the least it would have to be a very patient brain).

At higher speeds pretty much all cognitive functions would probably break down - unless you feed it recorded inputs that have been accelerated to match the brain-speed.
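The "recorded inputs accelerated to match the brain-speed" idea amounts to rescaling event timestamps by the speedup factor, so the simulated brain receives each input at the same *simulated* time the original would have. A minimal sketch (hypothetical event format):

```python
def retime(events, speedup):
    """Rescale timestamped input events for a simulation running
    `speedup` times faster than real time, so each event arrives at
    the same point in simulated time as in the original recording."""
    return [(t / speedup, payload) for t, payload in events]

events = [(0.0, "flash"), (1.0, "tone"), (2.5, "flash")]
print(retime(events, 4.0))  # -> [(0.0, 'flash'), (0.25, 'tone'), (0.625, 'flash')]
```

This only works for recorded or fully simulated inputs, of course, which is exactly the parent's caveat: live interaction with the real world can't be rescaled this way.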


That is an issue, but there is a huge difference between having to slow the brain back down in certain situations and being unable to speed it up at all.

I don't expect to play an entire game on fast forward, after all.

Edit: new line about breakdown: I just assumed that whatever input was given would be sped up too. That part of the project seems far less complicated than the brain simulation itself.


My knowledge of brains is limited, but I'd think the issue remains the same even if you cut off all external inputs.

Basics like memory decay are also tied to the system clock. So if you run your brain at 1000x speed then it would probably simply forget everything almost immediately.

And if you make a "simple" patch that prevents it from ever forgetting anything then it would be overwhelmed because it is only wired to deal with a certain amount of memories at a time.

In terms of the DOS-Game analogy: We may be able to patch a game that originally ran in 256kb of Ram to run in 2GB and actually fill that up (because we disabled the garbage collector). But the game probably uses algorithms that break down when faced with such a large dataset.

At this point we're down to having to actually understand the game (or brain) in detail, in order to make the changes required for running at higher capacity.


Actually having a higher capacity will be tricky, yes. But at least there won't be cell decay in the scientists working 4000 hour weeks to figure it out.


I agree, but you could always simulate the brain's environment as well. Then you could speed up the environment with the brain. Bridging the gap might be annoying, for the brains, waiting to communicate with the glacial pace of the squishy rubbish real world humans, but I'm sure they'd get over it.


It is also a very good point that even with modern hardware many orders of magnitude faster than the emulated hardware, most emulators have to resort to timing hacks to make everything run smoothly; and because of timing inconsistencies when locking, parallelism is of limited use even when emulating multiple hardware components that originally ran in parallel. To emulate a human brain, we're probably either going to need far more sensitive locking across multiple cores than is currently even imagined, or we're going to have to emulate the whole massively parallel thing on a single thread, on a CPU much, much more powerful than a brain.


My interpretation of this paragraph: Brain function depends intimately, not just on the relative timing of things internal to brain, but also on the timing of inputs to the brain. It is unreasonable to assume that we can speed up sensory information without affecting brain function. If this is the case, it doesn't matter how fast you can run the simulated brain, you won't get meaningful outputs unless it's run at a `normal' rate.


One interesting thing about brains is that they are rarely found by themselves foraging in the wild. Usually they are attached to things. In fact, they are involved in many feedback loops involving sensory input and various effectors. As well as being directly attached to peripheral nervous systems and encased in massive bodies with various physical constraints, circulating hormones, social contexts...

It is hard to imagine raising a baby brain to chess-playing maturity without tons of informational input acquired by interaction with the world. (Even just keeping normal children in the basement is a profound intervention, and they still actually experience quite a bit). So I suppose you will have to speed up your realistic world simulation as well.

The brain is not a personal computer and its development is not a matter of factory production because it is not a piece of technology designed by people.


I agree, assuming that you can completely isolate the system time from real time and that we'll have powerful enough machines to be able to dial it up.

Everything else he said is on the mark, though. We're nowhere close to even understanding everything that's going on inside a brain, let alone migrating it to a completely different medium.


Kurzweil's next book is supposed to be about the advancing state of brain imaging. His last books seemed to reduce the brain to its computational capacity, and he kind of does some hand-waving about how our brain scanning technology is getting better at accelerating rates.

Will he address the complexity of the cell and the brain, or will this fall into the category of stuff that we'll of course understand in the future?


The article is pretty funny, but it's all true: the complexity of our brain is just beyond our imagination at the moment.

Saying that we could preserve the brain or make a copy of it is like saying that rockets are just open ended combustion engines and aliens have been visiting us for a long time.

No, there's a reason the former is called "rocket science", and the latter is impossible because the speed of light doesn't allow it (it's been shown time and time again to be constant, with nothing faster).

The Sun is the most powerful energy source we know of, and it's "just" a result of billions of years of gases and particles coming together and somehow successfully starting a chain reaction.

Then again, without the kind of optimism that Sci-Fi fans have we would not have most of the technology we enjoy today...


Hacker News talked about this problem recently: http://news.ycombinator.com/item?id=3987660. Unique hardware is always difficult to emulate, and that's why you cut corners. Cutting corners in this field has gotten us quite far: http://www.scientificamerican.com/article.cfm?id=graphic-sci....

But the OP of the article is ignoring the philosophical problem at hand: would a perfectly simulated brain bring about sentience? To me the answer is no. A machine can fake it, but it's the complex noise of nature that gives us our qualia.


As another person who's spent some time thinking about this: I disagree. Why couldn't a machine simulate the noise as well?


And yet, very few people would have thought this possible 30 years ago: http://singularityhub.com/2010/06/12/monkey-controls-robot-a...


Totally unfair of PZ since Hallquist did say "surprised ... if it took only a couple of decades". 20 years of computational/biological advances should give us quite a lot.



THANK YOU! Very few technologies develop at exponential rates like computer science has. In general, the learning curve is steep and the progress is slow.


Do we really need to scan the brain? One of the greatest tech companies out there, Google, makes almost all its money figuring out the right ads to put in front of people. They pay their employees the most for working in that area, too. Eventually ad companies like Google will be able to model people based on their life data. :) Of course it will be to figure out which ads to show them and make more money, not to run the model and give it life. But maybe they'll let the model act as your concierge for a fee.


The culmination of these billions of dollars and research and engineering will be a real life manifestation of 'clippy'.


I know you are kidding, but man, the algorithms are actually still really bad. I bought one of those Amazon Kindles -- the low-end model that displays an ad when the thing is on standby. Every other ad is for some sort of women's hair care product or service.

The only really girly thing on it is a Jane Austen novel. I mean, Jesus. One Jane Austen novel and suddenly I'm the sort of chap that has his nails done?

Of course, it could be Amazon being really clever. I mean, sure, they aren't going to make much off the ads, but how many dudes will pay the extra $40 for the ad-free Kindle if the ad Kindle, when it's off, shows ads that make other people think the thing is loaded up with 'Twilight' novels?


That was a great conceit for the TV series "Caprica". In reality, it's completely nonsensical!


I have a question regarding this brain scanning and uploading idea:

Won't Heisenberg's principle make this impossible?


No. Why would it?


Well, to me it seems that fine grained brain scanning would need something like Laplace's demon:

http://en.wikipedia.org/wiki/Laplace%27s_demon


Fortunately, neurons are still orders of magnitude above atoms in terms of volumetric density. So it is not that bad. Simulating all of the atoms in the volume of a brain individually might be a pain.


@zanny: A lot of the action in the brain seems to take place at the synapses. These are much smaller structures, and a useful brain scan will need to make precise measurements of the strength of each synapse. I need to study this more closely, but at the moment I think these measurements may be impeded by the uncertainty principle.


He doesn't take the point about robot horses very far, but that's where the conversation will go in time.

Maybe flight is a better example. We always knew it was possible to fly because we saw birds doing it. Birds fly by flapping their wings. We tried to make flying machines that flap their wings. They didn't fly.

Birds evolved to flap their wings because it exploited the technology available to evolution. Birds are made out of the same stuff as other animals, just tweaked a bit to be lighter. Flapping is a lot like running or swimming physiologically -- swinging a limb back and forth in a certain pattern. Evolution can do a lot with warm-blooded animals with limbs.

When we figured out how to make artificial flying machines, the solution exploited the advantages of our technology. Planes are made out of the same stuff as cars, just tweaked a bit to be lighter. Spinning a propeller is a lot like spinning a wheel. We can do a lot with combustion engines that spin things.

The missing pieces were the principles of aerodynamics, some technology (IC engine, construction materials), and some engineering specific to the problem of flight. Since we figured it out, we're able to make machines that fly faster and higher than anything in nature. Vastly faster and higher.

Right now we're trying to build machines that think -- not just compute. We know that thinking is possible because we see brains doing it. Brains do it with neurons and synapses. We tried to make thinking machines out of neurons and synapses. They didn't think.

Brains are made of neurons because that's what evolution had available. Neurons are a lot like other cells in the body -- blood cells, skin cells, muscle cells. Evolution is good at specializing cells to do all kinds of jobs.

When we figure out how to make thinking machines, the solution will exploit the advantages of the technology of the day. It will look like something we already have, but tweaked. We're good with transistors and silicon. We're good at computer networks.

We definitely have missing pieces. We don't know much about the principles of intelligence. We understand logic, but how do you get intelligence from logic? And maybe we're still missing some key technology to make it work. (Memristors? Graphene?) Once we have the principles and the technology, it'll take some engineering, but we'll make it work. We'll build thinking machines that are vastly smarter, wiser, and more clever than anything nature ever made.

The point is that if we ever upload a brain, it will be like building a mechanical horse -- an over-engineered gimmick, a parlor trick. We'll already have done much better at AI by approaching the problem from a different direction.

It doesn't bode well for humanity though, in the long term. Just about the only jobs left are thinking jobs. What happens when it's a waste of time for a human to think, just like it's a waste of time for a horse to pull a plow? We didn't declare war on horses, Terminator-style. We just didn't keep very many around.



