Ray Kurzweil does not understand the brain (scienceblogs.com)
208 points by nollidge on Aug 17, 2010 | 221 comments



I'm a life-long programmer who presently works in software, but I studied biology in college... mostly because I wanted to learn about learning by studying how living systems do it.

Biological systems are nothing like anything we would ever engineer, and to understand them we must remove our "anthropocentric engineer goggles" and look at them for what they are. Analogies between biological systems and computers, software, machines, etc. are very loose analogies meant to illustrate a point. Never take these analogies too seriously.

DNA is not a program-- it is a molecule, and one that may very well do things at the quantum level that are biologically important.

The brain is not a neural network. It is an interconnected colony of living cells of a variety of types, and it has been shown that all types of cells in the brain are involved in cognition.

We are nowhere near anything with the parallelism or information density of the brain. Getting close would take an advancement in computer technology of the magnitude of the transition from vacuum tubes in individual boxes to a 32nm Core i7. Nature has had billions of years, and it is way ahead of us. There's a nascent field called quantum biology that suggests that the brain may very well be a quantum computer, so maybe quantum computers are the vacuum tubes->ICs scale transition I am speaking of.

I think it's now possible with a big rack of multi-core machines in an MPI cluster to approach the capabilities of a fruit fly's brain.

Cells are not machines. I don't think we have the language to really describe what they are perfectly, but the closest I can come is "stochastic quantum probability field device." A bacterial flagellum is not a "motor," it is a quantum-scale chemo-electro-motive... well... our language breaks down. Like Feynman said, don't tell stories. Just speak literally and then use math.

Actually, come to think of it, there's one engineering analogy that might work for biology. Biology is quantum-scale nanotechnology. Yeah, that's pretty close.

(On a related tangent, I've found that many engineers are sympathetic to intelligent design type arguments against evolution. This is because they try to think about biology like engineers and take these machine analogies literally. It just doesn't work like that.)


Getting close would take an advancement in computer technology of the magnitude of the transition from vacuum tubes in individual boxes to a 32nm Core i7.

...so about 35 years? ;)


Maybe, though I am concerned about processing power stagnating.

Advancement is governed by economics as well as technical capability. There must be demand for new technology, or a field stagnates. Witness aviation as an example... utterly stagnant outside of military niche applications.

People seem to no longer want faster and faster computers, and the market seems to be moving toward lighter-weight lower-power portable devices like netbooks, the iPad, etc. Those have slower CPUs than current-generation desktops. I suppose the extreme gamer and server/datacenter markets are still driving performance, but for how long?

One problem is that programmers are not using the capabilities of current-generation processors, partly because the dominant OS (cough Windows cough) makes it horrifically painful to deploy desktop apps. This drives all development to the web and turns desktops into thin clients. In the end this kills demand for performance outside the datacenter market.


You are correct for consumer electronics. But commercial and scientific computing are driving increases in computational power and density that do not seem to be letting up. In the 4 years I have worked in hosting, the amount of server you can get in a 4U box has gone from 8 2.5GHz cores to 24 2.5GHz cores and from 32GB to 256GB. The storage requirements and compute requirements for applications are also increasing substantially.


You're confounding people's preference for usability (including portability) with a preference for applications with low computational demands.

If you could pack the "extreme gamer" capabilities of a PlayStation or an Xbox into a format as "usable"[1] as an iPad... then you would have engineered the next iPad.

The iPad was able to come into existence because we've finally hit the point where we can cram that much computation into a small form factor (along with all the other engineering advances like wireless networking, reduced power consumption, improved displays and improved battery life).

Most of those advances are directly descended from the pushing of the bleeding edge. Companies / people are not simply going to go "oh we've got iPads now. So no need to make anything faster / better / bigger".

[1] By usable I'm not talking about some magical Jobsian property of the device. I'm not even talking about the software interface. I'm talking about being able to surf the web / post to your blog / whatever while on the toilet. Try doing THAT in 1995.


Or 500, or 10,000, who is to know. The problem is that the brain seems to be organized around very simple (at face value) processing components that operate at a ridiculously low frequency in a massively parallel manner that we programmers can only dream of, at a power budget that would make Ebenezer Scrooge look very happy indeed.

If 35 years (according to Henry Markram, linked below, it's only a decade) was all it took, then we could simulate the brain today at a reduced speed and get meaningful output; after all, all you'd have to do is slow down the inputs accordingly.

We're as far away from having a universally teachable computer (not programmable!) as we were in the early 70's when true AI was only about a decade away.

Some interesting reading about the 'state of the art':

http://spectrum.ieee.org/tech-talk/semiconductors/devices/bl...

http://bluebrain.epfl.ch/

http://www.technologyreview.com/biotech/19767/


The "simulate at reduced speed" theory appeals to me, but I think the actual numbers make it implausible. Assuming Moore's Law, 30 years is a 1,000,000 speed up, plus let's say 4.5 years for 3 more doublings, giving us 8,000,000. To simulate 1 second would take 3 months. Debugging would be frustrating. (assuming 35 years from now; seems an arbitrary figure.)


> Assuming Moore's Law

That alone may already be a mistake, it's an observation, not a law after all.

Besides, compared to 35 years ago we can now do things 1,000,000 times faster, but computers are not 1,000,000 times 'smarter'; they just give the same answers that you could compute back then, only faster and on fewer computers.

The future is parallel anyway, so it isn't Moore's law (increase in density of transistors on-chip) per se that will drive this; more likely there will be a switch to increasing chip packing density with smaller chips (bigger yield) and better communications between the chips (think computing fabric).

We need a huge advance in programming languages before we can really contemplate building an AI by taking advantage of such a structure, though. Simply simulating the organic soup that forms a brain is going to be a much harder problem computationally, and may simulate a dead or an insane brain much more easily than it will simulate a live and thinking one.


This is exactly what I was thinking.

Actually, if that last shift took 35 years, the next one of that magnitude will be even faster.

This is Kurzweil's fundamental insight: exponential growth is faster than people realize. We consistently underestimate it because our brains are predisposed to think linearly.

If it takes you 1 year to solve 1% of a problem, your brain feels like you're 99 years away. In reality, you're only 7 doublings from completion.
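A trivial sketch of that arithmetic (which, as the replies below point out, assumes the remaining 99% is no harder than the first 1%):

    # How many doublings does it take to go from 1% solved to 100% solved?
    progress, doublings = 0.01, 0
    while progress < 1.0:
        progress *= 2
        doublings += 1
    print(doublings)  # 7: 1% -> 2% -> 4% -> 8% -> 16% -> 32% -> 64% -> 128%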


Where that falls down is that it fails to consider limits. Exponential growth cannot continue forever. At some point, something will limit it.

It may be a physical limit, a supply-side resource limit, or an economic demand limit, but there will be a limit somewhere.

Without limits, a single bacterium could fill the entire universe in a few years.

Sometimes things do grow like that for a while, but Kurzweil's attempt to turn this into a universal law and neglect limits is hand-wavey and silly.


True. However, decades-long periods of exponential improvement in technology are the norm, not the exception. The most spectacular example is Moore's law, but it is hardly alone. If you read The Innovator's Dilemma you will find plenty of other examples, ranging from the maximum range of a steam ship to the volume of dirt a backhoe can scoop per hour.

The interesting questions are how much computing power you need to perform tasks equivalent to a human brain, and whether current technology will reach that before it plateaus.


The idea of "things leveling out" evoke a metaphor of there being some finite amount of resources that can be gotten out of somewhere.

Exponential growth of tools opens up an exponential number of different avenues of exploration - if computers didn't advance at all for ten years, we'd still come up with many more ways to use them. With them advancing exponentially, we can not only find different ways of using them but new fields where different forms of exponential growth can happen. And so-forth. There's no fixed frontier but a moving process.

This isn't saying it's all wonderful but it's all likely to be a bit beyond our ability to encompass it - to draw a circle around it.


If you can solve 1% of a problem in 1 year and it is a linear problem, then you are right. But if that 1% was the easy bit and the remaining 99% is very hard, then you still don't have a solution.

If the solution is 99% easy and 1% hard, then you may only find that out after completing 99% of the problem.

Many problems are like that, and simulating the brain is an excellent candidate for being such a problem. If it were just a matter of throwing more computing power at it then we'd have solved it years ago; it's that big a prize. But there is still a large part of our understanding missing, and understanding does not yield to Moore's law.


Intel predicted the end of Moore's Law would come with 16 nanometer manufacturing processes and 5 nanometer gates, due to quantum tunnelling.


well, if you're going to use that as a baseline you should also factor in that the rate of advancement is exponential. so maybe 10?


That probably helps more if the problem complexity is increasing linearly. If it's going up exponentially with problem size, especially if complexity increases at a faster rate than technology, then you're not going to make much progress.

i.e., if emulating a human brain is only N times as hard as emulating a flatworm, Moore's Law might do the trick.

But if emulating a human brain is more like (flatworm complexity)^(number of cells in human brain - number of cells in flatworm brain) then Moore's Law is unlikely to help for a very long time indeed.
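A hypothetical illustration of the difference (the numbers are invented purely to show the two scaling regimes, not real estimates of brain complexity):

    import math

    # How long until Moore's Law (one doubling every 1.5 years) closes a
    # hardware gap of a given size? Purely illustrative numbers.
    def years_until_feasible(hardness_ratio, doubling_years=1.5):
        return doubling_years * math.log2(hardness_ratio)

    # "Only N times as hard": say the human brain is a million times the flatworm.
    print(years_until_feasible(1e6))        # ~30 years -- waiting works

    # Exponential in problem size: even a modest exponent swamps hardware growth.
    print(years_until_feasible(2 ** 1000))  # ~1500 years, and 2**1000 is still tiny
                                            # next to (flatworm)^(cell-count difference)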


I think DNA is actually best described as a 1970's-era binary in a running mainframe. It rewrites its own source code depending on outside information. And in order to understand what happens you need to look at a core dump that includes all the interesting bits floating around and attacking/enveloping the DNA.

PS: Don't forget this source code has been hot patched (http://en.wikipedia.org/wiki/Patch_(computing)#Hot_patching) over 3+ Billion years.

Edit: To continue the analogy the boot loader has been lost to time (or sacrificed to free up memory), so getting a working system requires copying not just source code, but also much of the current state of the system.


Not too bad, but sort of like trying to describe an elephant as a bunch of balloons and a water hose glued onto a bale of hay on a table.

I also dislike those sorts of analogies because they are loaded with aesthetic value judgements that do not apply. Programming code that looked like that would be ugly. The genetic system is beautiful and elegant.


> The genetic system is beautiful and elegant.

Only because you don't have to maintain it


That's what's so beautiful and elegant about it. When "executed" it evolves, so it maintains itself.


The better you understand the way DNA works the less elegant it looks. It's a steaming pile of Hacks on top of Hacks and frighteningly buggy.

This encodes protein X, however it only folds up correctly 15% of the time. However, Y bumps things up to 70% and Z gets you to 90%. The other 10%, well, that depends on the shape some of these are used by Z to do... Why do we know this? Well, both Y and Z are defective 2-3 percent of the time resulting in... etc.

PS: Don't get me wrong the happy path works well most of the time and when it fails early it's just a non viable embryo so no problem. However, saying it's elegant is like saying all airplanes in the sky must be easy to maintain because you never see anyone outside fixing them.


What you call a bug, with proteins being defective 2-3% of the time, is the most important feature of the biological program.

It's more elegant than you think.


Can you elaborate?


Your code would evolve and maintain itself too, if you could afford to wait millions of years and let it randomly break for a large portion of your users.


And by 'large portion' we are referring to 99.999999% of them


"the brain may very well be a quantum computer"

This is the minority view in neuroscience.

> In quantum terms each neuron is an essentially classical object. Consequently quantum noise in the brain is at such a low level that it probably doesn't often alter, except very rarely, the critical mechanistic behaviour of sufficient neurons to cause a decision to be different than we might otherwise expect... —Michael Clive Price


You may be right about that, and I don't think the quantum point is essential to what I wrote above.

But... I do tend to sympathize with the minority view here. The problem is that I don't think the majority are looking at the complete system. Yes, the macroscopic activation and conduction behavior of neurons probably can be modeled classically. But that behavior, as well as things like where the axons and dendrites connect, is governed at the meta level by the genetic regulatory networks and metabolic machinery of neurons and their support cells. All that involves at least thousands of genes and a lot of interactions that may very well extend down to the quantum level.

It's a living cell that grows and changes over time, not a simple gate that can be modeled by an equation. Modeling a neuron like that is like modeling a star as a single point source of light because it looks like that from far away. You can model the way stars look through a telescope like that, but that does not accurately describe what a star is.


"I think it's now possible with a big rack of multi-core machines in an MPI cluster to approach the capabilities of a fruit fly's brain."

This seems so unlikely. Isn't it more likely that it takes a rack of multi-core machines to simulate a fruit fly's brain using our extremely primitive algorithms for approximating intelligence?

In other words, it's a software problem.


You're right.

It's a software problem and a hardware problem. Our hardware is not up to the task, and even if it was we wouldn't know how to program it.

Evolutionary computation and non-von-Neumann architectures such as stochastic data flow architectures might be where to start. We would not write the code. We would build the right kind of architecture and then evolve the code within that architecture.


> We would not write the code. We would build the right kind of architecture and then evolve the code within that architecture.

That sounds more like an approach that I think would produce results.

The key here is 'right' though, what is right?


Look - all the talk of the brain as a "quantum computer" seems to be more full of "bull" than the talk of mind uploading and 700 year lifespans. Quantum interactions happen in all matter that we observe, so they happen in the brain. There is not that much more to it. Neural computation depends on neurons and synapses, communicating with hormones and electrical impulses. If we can model the significant algorithms and feedback loops that determine these interactions, we can model the brain. No quantum computing is needed.

Kurzweil himself would admit that we do not know the details of how all of our technology works. Humans at this point understand very little about atomic physics (we still account for the majority of mass by calling it dark matter and hiding it in a formula constant), yet we can produce atomic explosions. I think that one thing a lot of people are missing here is that you only need very little theory before you can apply it. Understanding WHY the theory works is a much harder problem, unfortunately.


As far as I know, Roger Penrose first proposed the idea that quantum effects on neural processes will make computational modeling of said processes impossible. It is highly speculative, and the strongest evidence it's got going for it is that an eminent physicist wrote a book based on it (The Emperor's New Mind.)


The Emperor's New Mind is an excellent primer in physics, but it failed to sell the quantum idea of the brain in a convincing way.

I expected a rather stronger argument, especially from someone as high up in the hierarchy of science as Roger Penrose, and after laying such a huge foundation.

Maybe there is one but if there is I haven't found it in that book.


The Emperor's New Mind was (IMO) abysmal, and I was really embarrassed to see someone with such indisputable genius as Penrose put out such drivel. You're very right, the strongest evidence for his theory there is that an eminent physicist wrote on it. That's not a good thing.

Please, if you've subjected yourself to that nonsense, cleanse your palate with The Road To Reality - math/physics is where Penrose shines, and that book is him at his brightest.


Penrose certainly popularized the idea. Kurzweil addresses this issue in his book "The Singularity is Near."


http://en.wikipedia.org/wiki/Philosophical_zombie

What if the background stuff that you neglect from your model is where all the interesting stuff happens?

Some people see the philosophical zombie as an argument against consciousness being material at all. I don't really see that, but I do see it as an argument against the idea that you can achieve sentience by emulating a coarse-grained approximation of the brain. I think you would have to either really totally understand the brain or harness evolution and let it build you a sentient being whose structure captures the essence of organic life within whatever its embodiment happens to be. You might end up with something that looks nothing like the brain superficially, but that embodies what the brain does somehow.


What if there is no "really interesting stuff"? What if the stuff in the world was all "regular stuff"?


Could be.

It sounds like I'm making a new-agey argument against reductionism, but I'm not. What I'm arguing against is overzealous reductionism... the idea that you can get a grainy image of something and quantize it and you're done. You might be able to do that in some areas, but in biology you can't get away with that. Very small causes can be just as important or more important in living systems than very large ones.

To avoid the philosophical zombie problem, I think at the very least we need a solid theoretical understanding of the phenomenon that we are trying to capture. We have to know what we are trying to do, otherwise we do not know how to start to go about doing it or whether we have done it or not. If you want to land on the moon you need to know what the moon is, where it is, and where you are in relation to it.

That means that we need:

1) A quantitative, hard, solid definition of life. Right now what we have is a qualitative phenomenological definition of life as it exists on Earth. I'm imagining a definition of life that's as solid as the thermodynamic definition of entropy or enthalpy. Based on what I've read in this area already, I would say that it's almost certain that a definition of life will be stated in terms of thermodynamics. Google Ilya Prigogine and dissipative structures to get started.

2) A definition of some sort for consciousness. Right now we have basically nothing here... not even a qualitative set of criteria like we have for life. We know that we are conscious, and we suspect that at least some other living things are conscious. We do not know whether all living things are conscious or not. We do not know whether the set of all conscious entities is entirely contained within the set of all living entities or whether something could be conscious but not alive.

IMHO dismissing #2 is chickening out. Dismissing #1 is definitely chickening out. Not answering the questions means you don't know where the moon is and you might be landing in New Mexico instead.


DNA is not a program-- it is a molecule, and one that may very well do things at the quantum level that are biologically important.

Sure, but Kurzweil's claim is merely that if DNA can encode everything the brain does with N bits of information, then regardless of how the DNA behaves, we have at least a loose estimate of the level of complexity in the brain. I don't necessarily think that he's right that AI will first be achieved in that manner, but I think I have a bit more faith in human ingenuity than he does, and I think we'll get the software there "by hand" before brain scanning hardware can actually do what he needs it to do.

Of course it's true that the decoding scheme and the dynamics of the end result could provide a whole lot of additional complexity, just like it's possible to build a decompression algorithm that allows us to compress a Linux distribution into 1 byte (put the whole program in the decompressor). But quantum effects or no, it seems extremely unlikely that the physics, chemistry, and biology behind all of this are somehow conspiring, magically tuned to provide an awesomely efficient set of basis functions to encode the algorithms that the brain needs to apply - sure, some parts of the brain's work probably exploit physical coincidences that make the overall job easier, but as far as we know, quantum chemistry doesn't offer a "do intelligence" utility function, that will have to be built up from much smaller sub-units. Far more likely is that the bulk of the brain's function is more or less explicitly coded for somewhere within our DNA.

The brain is not a neural network. It is an interconnected colony of living cells of a variety of types, and it has been shown that all types of cells in the brain are involved in cognition.

"Involved" is a serious weasel word when it comes to the brain, especially when AI is under discussion; everything in your skull is "involved" in cognition, but that doesn't mean it's doing anything particularly important, and it definitely does not mean that every detail of the dynamics is an absolute necessity to obtain Cognition.

But I think we might be arguing different things...

On a related tangent, I've found that many engineers are sympathetic to intelligent design type arguments against evolution. This is because they try to think about biology like engineers and take these machine analogies literally.

[Aside: I don't know too many engineers that buy into intelligent design, but I guess YMMV...]

The mistake that biologists make is that they always assume we're thinking about biology, and that our goal is to understand biological systems.

But when we're talking about AI, the goal is not to understand the brain, it's to figure out how to do something pretty close to what the brain does. Chances are, that's a much simpler goal than figuring out the dirty details about what every cell in the thing does. But it means that we have to be very careful not to get lost in the muck, worrying about the form of the brain's computations rather than the function. This is tricky, because it may be that the way the brain gets things done is not very well abstracted or comprehensible, so it's possible that we'll need to pick the level from which to draw inspiration from the brain very carefully (I think the apparent uselessness of neural nets in strong AI is a good indication that we might need to abstract at some level other than "neuron").

Again, though, Kurzweil has a very specific (and I'd argue, peculiar, at least relative to most people in AI) view on this, and thinks that full scale detailed brain simulations are the way that we need to do this (or at least that this will be the quickest route to the goal). I suspect even he doesn't think we'll simulate the physics of every neuron in detail, though, preferring to instead abstract away the most important bits of functionality.


"DNA can encode everything the brain does with N bits of information"

It doesn't directly encode the structure of the brain. Development is required.

"Involved" is a serious weasel word when it comes to the brain, especially when AI is under discussion; everything in your skull is "involved" in cognition, but that doesn't mean it's doing anything particularly important, and it definitely does not mean that every detail of the dynamics is an absolute necessity to obtain Cognition. But I think we might be arguing different things...

We are arguing different things. My point is that you can't write a little equation for a neuron (which is an entire living cell!) and wave your hands and say "done!"

Kurzweil is way way way too hand-wavey for me.


My point is that you can't write a little equation for a neuron (which is an entire living cell!) and wave your hands and say "done!" Kurzweil is way way way too hand-wavey for me.

Yes, we're in full agreement on this, especially the part about Kurzweil - I don't think he advances the science of this at all, and his poppy PR approach rubs me the wrong way, for sure, as do many of his specific ideas.

In particular, I think it's a terrible idea to hinge our hopes for AI on the idea that we'll be able to carry out a perfect enough simulation of a full brain to call it "done". Not only does it seem to be a pipe dream at present, depending on a whole bunch of technological advances that may not come for a long time, it would be almost completely inextensible and teach us very little - once we had such a simulation running, it's unlikely that we'd understand enough about how to modify it to improve upon it.

I don't quarrel with the idea that DNA encodes the brain extremely indirectly at all, including a substantial development process that depends on a lot of things other than the pure genetic code, and I also agree that the dynamics of a brain are very dependent on the details of all sorts of cells, not just an oversimplified logical representation of them. Where I think we start to diverge is that I have issues with the idea that the particulars of any of those processes can somehow "piggy-back" any non-trivial amount of functionality into the system that's not coming from the genetic code.

Here's the clearest way, I think, to state my information-content claim: if we were to randomly change the details of the way neurons function (still leaving them with the same highest-level capabilities), and randomly change the way brain development happens, and randomly change all steps of the entire process that leads from DNA->brain (again, subject to the restriction that the whole thing still works), and even - I daresay - randomly change the laws of physics, then...

...if all of these changes still permitted us to write down any string of DNA (or whatever our new version of DNA was) that ultimately resulted in the growth of an intelligent human brain, the amount of DNA required would, on average, be pretty darn close to the amount that we see used today. And that has implications for other implementations of the same logic, namely that if we could find some way to optimize towards whatever solution evolution has found, we could probably end up with a shorter code for it than evolution has because we can allow all parts of the DNA->brain process to optimize for compression whereas the real physical process is largely frozen against change.

That's the sense in which I think Kurzweil has a reasonable estimate, nothing more, nothing less; where he takes it from there is another story altogether. :)


It doesn't directly encode the structure of the brain. Development is required.

Your Java program doesn't directly execute on the CPU, compilation and interpretation are required.


Totally different thing. The java program encodes each decision and computation that it performs directly.


No, a Java program requires an environment to run, and so does a brain.

It's just that a brain alters its behavior and structure as a result of processing inputs (development).

We don't (normally) write Java programs to change their behavior as they process information from their environment during execution.


You're confusing literal translation with development. Here's a very loose conceptual analogy:

Translation: translating Lord of the Rings into Chinese.

Development: hearing a short plot synopsis of Lord of the Rings and writing your own fantasy novel based on the same theme.

Development really does do something analogous to that. It takes a collection of proteins and rules and, through embodiment within the laws of physics, constructs the phenotype. The phenotype contains vastly more information than the genotype, and two identical genotypes will not produce absolutely identical phenotypes.

There is something fundamental about development that we do not understand. This is widely acknowledged in the developmental biology, evo-devo, and evolutionary computation fields. A closer analogy than the loose one above might be some of the behaviors we see with fractals and cellular automata, though development is less deterministic than that.
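As a toy version of that analogy: an elementary cellular automaton's whole "genome" is an 8-entry rule table, yet running it produces structure far richer than the rule itself (Rule 110 here, purely as an illustration, not a claim about how development actually works):

    # Elementary cellular automaton: an 8-entry rule table ("the genotype")
    # unfolds into an intricate pattern ("the phenotype") purely by being run.
    def step(cells, rule=110):
        n = len(cells)
        return [
            (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)
        ]

    row = [0] * 79 + [1]          # start from a single "on" cell
    for _ in range(40):
        print("".join("#" if c else " " for c in row))
        row = step(row)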

Evolution and development are somehow related. We don't quite get that either. But both processes add vast amounts of information and both involve adaptation.


Hand waving: the adult brain is the output of running a program stored in DNA for twenty years on a runtime (the physical universe) we can barely understand. To produce a brain from the data stored in DNA, we would need to simulate the runtime environment. To do that without simulating the whole universe, we would need to work out which subset of the universe to simulate. We are far away from knowing what shape that subset takes, let alone what the minimal number of bytes required to simulate it is.


Sure, but Kurzweil's claim is merely that if DNA can encode everything the brain does with N bits of information, then regardless of how the DNA behaves, we have at least a loose estimate of the level of complexity in the brain.

The problem is that DNA is not sufficient. Organisms don't grow from naked DNA in a vacuum; the DNA is always contained within a cell which is enclosed within a more complex bio structure (egg, womb, etc) which may be contained within the parent organism in the case of mammals.

You need all of this information plus knowledge of the various interactions of the different environments to pursue Kurzweil's approach.

The universe of necessary and sufficient information is much, much larger than the 3 billion base pairs in your DNA.


For the moment we seem to have enough trouble modeling (relatively speaking, to the brain) simple proteins reliably.

It's quite a leap of faith to make these statements about the technological future with so little in terms of progress to show in these fields for the last decades. I think the most impressive a-life demos are now almost 10 years old, the best we can really simulate is (drumroll) a cockroach. And personally I think that's a milestone achievement because it means that at least we have a principle that works.

Going through the DNA route to get to a working brain seems to be a very roundabout way of getting there, it will require all of the embryonic mechanisms to be modeled accurately as well as something like the first several years out of the womb before you'd know if you had created something insane or something resembling intelligence.

Assuming you'd recognize it as intelligent even if you succeeded, there may be more ways of being intelligent than we know about.


Absolutely, I never said that I think modeling the brain would be easy (I don't) or that building it up from DNA is anywhere near the best way to attempt AI (IMO it's not).

I simply don't buy the argument that the algorithms of cognition inherit any substantial amount of functionality from their physical implementation, and hence, I see Kurzweil's complexity estimate as somewhat reasonable.


> I simply don't buy the argument that the algorithms of cognition inherit any substantial amount of functionality from their physical implementation

I'm somewhere in the middle on that. I wish for things biological to be clear-cut and deterministic enough that we can fully understand them the way we understand mechanical systems.

But precisely because the brain is encoded in precious little DNA there is some evidence that there is more to it than meets the eye; after all, if 50M of gzipped data can encode the whole thing, why do we have such a hard time understanding it?

There is enough repetition in there that some dyed-in-the-wool reverse engineer would have put 2 and 2 together by now if the secret was in the wiring or in some simple algorithm (ANNs for instance).

Apparent order appearing from chaos is a field that has seen some study, and the amount of complexity that can arise from simple starting data is quite amazing; witness the Mandelbrot set and other fractal forms.

It may be very hard to short-circuit such understanding and to 'divine' the workings of the formula without first going the long way around to understand the whole system rather than the 'seed' from which it grows. This is not simple mathematics where a simple equation on complex numbers gives you the mandelbrot set, it's possibly machinery interpreting an equation with 50 million terms.

In different terms, given a very distorted (dissected) picture of the 3 dimensional mandelbrot set would you be able to figure out the formula that gave rise to it without prior knowledge of the mathematics involved?

http://www.skytopia.com/project/fractal/2mandelbulb.html#epi...
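For what it's worth, the asymmetry is easy to see in code: the entire "seed" of the (2D) Mandelbrot set is the single update z = z*z + c below, while reconstructing that rule from a coarse picture of the output is the hard direction.

    # The whole rule is z = z*z + c; everything else is just rendering.
    for row in range(24):
        line = ""
        for col in range(78):
            c = complex(-2.2 + col * 0.04, -1.2 + row * 0.1)
            z = 0j
            for _ in range(50):
                z = z * z + c
                if abs(z) > 2:       # escaped: c is outside the set
                    line += " "
                    break
            else:                    # never escaped: treat c as inside the set
                line += "#"
        print(line)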


FWIW, I'm not too far away from you on this - I don't think we are going to find a very simple algorithm to do whatever general pattern recognition the brain does. All indications suggest that while there may be such an algorithm programmed in and repeated (a lot more times in humans than other animals), it's far too complex for us to simply read off or guess at.

But it does suggest that we should be considering the functions that such higher level units might have, such that they become more intelligent as we compose them. Easier said than done, of course, especially since it's very difficult to actually observe brain dynamics in any detail.


You need all of this information plus knowledge of the various interactions of the different environments to pursue Kurzweil's approach.

Absolutely, and this is one reason why I think Kurzweil's approach overall is pretty foolish.

But I do think that his information-content claim is within the bounds of reason (I've explained my stance on that many other places in this thread, the gist is that it's highly unlikely that the whole developmental process supplies a huge amount of information "for free", in the same sense that it's unlikely that you'll achieve a 10x compression in code size by using Python instead of Ruby to write a program) and it at least tells us that a solution to the problem of intelligence does exist that doesn't require (for instance) explicit hard-coding of every neural connection inside a brain. In fact, it requires massively less information to specify than that, so there's some hope that in the end we'll be able to come away with reasonable approximations to that algorithm since it must be "fairly simple" (meaning: more complicated than anything we've tackled before, but maybe still within the realm of possibility).

That hints at the fact that AI researchers (at least those that don't buy Kurzweil's ridiculous "simulate-the-whole-thing!" approach) should be looking more into building out small "genomes" into massive structures rather than focusing too much on individual explicitly specified processing networks, because nature is clearly doing something like that, and it seems to work pretty well, letting it achieve a remarkably compact solution, given that evolution tends to produce very bloated code as a rule. There has been tentative research along these lines (http://en.wikipedia.org/wiki/HyperNEAT, for example), but there needs to be more, particularly to figure out what sorts of things we actually want these massive structures to do, what sorts of processes we should be focusing on to control that building process, what types of units we need in order to make the processing that we're doing feasible, etc.


if DNA can encode everything the brain does with N bits of information, then regardless of how the DNA behaves, we have at least a loose estimate of the level of complexity in the brain

Non sequitur.

Solution to unified field theory. That's about 32 bytes of information, uncompressed. There, since that statement is so low in information content, it must be easy to find. We have a loose estimate of its complexity with my statement, right?


The problem here is that an adult brain has a much higher Kolmogorov complexity than the DNA that codes for its parts.

http://en.wikipedia.org/wiki/Kolmogorov_complexity


Sure, and a shag rug has a much higher Kolmogorov complexity than the instructions for how to weave it. If measured carefully enough you'd find every individual strand of yarn in it has a unique length, orientation, curvature, amount of twist and amount of fraying that can be measured to an arbitrary degree of precision. So if you wanted to reproduce or store a description of that exact rug, you couldn't do it. That doesn't mean you can't make another rug that's just like it or close enough to serve the same purpose and be recognizable as a shag rug.

Like that rug, the brain contains oodles of complexity that doesn't matter at all for our purposes.


There is a huge part missing here though. A rug simply has to 'be', a look alike will suffice. A brain has to work.


The question on the table, I think, is whether a brain that's pretty darn similar to a working one will still work or not - does it just have to have similar statistical properties, or are the explicit details of the connections important, does the chemistry have to be just-so, if we replaced neurons of type X with pseudo-neurons of type Y that do almost the same thing when viewed at scale Z but are completely different underneath, would it still work, etc.?

A lot of these questions are completely unanswered. We do know at least that the brain is extremely resilient to changes in chemistry and can work quite well even in the face of extensive damage, which is an indication that we might be surprised how little of the overall arrangement is actually necessary to keep it working properly.

We need to make sure that we're being careful with our language, too: the complexity necessary to specify any brain is a whole lot lower than the complexity to specify one particular brain. In AI the goal is the former, but for the latter, we really do have to worry about each strand in the carpet. An AI researcher might not give a rat's ass about reproducing your memories, and will be more than happy to construct something that can form any memories; on the other hand, to you, your memories (and the detailed wiring inside your head, which we might be able to alter substantially without "breaking" the brain) are vitally important.


Absolutely true.

That leads to 'seed AI', http://en.wikipedia.org/wiki/Seed_AI when run in reverse, so you'd have to implement the 'minimally viable self improving brain', and take it from there.

And even there maybe not all the strands in the carpet have to be just so, but it may very well be that there have to be certain amounts of each colour in roughly such and such a pattern with interconnects between these larger groups and so on. Some of that information is known but definitely not all of it.

The damage angle is a tricky one: some damage seems to be absolutely no problem at all, even if it is major, while in other cases the smallest bit of damage seems to be enough to cause terminal failure. There are a lot of clues in there about the organization of the brain.


if DNA can encode everything the brain does with N bits of information, then regardless of how the DNA behaves, we have at least a loose estimate of the level of complexity in the brain.

Not even close. Imagine a kilobyte filled with alternating zeros and ones. We have two commonly used methods for describing the information content of this kilobyte. Shannon information content is closely approximated by today's compressors - and 1k worth of alternating zeros and ones compresses to next to nothing. Kolmogorov complexity is the length of the minimum program necessary to produce the 1k of alternating zeros and ones - again, a simple loop, incrementing a pointer then dereferencing it to fill each address with alternately a zero or a one, suffices - a mere handful of bytes are necessary.

Now imagine that the RAM chip holding the zeros and ones is hit by a bunch of cosmic rays, flipping about 10% of the bits. Both the size of a compressed version of the new kilobyte and the Kolmogorov program to generate the new sequence are going to dramatically increase in size.

Now, the thing is that DNA produces a brain to a fairly homogenous pattern - it's the equivalent of my series of alternating zeros and ones, although admittedly it's a bit more complex than that - this is to be expected, there's more Shannon information in DNA for encoding a brain than there was in my simple pattern of zeros and ones.

The cosmic rays are the equivalent of a brain learning, encoding information by weighting connections between neurons. This process massively increases the amount of information stored in a human brain, but this information must be copied if you actually want the copy to behave like the original. I would expect this amount of information to be several orders of magnitude bigger than that found in the DNA, which is why Kurzweil's claim is just completely off the wall.
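A minimal sketch of that point, using an off-the-shelf compressor as a crude stand-in for information content (zlib is only a rough proxy for Kolmogorov complexity, but the direction of the effect is the same):

    import random
    import zlib

    random.seed(0)

    # 1 KB of strictly alternating bits: almost no information content.
    regular = bytes([0b01010101] * 1024)

    # "Cosmic rays": flip roughly 10% of the 8192 bits at random positions.
    noisy = bytearray(regular)
    for pos in random.sample(range(len(regular) * 8), k=len(regular) * 8 // 10):
        noisy[pos // 8] ^= 1 << (pos % 8)

    print(len(zlib.compress(regular, 9)))       # a few dozen bytes at most
    print(len(zlib.compress(bytes(noisy), 9)))  # hundreds of bytes: the flips must be described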


We're arguing different things. You're talking about the information content required to reproduce a particular brain, which I agree, has a massive Kolmogorov complexity (even if we decide, as we might want to, to factor out many equivalence classes based on behavior). Far more than a DNA estimate would hint at.

But I'm just talking about (as are most AI people, and I think Kurzweil, as well) the informational content required to build a brain. Any old brain, could be yours, could be mine, is probably neither. The information required to do this is much lower, and corresponds loosely to your first example, a kilobyte with alternating ones and zeros.


I recommend reading "The Singularity Is Near".

Sure, maybe Kurzweil doesn't understand the brain on any deep level, and indeed maybe even those who understand it better than he does don't understand it well enough at this point.

But Kurzweil's basic argument really isn't about that; it's about the exponential advance of tools and technologies and understanding on multiple levels. Will the Blue Brain project succeed? Will some lesser known project succeed? Will the process take twenty instead of ten years? All unknown, but not crucial to the implications of exponential change. When you have tools that are improving exponentially, what you can do tends to improve also. And then the whole process builds on itself. Do I know where this will go? No, but I don't think you do either.


Your post is remarkably obtuse. By your same logic I can say a program is not executing code, it is a series of electrons moving down different circuit paths. Also true, but such behavior can be described at a higher level as "executing code."

Take another example, your post could be considered a message, or it could just be considered electrons emanating from an LCD monitor and getting received by a brain, which in turn generates certain behaviors in a large meat machine. Also true, but your post can also be described as a message about the need for thinking about reality literally instead of abstractly in order to understand truth.

However, if we can only access truth by talking about the world literally, then no one, ever, has access to truth. Even today we cannot describe the world literally, at all. An electron is merely an abstract representation of even more fundamental quantum dynamics. And who's to claim quantum physics is really at the bottom of the reality stack? For instance, we have no idea why waveforms collapse to their particular determinate states. There is something even more fundamental behind these phenomena that we currently have no clue about. Therefore, by your criterion for truth, everything discussed and thought about throughout history has been nothing but gibberish, including your own post.

So, clearly you do not even agree with your own claim, since you seem to think you are communicating something to us.

I call BS.


relevant to your comment is POSIWID!: the Purpose Of a System Is What It Does


That makes the term "purpose" rather useless, doesn't it?


If you think about it, that's evolutionary theory in a nutshell. Organisms don't evolve due to any desire or goal to change; rather, random processes cause mutations, and traits that happen to result in more copies of themselves being made tend to become more common, and that's it. There is no purpose to the changes; they just are, and "better" ones happen to become more common because they duplicate more.


I would argue that this just means that some things and processes, like evolution, do not have a purpose, rather than that their purpose is whatever actually happens. If purpose just means result, then you've defined purpose such that it no longer applies to lots of systems we actually already understand the purpose for.


So much nonsense in one place ...

>>Biological systems are nothing like anything we would ever engineer

You really believe you can predict what human technology will look like in 50, 100, 10000000 years?

>>DNA is not a program

A hard disk platter is not a program.

>>Nature has had billions of years, and it is way ahead of us.

That's why birds are so much faster than planes.

>>Cells are not machines..."stochastic quantum probability field device."

All modern computers are quantum machines.

>>Biology is quantum-scale nanotechnology

And so is today's electronics.


>That's why birds are so much faster than planes.

wtf? As if the only goal for flying is to go as fast as possible. A bird can land on a branch that can hardly support its weight while the wind is blowing said branch.


Myers is attacking Kurzweil on the assumption that Kurzweil is actually proposing that the brain's structure should be reverse-engineered from DNA, which would be intractably hard. Only Kurzweil isn't saying this anywhere. Instead, he seems to be using DNA as a measure for the amount of irreducible complexity that needs to go into a system that will end up with the complexity of a human brain.

Basically, we have no idea where we will get the AI source code we can actually do something with, but we have some reason to believe that the most concise version of the source code won't contain more data than the human genome.

The rough intuition might be if we wanted to simulate brain in the caricatured reverse-engineer-the-DNA way, we'd need an impossible computer that could simulate years worth of exact quantum-level physics in a cubic meter of space, but the human DNA and basic cellular machinery for hosting it would be the only seriously difficult bits to stick in there, the rest would just be simple chemicals and the impossibly detailed physics simulation.

I guess the analogy then is that we make the AI source code (which we don't know how to write yet), which is supposed to end up at most around the length of a human DNA, but which can run sensibly in existing hardware. Then, deterministic processing entirely unlike the immensely difficult-to-compute protein folding from DNA will make this code instantiate a working AI somewhere at the level of a newborn human baby, in the same way as the genome initiates protein-folding based processes that makes a single cell grow into a human baby given little more than nutrients and shelter from the external physical environment.

So it doesn't seem like a really strong statement of overoptimism. It's basically just saying that the human brain doesn't seem to require a mystifyingly immense amount of initial information to form, but instead something that can be quantified and very roughly compared with already existing software projects. I'd still guess it might take a little more than the ten years to come up with any sensible code with hope of growing into an AI though.


The argument Myers is making is that while the DNA might be the input to the system, the total amount of data in the system is that input plus the rules around how that input is interpreted/works. Those rules (for example around protein folding) are currently encoded in biological systems as the laws of physics, more or less, but they're insanely complicated and currently unknown.

So the point is that perhaps if you had a system that simulated all of the laws of physics exactly correctly such that proteins folded and interacted exactly right, only then could you get away with an amount of input equivalent to the amount of information encoded in the part of our DNA related to the brain.

Actually encoding those rules is probably the harder part of the problem, and could easily take several orders of magnitude more work. (10x? 1,000,000x? Who even knows).


1) An estimate of complexity is what it is - an estimate of complexity. It's not a claim that the way to achieve AI is to figure out the details of the particular encoding that nature ended up using, so the precise nature of those rules is not something we care about.

2) While it's true that those runtime rules (which we can kind of consider as the "interpreter" for our DNA) are extremely complex, this has almost zero bearing on the informational content in our DNA that is put towards creating the "intelligence algorithm", whatever that is. Sure, there's probably a bit of extra compression based on the fact that the physics allows some actions to be "built in", but unless you believe that DNA is physically optimized to make intelligent computer construction very concise, the logical content of these computations is probably explicitly "written" in.

And it's hard to believe that DNA is somehow specifically optimized for intelligence, because it was first used in completely unintelligent creatures and appears in exactly the same form now.

Now, it may be the case that DNA's physics are tailor-made to efficiently code for useful physical structures. But intelligence is a level of abstraction above that, and we're all but guaranteed that very little compressibility exists in the "language" for such higher level constructs.

What would an argument be without a strained analogy: if you're writing a complex web application, the size of your application is roughly independent (within an order of magnitude, for sure) of the architecture that it will ultimately run on (where by "size of application" I mean the size of everything that it takes to run it, interpreters, frameworks, etc.). Sure, the binary size might be slightly different depending on whether you're writing it for ARM, PPC, x86, etc., but not hugely different.

We would be extremely surprised if on three platforms your executable weighed in at 10 mb and on a fourth (which had a few different machine level instructions) it compiled down to 10 kb - the only way we could imagine that happening is if someone somehow "cheated" and embedded large parts of your actual application logic into the processor, adding specialized Ruby on Rails instructions to the machine code, or something like that. :)

Encoding and dynamics details may make differences in compressibility, but past an order of magnitude, you're really talking about "cheating", and it's an Occam's Razor problem to assume that nature optimized in such a way for intelligence...


I think the article's original point (and mine as well) was that considering the size of the code (i.e. the DNA) as a measure of the complexity of the task is totally disingenuous when you don't have (to use your analogy) the web server, the libraries you're calling, the parser/compiler/linker for the language, the operating system for the server along with its drivers/TCP stack/etc., the processor it runs on, the mother board, or the storage. In order to turn 10,000 lines of code into a web application, you need millions of lines of code (and Verilog or whathaveyou) in terms of infrastructure.

The problem for AI is not just encoding the DNA, as it were, it's in building all those other pieces around it. Estimating the complexity of building a software brain based on the amount of information in DNA is like estimating the complexity of building a web application using 1950's hardware. "It's only 10,000 lines of code! How hard can that be? All we have to do is write the code, plus the frameworks, programming language, and operating system, plus do all the hardware design."


It's only 10,000 lines of code! How hard can that be? All we have to do is write the code, plus the frameworks, programming language, and operating system, plus do all the hardware design.

Except that DNA doesn't even come close to being a high level language, since the low level details were not specifically designed for compressibility of the code (in fact, the low level details, the "bare metal ops", are pretty much fixed by the for-all-intents-and-purposes random laws of physics, which means we shouldn't assume that they enable any particularly high compressibility ratios for anything).

So a more apt comparison would be if we saw an assembly language program in some strange incomprehensible assembly language and said "It's only 10,000 lines of operations on the bare metal! Now all we have to do is figure out how the hell the system this runs on works, and how we can translate that code into a more sensible (and probably vastly more compact) form."

...which might even be a harder problem, to be fair.

Kurzweil's essentially proposed evidence of existence of an algorithm of length N that does whatever it is we mean by intelligence. Which is fine, and I think is probably correct (IMO, even his estimate about the minimal amount of code it would take is probably too high, though that's another story).

But he's overlooking the fact that the mere existence of such a compact algorithm doesn't help us find it at all, and I think a lot of the complaints others have made about his statements are more aimed at that leap of logic, not the existence claim itself. I completely agree that even brain scanning tech might not help us simulate the important bits very well, even if we did have access to that tech and computers fast enough to run the sims.


Good point. Not only that, but a case can be made that one cannot build such as system as an end stage.

Instead, it appears that ontogeny must recapitulate phylogeny. The system must develop over time as a result of inputs (and the remembered collection of past inputs encoded in the DNA). It would be as if in order to build Twitter with Ruby on Rails, you first had to program a tax calculation application in Cobol on a 1950s mainframe.


So it's a bit like a program, with sequences also selecting a different Turing machine? (which determines how that subsequence is interpreted.)

Because the Turing machine is selected entirely by the sequence (the protein folding caused by the laws of physics is selected entirely by the sequence), the number of possible results (the number of different shapes that could result) is limited to the number of different sequences. That is, the information in the phenome seems to be limited by the information in the genome.

If you think of it as a two part message, with the first part encoding a model, and the second part configuring it, then the DNA can be seen as the configuration, and the laws of physics as the model (which isn't actually coded anywhere like DNA - we'd have to write that ourselves.)

This model is constant over all life, so that DNA from all species (plants and animals) share the same "model" (laws of physics that cause protein folding etc.)

Another example of a two-part message is that the first part is a programming language, and the second part is a program written in that language. For a high level language (esp with libraries), it's obvious that a very short program might do an awful lot; but the true information content is not that program alone, but the total including the language and libraries it uses.

However, and this is my point, I don't believe that the laws of physics have been constructed so conveniently that they provide as much assistance as a high level language with libraries. At most, nature may have stumbled onto hacks in physics (like surface tension, interfaces and gradients) and exploited them. Actually, given how long it took to get life started, perhaps it had to find a whole bunch of clever hacks (randomly recombining for billions of years over a whole planet) before it came up with a workable model (that is, the model that DNA configures.)

hmmm... we might be able to estimate the information content of the 'model' by how many tries it took to come across it.


> That is, the information in the phenome seems to be limited by the information in the genome.

I think that is a very original use of the word 'limited'; 'limited' in this case still holds enough room for random chance to come up with human beings.

For all practical purposes that 'limited' might as well be unlimited.


"Finite" would probably have been a better choice.

I have a micro SD card, smaller than my little-finger nail, that holds 4GB - 8x more than the human genome (using the article's figure of 4 billion bits). And that's pretty much the lowest capacity you can buy. Yet that amount of information is limited/finite: the possible states that that memory can hold are limited/finite.
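For what it's worth, the arithmetic behind that "8x" (a quick sanity check, assuming the card's "4GB" is marketed in decimal gigabytes):

  card_bits = 4 * 10**9 * 8       # "4GB" in decimal gigabytes, expressed in bits
  genome_bits = 4 * 10**9         # the article's figure for the human genome
  print(card_bits / genome_bits)  # -> 8.0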

BTW: I found the absurdist levity in the top 10 comments or so of the reddit version of this thread a welcome relief - and also some penetrating insights, concisely put: http://www.reddit.com/r/science/comments/d24c8/ray_kurzweil_...


Just because the underlying structure is simple does not imply that the system is predictable. For example, consider the three-body problem: it has a simple structure, yet there are known limitations on our ability to predict its configuration at a later time in the future.

Just because a system has a finite description does not mean we can predict its behavior at a later time!

Systems such as our brains are extremely chaotic, and even if we were to simulate one and write programs to change its behavior by altering the underlying code bit by bit, it would be akin to flapping butterfly wings to generate storms.
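To make the sensitivity point concrete, here's a toy sketch - not the three-body problem or a brain, just the chaotic logistic map with made-up constants:

  # two trajectories of the chaotic logistic map, differing by 1e-12 at the start
  def logistic(x, r=4.0):
      return r * x * (1.0 - x)

  a, b = 0.3, 0.3 + 1e-12
  for _ in range(60):
      a, b = logistic(a), logistic(b)
  print(abs(a - b))  # after ~60 steps the tiny initial difference has grown until the trajectories are unrelated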

Also, analyzing such a large system would be "at least" an NP-complete problem, assuming we can even recognize a solution [compile and run a modified genome] in P.


Thing is, human DNA seems to grow into human babies most of the time, yet at molecular level the environment where it grows is a random, chaotic mess. The rough physical properties, like temperature, and the general chemicals around are stable, but beyond that things are constantly moving and sloshing around in a very unpredictable way.

The idea with the three-body problem is that after some time has passed, we have utterly no idea where the bodies have ended up. Ova don't grow into random jumbles of cells, most of the time they grow into babies.

It takes a lot of very specific information to grow into a baby instead of some entirely different arrangement of proteins. So either the environment needs to be feeding some rather specific controlling information that makes most ova grow into normal babies, or the cellular machinery itself has a system which compensates for external disturbances and constrains the design to mostly what the DNA directs it to be. As far as I understand biology, it's mostly the latter case.


Yes, but when we are talking about "programming the brain" we are assuming that changing a few base pairs would change the behavior / structure.

     Ova don't grow into random jumbles of cells, most of the time they grow into babies.

Exactly, but can we change a few base pairs and observe the effect? When we talk about reprogramming, we are trying to figure out the cause-and-effect relationship; if a system is chaotic it is very difficult to figure out that relationship.

Three bodies don't mutate into four bodies, and the weather doesn't change into an ice age in an instant, but at the same time we cannot reprogram the weather by introducing small changes.


...we'd need an impossible computer that could simulate years worth of exact quantum-level physics in a cubic meter of space...

...it might take a little more than the ten years to come up with any sensible code with hope of growing into an AI though...

Dude, you are all over the map.

... he seems to be using DNA as a measure for the amount of irreducible complexity that needs to go into a system that will end up with the complexity of a human brain.

At best, you could say it's a measure of the amount of irreducible complexity for an encoding of the required proteins. We don't seem to have a measure of the system, by which I mean the thing that models the relationships and interactions of the proteins (and their components) with each other and their environment.


"Instead, he seems to be using DNA as a measure for the amount of irreducible complexity that needs to go into a system that will end up with the complexity of a human brain."

And Myers is saying that DNA is such a woefully inadequate measurement of complexity that it barely counts as wrong.


  he seems to be using DNA as a measure for the amount of 
  irreducible complexity that needs to go into a system that
  will end up with the complexity of a human brain.
Before an Irreducible Complexity crackpot uses that argument: this complexity is of course not irreducible. It just takes millions of years and the exact same mutations and variations in circumstances to reproduce it exactly. A shorter time period, with different mutations and circumstances, could lead to something with a similar amount of complexity.


Myers is attacking Kurzweil on the assumption that Kurzweil is actually proposing that the brain's structure should be reverse-engineered from DNA, which would be intractably hard. Only Kurzweil isn't saying this anywhere. Instead, he seems to be using DNA as a measure for the amount of irreducible complexity that needs to go into a system that will end up with the complexity of a human brain.

This is like saying that the underlying complexity of the Mandelbrot is the set of pairs of real numbers.


Since, going by your analogy, we're talking about how hard it is to build a machine that will draw us the Mandelbrot set, isn't that exactly the point?


If, by "draw us the Mandelbrot set" you mean the whole set, and not tiny, low-resolution approximations of the set then yes.


From my limited understanding, it's simpler than it appears. It's a wire routing problem. Each neuron "knows" where it is going based on the proximity of its fellows, just as other cells know to become a liver based on who their neighbors are. Given a nascent brain, it is trained into an intelligence by external input. Neurons are pruned and connections strengthened.

As I see it, a significant problem is designing a substrate in silicon, or whatever, that has the requisite complexity. I would not be too surprised to find out that the layout program for an AI is not too different in complexity from today's largest software projects.


There's a psychological principle whose name I've forgotten: by setting the stage and framing the debate correctly, you can cause people to accept assumptions without even thinking about them.

This conversation here on HN is a great example of that. Simply by the way the article is written, it is being taken nearly as fact by most participants that a human-scale AI simulation must work by physically simulating the brain. This may ultimately be true but there is no a priori reason to believe it. The brain may implement something that can be simulated "close enough" by a much simpler computation system.

Chaos is chaotic, obviously, but the human brain is a pretty fuzzy system too. It can't be too pathologically chaotic; people speak as if getting the 15th decimal place wrong will blow up the system, but the brain simply cannot be that sensitive, or the removal of a single neuron would break our brains. Our brain state must be at least metastable to work at all. Removing a neuron or getting something wrong in the 15th decimal place may result in some small change of behavior three years later vs. not removing it or getting it right, but our brain states are already so fuzzy and noisy that's not going to be the stopper.

The stopper will be to see whether or not there is a higher-level simulation that can be run that is less complex than simulating the physics entirely. The secondary question is whether we can make something that we would call human-intelligent even if it turns out we can never "upload our brains" without critical data lossage occurring. That would be something as intelligent as us that is nevertheless fundamentally incompatible with human biology, with neither able to simulate or understand the other. I can make coherent arguments either way, as can many people, but by framing the question as physical simulation this has not been one of the more intelligent debates on the topic we've seen here. Physical simulation is one possible path to AI and brain upload, and not even the most likely or interesting one.


No, people are talking that way because it's obvious to the participants that no one knows how to do the higher level simulation. And there is no hope of anyone figuring it out any time soon.

So people figure why not "run the program" that already exists, and that's what this conversation is about.



This rant unfortunately fails to clearly demonstrate how Kurzweil errs. It goes on about complexity and emergence, but why would complex interactions not emerge from a computer simulation just as they do in the real biochemical system?

Nevertheless, my gut feeling, too, is that Kurzweil is mistaken. I can't quite put a finger on it yet, but at least one problem I see is this: Kurzweil seems to suggest that the observation that the genome consists of only 50MB of data (after compression) somehow gives us an upper bound on the complexity of the system. I'd however suspect it rather gives us a lower bound: factor in all the epigenetics, external interactions, the not necessarily simple rule set provided by physical chemistry (this is not in the genome, obviously), etc etc, and the problem may be quite a bit larger.

Take for example the way we currently believe gene transcription promoter networks to work. The combinatorial nature of those interactions means that even though the underlying data is "only" a few megabytes, the system you end up simulating gets very big very quickly.


why would complex interactions not emerge from a computer simulation

One answer is "They will, and surprisingly quickly. But they will be a completely different set of complex interactions than are observed in the real world, because of some roundoff error in the binary representation of the Nth digit of some apparently unimportant constant. Unfortunately, because the system is complex, you'll probably spend the rest of your career trying to track down that error, and fail."

Another answer is: They would, if the simulation was comprehensive enough. Unfortunately, phase space is big. You just won't believe how vastly, hugely, mind-bogglingly big it is. Seriously: Your mind reels when confronted with the number of different molecular interactions going on inside the "simplest" single-celled prokaryote, so you abstract it away, almost as a reflex, to stop yourself from going mad. Then you abstract away the first-order abstraction. Then you keep going. Soon you begin to imagine that you can model an entire collection of a trillion organisms, just as a naive programmer imagines that they can rewrite Windows in three days if only they use a powerful enough language. It's a mere matter of programming!


If you wanted an exact simulation, the n-th digit is important. But if you just want some system where some other (possibly intelligent) behaviour emerges, the n-th digit is not important.


Well, it would by all means be awesome if we could build an intelligent system that didn't match the one we already have.

And we even have an existence proof that such a thing is possible, given enough design time. Unfortunately, the existence proof says nothing about the odds of doing so very quickly -- in less than, say, a million years, which is very quick by historical standards.


As for the first answer: we all have to learn to deal with roundoff error. That's not in principle an obstacle.

The other answer contains an implicit assumption that's not obviously correct: you suggest that complexity only arises when you enumerate every possible dimension of phase space. But physical simulations have been very successful at reproducing complex behaviour from simple rules, without taking into account every particle's state vector.

Finally, did you really try to equate my statement to the statement that Windows could be written in three days? ...


No, I'm poking fun at Kurzweil's 2020 deadline, not at you. You appear to have been wise enough not to specify a deadline...

As for this statement:

physical simulations have been very successful at reproducing complex behaviour from simple rules

Absolutely, but it doesn't follow that every complex behavior can be reproduced from simple rules. To overgeneralize from success in one field is the occupational illness of futurists. It's certainly a key problem for Singularitarians, who tend to get so enthusiastic about Moore's Law that they forget that most of the world has nothing to do with microelectronics.


Haha, well I guess the fact that for a moment I felt that was condescending mostly says a lot about me ;)


Biological systems are approximate. I would assume that they can deal with a little rounding error.


At the lowest level biology is physics. And physical systems are not approximate, they are exactly what they are, and a small round-off error can give a huge difference in the specific outcome, even if on a macroscopic level that outcome might be indistinguishable from the 'real' outcome.

So the real question then becomes: does biology tolerate working on an approximation of the underlying physics, and does that simulated biology still have the ability to exhibit intelligence? I think the first is a maybe, the second a yes, but I couldn't give you any reasons why, other than that it might be that our biology needs 'its' physics to operate and that probably anything Turing-complete has the potential to exhibit intelligence, regardless of whether or not we find a way to achieve that.


This rant unfortunately fails to clearly demonstrate how Kurzweil errs. It goes on about complexity and emergence, but why would complex interactions not emerge from a computer simulation just as they do in the real biochemical system?

Because so far, despite the best efforts of many geniuses and heroic computing resources, our simulations don't even reliably predict the real-world outcomes of far, far simpler systems. Ilya Prigogine won the Nobel Prize in chemistry for demonstrating that sufficiently complex systems display emergent behaviors that can never be entirely predicted by studying their components in isolation: http://en.wikipedia.org/wiki/Ilya_Prigogine#The_End_of_Certa...

Kurzweil is hopelessly out of his depth in these arguments and is talking nonsense. Personally I'll be very surprised if we can construct anything approaching human intelligence in my lifetime.


BTW how do you define human intelligence? What is it that we wouldn't be able to construct?

Put another way, what construct would qualify as approaching human intelligence?


My definition of human intelligence is intelligence that is broadly adaptable, self-training, and able to communicate via a system of symbols as ambiguous and semantically rich as human language.

For instance, an intelligence that could be taught to read English and then have a reasonable conversation about a contemporary novel, with its own insights into the style and themes of the book, would qualify.

That said, I do expect to see great strides in the sophistication of machines in the next 30 years. They don't have to think like people to be useful.


"It goes on about complexity and emergence, but why would complex interactions not emerge from a computer simulation just as they do in the real biochemical system?"

I think the point he is making is that if your goal is to simulate the human brain you also have to simulate and thus understand all the little details of biology because transistors don’t magically have the same properties as proteins.


Yes, I think that's his point too. But if you believe that the little details of biology emerge from the underlying physics, then maybe you only need to code the fundamental rules and can have all the rest automatically.


I think his point was that we can't code the fundamentals because those require linear time protein folding. And protein folding is one of those hard CS problems.


At this point we don't even have accurate models for most proteins, let alone knowing how to predict what they do.

My current project is a search engine for protein chain geometry. We only have ~20% of the known proteins in our database because the data on the other 80% isn't accurate enough to be useful.


Well that's exactly my criticism: most of the author's objections boil down to "it's hard". Note that I don't disagree with him (nor with you), but this just doesn't help to expose the real issues with Kurzweil's estimates. To "protein folding is difficult" one can always reply "but we'll solve it in the next 10 years" - which I think is what the singularity-folks would say.

My simpler point is that Kurzweil's not taking a useful measure for the size of the system we're solving. (By the way, he plays the same kind of trick on his audience when he's pointing out there are only a few billion neurons in the brain - as if that were the only level of complexity in the brain).


Well that's exactly my criticism: most of the author's objections boil down to "it's hard"

No, common English use of "it's hard" means something completely different from CS "hard". CS hard means NP-complete, which in English translates as impossible - impossible because of well-understood mathematical reasons.

Quantum computers may solve it, indeed real life protein folding may have quantum computer-like properties.


Well, this article addresses a specific point which Kurzweil is making and says that what Kurzweil said is simply wrong, not just hard.


Yeah, you "only" need to perfectly simulate the entire universe. No big deal.


I think his point that the genome is not the program but the data was very illustrative.

If you're a computer guy, you should clearly understand what his fundamental disagreement with Kurzweil is about.


Actually I didn't find that very enlightening at all. Code or data - I'd say it's both, depending on your perspective. You need those instructions to build your proteins, after all. It's very lispy ;)

The point is, he seems to suggest that the genome is all you need, when clearly that's not true.


Another sleight of hand with that statement: Kurzweil's converting compressed data to lines of code, but humans don't write in high-entropy binary. So it's 800MB of code, not 50MB.


Not only that: it may be "only" 800 MB of code, but we have no access to the CPU it runs on.


I took a few stabs at expressing that idea before giving up. I like your attempt better than mine:

Even the 800MB of base pairs may have higher entropy than the machine language we're used to. 2000 lines of lisp or haskell are worlds away from 2000 lines of assembly.


Just look at compressed source code, too.


His last comment,

The media will not end their infatuation with this pseudo-scientific dingbat

chimes with the large majority of bold scientific claims that appear in the press. For example, not that long ago the press jumped on Craig Venter's (http://bit.ly/uEC5) 'artificial cell' (http://bit.ly/c27AL5), hailing it as the beginning of man-made organisms and making bold predictions about the future of life itself, riling up environmental groups no end. (I'm not saying that wasn't a great achievement. But all his team really did was take out one DNA tape and put back an identical, if newer, version. Bread and yoghurt manufacturers have been doing a smaller version of that for a long time. Not exactly playing God.)

It would be nice if there was a scientific-bullshit detector that made sure the press didn't go crazy over wild claims. Proposal for a startup, anyone? :)


The media will not end their infatuation with this pseudo-scientific dingbat

This is one reason that many, many times on HN when there is a link to a blog post about some news story on a science discovery, I post the link to Peter Norvig's article on how to evaluate research,

http://norvig.com/experiment-design.html

as it seems that most readers need more practice in critical reading of statements about science. PZ is one of the few bloggers who knows most of those points already, but he frequently writes about other people who forget them, so here in this thread too I'll remind HN readers about Norvig's advice on how to read about science.


@tokenadult that's some good shit. :)

This is slightly off-topic, but here goes anyway. I've long had an issue with the huge disparity between what humanities/arts people (including the large majority of the press) know about science, and what scientists know about the arts and humanities. Most scientists I know are more than able to hold their own in a conversation about, say, a good book, but 99% of everyone else I've ever met doesn't know/want to know the second law of thermodynamics.

I'm not pointing fingers here, I just think there's a serious lack of communication between arts and sciences. I think it's partly this lack of general scientific knowledge that makes the humanities/arts dominated world of the press believe pretty much anything a scientist says. And then, to make a good story into a great one, it's blown out of proportion. Ho hum.


The artificial-cell experiment was pretty interesting. It demonstrated that epigenetic information is not necessary for a viable cell.

More than that, though, it's a "hello, world" — although the cell itself didn't do anything useful, now we have the compiler working, albeit expensively. Now we can do experiments like the following:

- removing introns entirely to see if that damages viability;

- inserting the gene you want at a specific place in the bacterial genome instead of splicing it in at some random place.

Basically, it's a "control group" for a much more precise set of experiments than we've been able to do in the past. It's easy to take the ability to do "hello, world" for granted as a programmer.


It was definitely interesting, but it wasn't the breakthrough it was hyped-up to be. Seeing it as a 'hello world' is a cool idea, though. The hardware of the cell was ready and waiting. So Venter and his team rewrote an old program and gave it a spin. Nice.

This is more likely a lack of understanding on my part, but I'm not sure where epigenetics comes into it? The majority of the cell components - i.e. all the organelles, chemicals, etc - were already present and arranged in the 'surrogate' cell. So all the 'epigenetic' stuff was already in place. But please correct me if I'm wrong.

Not so sure about the random splicing part, either. Food manufacturers have been inserting genes into specific places on bacterial plasmids for a long time, using directors like codon relationships and ionic interactions. Again - if I'm outta line... :)


Oh, I meant that DNA methylation (resp. phosphorylation, acetylation, etc.) wasn't preserved through the uploading / "DNA printing" process. (And I don't think the cells in question had histones.) You're right that there is potential epigenetic variation in other parts of the cell as well, but I was thinking specifically of DNA methylation.

I don't actually know much about how position-specific current transgenic techniques are, so I could be wrong about that.


I'm not so sure about the simulation, but Ray's right that the brain cannot be more complex than the data that specifies it, speaking information-theoretically. A biologist probably wouldn't get this, because he has too much knowledge of how difficult and complex the actual translation is. It's a mathematical idea, like non-constructive proofs, which freak out sensible folk - and rightfully so. (EDIT no offence intended, they freak me out too)

The only other source of information is the non-genomic environment - extra-nuclear DNA like mitochondria, and the womb (which is arguably already specified in the genome, unless mother nature has done a Ken Thompson http://cm.bell-labs.com/who/ken/trust.html at some point.)

But it's weird to claim that 50% of our genome encodes the brain. Really? Perhaps it's just that 50% is required by the brain, much of it being foundational to the whole organism (like standard libraries.)


> I'm not so sure about the simulation, but Ray's right that the brain cannot be more complex than the data that specifies it

Which would be true if the DNA specification for that particular part of the body were the only thing that specifies the brain. Myers is pointing out that that assertion is patently false. The environment of the developing creature and the interactions between cell types and their environment (and themselves) are a giant information-content multiplier, and the DNA need not explicitly specify any of this information for it to exist and be relevant.

Bringing this to a familiar compsci example: imagine software for creating neural net recognizers. You can look at the source code for a net and say, "This will take N inputs and produce N outputs." You can look at a finished classifier and say, "Ah, I see what this does! It tells the airbags in this car when to deploy!" But that's as far as you can go without the training data that was used to train the classifier. This is a doubly good example because it's often very difficult to determine HOW a complex neural net is doing what it does, but it's fairly easy to explain how to train one to do that task.
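A toy version of that point, just to make it concrete - the learner's "source code" is identical in both cases, and everything the two finished classifiers disagree on comes from the training data (the tiny perceptron and the AND/OR data sets are of course made up for illustration):

  # identical learning code; only the training data differs
  def train_perceptron(data, epochs=50, lr=0.1):
      w = [0.0, 0.0]
      b = 0.0
      for _ in range(epochs):
          for (x1, x2), target in data:
              out = 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
              err = target - out
              w[0] += lr * err * x1
              w[1] += lr * err * x2
              b += lr * err
      return lambda x1, x2: 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0

  inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
  AND = train_perceptron([(x, int(all(x))) for x in inputs])
  OR  = train_perceptron([(x, int(any(x))) for x in inputs])
  print([AND(*x) for x in inputs])  # [0, 0, 0, 1]
  print([OR(*x) for x in inputs])   # [0, 1, 1, 1]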


Environment being another source of information was mentioned (in my 2nd paragraph), but interactions that are determined by their starting points don't add information - even if the result looks more complex. This idea (and the comment about the Mandelbrot set) is similar to "pi holding all the information in the world" (assuming it's normal, meaning not repeating/regular) http://news.ycombinator.com/item?id=1567225

Whereas the training data for a neural net is extra information - but in utero, what is the training data that is not a predictable consequence of the genome? (ex utero there might be an argument, since humans not exposed to language don't develop it; although a group of isolated humans have developed language spontaneously - complex grammatical structures, the whole bit - which makes sense, given the variety of human languages. This supports the idea that language is genetic, or as Pinker provocatively describes it, the language instinct).

EDIT here's a thought experiment to illustrate why predictable interactions don't add information: taking the figure of six billion bits for the whole human genome, this means it can specify 2^6,000,000,000 different genomes (a lot). You can imagine changing one single bit, and all those complex interactions leading to a slightly different human phenome. Most of the possible phenomes wouldn't be a living human, or even anything recognizably human (or living). But the crucial point is that you simply can't specify any other phenomes (apart from those 2^6,000,000,000). You've changed all the bits - what else is there left to change within the genome?


> Whereas the training data for a neural net is extra information - but in utero, what is the training data that is not a predictable consequence of the genome?

For starters, the mother's chemistry which is a function of her DNA and environment. And the mother's physical environment, diet, health, etc. These are things that have no representation in the genome but can radically change brain structure in a developing mammal.

I'm not claiming the environment magnifies existing information, I'm claiming it's part of the total set and Kurzweil (and you) are vastly underestimating the amount of state that is associated with the exact details of a developing organism. This seems to be the thrust of Myers's point (at least in the beginning): you are simplifying and you are not allowed to do that.

Myers then follows that point by saying that even if we do manage to isolate all that information and understand it, we actually don't have certain critical problems like protein folding solved, or even reliably simulated yet.

Even if you hand-wave all this and assume it's possible, the notion that 10 years is the timeframe for this seems... excessively optimistic.


Actually, we're not in agreement.

> vastly underestimating the amount of state that is associated with the exact details of a developing organism.

Sir, kindly indicate where Myers makes that point. I read him as going straight into protein folding, and the complex interactions required for the expression of the genome. (I believe you agree that state in the environment that is caused by the expression of the DNA is not information originating in the environment - ie that this is merely magnifying information, as you put it.)

While there is an incredible amount of state created, in the form of gradients and so on, this is directed by the genome...

Or maybe this is our basic disagreement: do you think that an image of a Mandelbrot set creates information as it is generated (and that pi creates information as each digit is calculated), or do you think that the information is defined within the algorithm that calculates it? [there are other issues, but just taking this one alone]

So I'm not 100% sure what you mean; please clarify if I understand you correctly.

While there's information in the environment, there's not very much: consider the white and yolk of a chicken egg. Like letters etched in metal with acid, most of the information is in the placement of the letters; the exact nature of the chemical reaction contributes a very small amount of information. Can you indicate why you think there is a great deal of information originating in the environment?

Yes, the health of the mother can have an effect, but that's if she is unhealthy and development does not proceed normally. Assuming she's healthy, the specific condition of the mother doesn't determine whether or not a human being is created. Are you suggesting otherwise?


Looks like I placed too much weight on your parenthetical "and themselves"

> interactions between cell types and their environment (and themselves) is a giant information content multiplier

and we're actually in agreement; my first comment included:

> I'm not so sure about the simulation

> The only other source of information is the non-genomic environment - extra-nucleur DNA like mitochondria, and the womb (which is arguably already specified in the genome, unless mother nature has done a Ken Thompson http://cm.bell-labs.com/who/ken/trust.html at some point.)


Is this sort of like having a pre-processor and system code to get a program loaded, linked and running?


Perhaps, only the pre-processor would be in a feedback loop with the early stages of the program's life-cycle. Typically the linker and pre-processor are one-way operations. In a developing organism, the environment feeds back on the developing cells and triggers new responses, which cause new environmental changes, which cause new feedback.

Biology is full of bizarre examples of this process going awry and giving our bodies bizarre features. Dawkins's example of the laryngeal nerve, which makes a crazy loop down into the mammalian chest cavity, is the classic one.


Maybe, if you consider the pre-processor to be, well, reality...


The only other source of information is the non-genomic environment - extra-nucleur DNA like mitochondria, and the womb (which is arguably already specified in the genome, unless mother nature has done a Ken Thompson http://cm.bell-labs.com/who/ken/trust.html at some point.)

That just gives you a newborn baby's brain, which is a pretty poor standard for displaying "human" intelligence. If we didn't get any smarter than that, we'd be pretty dumb by animal standards. To get to "human" intelligence, you have to be able to simulate a rich environment for the brain to learn from. You also have to model the growth of the brain and its response to stimulus -- the physics, chemistry, and biochemistry of the brain. DNA doesn't have to do that, because it runs on a platform with that functionality built-in (i.e., the real world.)


Yes, but it gives you the newborn baby's brain in toto, including its ability to grow up into a normal adult human. If you get to a newborn's brain, you are 99.9% of the way there from the AI side. By the time we get there, feeding it stimulus will be a relatively simple problem by comparison.


Not if the world is actually more complicated than a newborn baby's brain. Can we simulate a baby's interaction with its mother without simulating the mother's brain?


We have a world in hand. We're not trying to build a world simulation, we're trying to build intelligences. By the time we get this far, the infant will probably be embodied, and we can use real "mothers". Some speculate that a non-embodied being can never become intelligent.

Bear in mind we are talking about at least 20 years hence, in my mind.


Yup, wire it up to a baby-shaped I/O package and ask one of your grad students to take it home with her.


> but Ray's right that the brain cannot be more complex than the data that specifies it

Indeed. However, in terms of Kolmogorov complexity http://en.wikipedia.org/wiki/Kolmogorov_complexity one can argue either that the DNA alone is the resource or, with more common sense, that the DNA plus all the appropriate resources over a lifetime, up to the point where you take a measure of the whole brain, together funnel into a resource from which you can describe a brain.

And this is only about Kolmogorov complexity, and under the assumption that the brain is a discrete system. I am not very well read on biology, but I think there have been recent advances showing quantum effects playing a biological role in some organisms. Even disregarding our current lack of knowledge, we can still argue whether the brain is discrete or not; proving it either way would be a major contributing factor in our understanding of it, I think.


This strikes me as very similar to a Mandelbrot fractal. The fractal is overwhelmingly complex and intricate, but wholly mathematically described by a very simple formula. The complexity of the brain is similar: A few simple cells and proteins can interact in extraordinarily complex ways.

Some biologists understand this just fine, thank you very much.
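To make the simple-formula point concrete, here's a minimal sketch - the whole "specification" is the iteration z -> z^2 + c, and the grid bounds and iteration count below are arbitrary:

  # crude ASCII rendering of the Mandelbrot set from its one-line rule
  for im in range(-12, 13):
      row = ""
      for re in range(-40, 20):
          c = complex(re / 20.0, im / 12.0)
          z = 0
          for _ in range(30):
              z = z * z + c
              if abs(z) > 2:
                  row += " "
                  break
          else:
              row += "*"
      print(row)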


In fact, some biologists have even written books on the subject:

http://books.google.com/books?id=lZcSpRJz0dgC&lpg=PP1...


But even looking at it in an information-theoretic way, what is being stated is that the instruction set is far more powerful than the binary analogy Kurzweil is using. The instruction I give: "Cure Cancer" has 11 bytes. But that instruction implies a ridiculously complicated set of sequences that we haven't figured out yet.

In this situation, look at the genome as the instruction set for protein construction and folding, an ongoing problem in research we have just begun to investigate. The information contained in the genome is combinatorially descriptive and therefore not as simple as made out to be, if you define information as the amount of "surprise" in the outcome.


I don't think it can be compared to instructions. More like just the startup conditions, like a ROM or something. All analogies that compare computers to biological things are terrible. Mine is as much so as any other.


I see it as an upper bound for the Kolmogorov complexity of the brain's structure. I am very dubious that the first successful attempts at doing this will be as efficient as the genetic code though...


You are making the assumption that the quantity of information is a good measure of the complexity, which is far from obvious. It is also wrong in general for the genetic code (for example, many organisms have a much bigger genome than humans do, even though they are arguably much simpler - e.g. rice has more coding genes than humans).

Also, having a few set of simple rules is not enough to understand or reproduce a system in general.


but Ray's right that the brain cannot be more complex than the data that specifies it, speaking information-theoretically.

You're conflating two very different concepts of complexity here.

One is the complexity of a static state of information, and the other is the complexity arising from dynamical systems.

As roadnottaken pointed out, fractals are perfect examples of systems that are described by very "simple" formulas, yet contain infinite complexity. It could therefore be said that the simple equation of the Mandelbrot set represents infinite complexity.

However, if you take a particular iteration of the formula, then you can get a finite concept of its complexity, i.e. how many bits it takes to represent the image you're seeing.

So, to say that the "the brain cannot be more complex than the data that specifies it" is true in one sense, but completely useless in another.

To put this into terms of the Mandelbrot, you can gather up all the bits that represents some particular iteration of the Mandelbrot, but that doesn't tell you anything about how it works, or even how to generate the next frame. You need the equation for that.

That's just one place where Ray fails. The second is the lingering question of whether computers in their current state are even capable of "simulating" a brain. It's still an open question what role the non-determinism found in quantum mechanics plays in the brain and in the interactions of various chemicals. It was recently shown that DNA relies on QM entanglement to "hold it together". If it turns out that non-determinism and QM effects play a crucial role in biology (which they almost certainly do), then the very rigid and deterministic system that is the CPU may simply be incapable of simulating a human brain.


The objective is to simulate the real-time activities of the brain, not the evolutionary-time activities that were and are acting on the brain.

Continuing from your extension of the fractal analogy, the former is more like an "iteration of the Mandelbrot" while the latter is "how to generate the next frame".

We do not need to know how our human-precursor brains worked, nor do we need to know what our human-successor brains will be like to successfully simulate current-human-brain intelligence.

It seems plausible to me that we will be able to understand how to simulate the functions of the brain without necessarily simulating the physical universe and its remarkable evolutionary unfolding--which seems to be the ultimate level of complexity and one that I agree is far beyond us.


"he seems to have the tech media convinced that he's a genius"

Looks like he understands the brain just fine, to me...


About Ray Kurzweil ~ "He's actually just another Deepak Chopra for the computer science cognoscenti."


I really understand where this article is coming from. Spend a decade in a biology laboratory working day and night just to figure out a tiny subset of the role of one simple protein, and you'll find that pseudoscientific handwaving no longer sounds like a pleasant breeze, but like the grating of sharpened fingernails on a chalkboard.

(For physicists the equivalent torture is a movie, which goes by the name of "What the Bleep...", that came out a few years ago. OMFG if you want to drive me into a towering rage just show me ten minutes of that film. It's like watching someone make spitballs out of the manuscript of the Eroica symphony.)


Wow - that "What the Bleep..." does look remarkably bad.

The trailer is Poe's Law in action: http://www.youtube.com/watch?v=m7dhztBnpxg

I guess you don't want to watch that again? ;-)


It is deadly dangerous for me to watch a clip from that film. It could inspire dozens of hours of rage-filled but closely-argued physics lectures. Given the slightest provocation, I will go all xkcd.com/386 on its ass and my actual career will die of neglect.

Now, I need to go calm myself by fixing some bugs before I start to throw things. ;)


For what it's worth, I was having a conversation with my son (he's 11) about the key differences between scientific and religious beliefs - I think I'll use that film as an example of how it can sometimes be difficult to tell the difference - especially when people present what are essentially religious beliefs using terminology derived from science.

(NB I remember reading some Erich von Däniken books when I was 9 or 10 and getting awfully excited - I was quite upset when I found that people could just make stuff up and present it as science).


That's the extra-infuriating thing. In theory, I'm completely down with the project of using parables based on science to convey religious ideas. Religion is built out of the raw materials that the world gives you. If we live in a world full of science and technology, we should expect our parables and our myths and our stories to be filled with science and technology.

(Cue a chorus line of Doctor Who cosplayers.)

So, in theory, I could have been okay with What the Bleep. In practice, however, it is just horribly grating -- way more grating than any SF, even the dumbest SF.


"Erich von Däniken"

The Nova debunking of Chariots of the Gods was epic, and an important lesson to me as a high school student not to take scientific-sounding arguments at face value. Also, an important lesson in not underestimating human intelligence and creativity. Most of the debunking was simply figuring out how ancient peoples did things we now think are impossible without modern technology.


Turn that rage into a popular webcomic, then you could start a new career!


I keep meaning to practice some drawing. I'm half serious about that.

One of my personal heroes is this guy:

http://www.timhunkin.com/

That Randall Munroe is also a personal hero probably goes without saying at this point.


My programmer's understanding of the argument is this:

Even if you could reduce the brain to some sort of bytecode, an interpreter is still necessary to run that bytecode. For instance, a python program might be a few bytes, but the interpreter is still a few megabytes. Yet both are necessary to run the program. Who knows how large a brain bytecode interpreter is going to be, but probably very large.
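A rough way to see that asymmetry from a Python prompt (the exact numbers will vary by platform and build, and on some systems most of the interpreter lives in a shared libpython rather than the executable itself):

  import marshal, os, sys

  source = "print('hello, world')"
  bytecode = marshal.dumps(compile(source, "<toy>", "exec"))
  print(len(bytecode))                    # a few dozen bytes of "program"
  print(os.path.getsize(sys.executable))  # typically several MB of interpreter, before the stdlib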


The interpreter for DNA is a living cell.

There are two problems with this argument:

1. For the interpreter to substantially reduce the size of the DNA needed to encode a program to build an intelligent system, that interpreter needs to be optimized to reduce the complexity of such programs. However, as far as we can tell, ribosomes and protein folding work exactly the same way in nearly all living cells, from snottites to kelp. It isn't plausible to suggest that each Salmonella cell contains hundreds of megabytes of information that's optimized for producing intelligent systems. Even if Salmonella contains hundreds of megabytes of information, it would amount to a proof of creationism if that information were optimized to simplify the expression of brain designs. So the size of the interpreter is irrelevant.

2. DNA is not just interpreted; it's compiled into cells. The DNA of a cell contains a complete program for making every peptide that is necessary to the cell from individual amino acids, and those peptides together construct all the other chemicals from a small number of simple molecules found in the environment. So the source code for the "interpreter", down to the hardware level, is actually already present in the DNA.


But you don't need to write the whole interpreter to understand how that small python program works.


Assuming you have the documentation, or assuming that you can extrapolate from a program in a language that you already know.

If one knows Python and is given a program in APL, then likely an insurmountable barrier has been reached. If you don't have the docs that describe the language, one can try to infer the language by running experiments on variations of the stored program. However, one needs access to the processor in order to run different experiments and get different results, to be able to understand how the programming language works.

We don't have the CPU in a form that we can experiment with ("brains in a vat"). We have a 50MB string in APL*2, a mostly unknown language for a mostly unknown processor.

The other part is that this is not a program but a meta-program -- meaning there are multiple levels of indirection. The DNA does not directly specify the brain, but instead specifies rules for components that would eventually arrive at an assembly (guided by a rich context of voluminous other inputs over an extended period of time) that constitutes a brain.


Hard to take this criticism seriously when it starts off with so many ad hominems.

I haven't read Kurzweil's specific claims, but I'd guess he's claiming we can simulate brain function by 2020. We can already simulate simple organisms and neural networks. You can already accomplish quite a bit of complex emergent behavior with those.

Simulating the entire brain would require an enormous amount of processing (though surely feasible some day, if not 10 years, how about 20), but most likely we'd make some trade-offs and sacrifices and still get a close approximation (like we do with virtually every simulation).

Of course simulating a brain and simulating a human are not the same thing. You can't really avoid having to simulate the entire body and its interactions with the environment.

And of course we wouldn't suddenly "understand" the brain just by simulating it. It would still be the same complex system, it's just we'd be able to inspect it more closely. Such a simulation would at least help us understand a good bit more about the brain's role in our cognition and actions.

And yeah, who knows, maybe in 20 years we could have some kick-ass AI in counterstrike.


Kurzweil is a brilliant scientist. Calling him a Deepak Chopra is just stupid. He's out of his depth is all.

Also, I wouldn't bet against him building an AI that for all intents and purposes appears to be conscious. Full-human simulation is another thing, but with that we are all out of our depth, and it's pretty likely that a conscious computer would be quite capable of understanding it better than us.


"Calling him a Deepak Chopra is just stupid"

That's not much of an argument. He plays on the standard things that all religions play on: the promise of some form of immortality, which instinctively plays on the fear of death; claims that the near future will be radically different than the present; etc. His extrapolations of exponential growth are not entirely unfounded but he applies the same logic to everything that suits his fancy without taking into account that science and technology have made large but halting steps forward throughout history, and the fundamentals of scientific insight and discovery haven't changed. He talks and sounds like an evangelical to me. No doubt he is a brilliant scientist but that doesn't warrant the cult of personality that has grown up around him, which I perceive as a dangerous thing contrary to the aims of science.


I agree that cults of personality are bad things. But dismissing someone as an evangelist and overrated isn't much of an argument either. It trivializes his very real and important accomplishments, and conveniently allows you to dismiss all of his ideas without looking at them on their individual merits. In short, ad hominem, and unwarranted.

Kurzweil has several functioning products that would have fallen into the mess you dismiss as nonsense two decades ago. Is he wrong on a lot of things? Yes. But so is any other engineer looking for things that haven't been done before. Even if Kurzweil has a spell where he does nothing of any merit for a decade (which has yet to happen) I'll still confidently say you're foolish to rule him out as a crank. He's earned his right to dream aloud.


Actually, the information needed to replicate the brain is not limited to the information encoded in DNA. Physics plays a great role in all of molecular biology, and from physical interactions arise cell-cell interactions, which then give rise to the complicated organs you have in the body. (And, of course, all of this arose during billions of years of evolution, which is encoded not only in the DNA but in the environment as well.)

So if you were to simulate the brain, modeling the DNA would be a tiny part of it. Encoding the deepest levels of physics (yes, quantum effects play many roles in DNA) and then having enough computational power to model the interactions of those particles in real time is a really big deal.


> The design of the brain is in the genome.

Given how much computation it takes to "decompress" the information about what a protein is built of into the 3D layout of that protein (vide folding@home), the statement that you could "decode" half of the genome into a working simulated system is bold, to say the least.

The trouble with simulating a human 1-to-1 is not how complex the human itself is, but how complex, bizarre, and computationally powerful the physical hardware is on which the program "be human" runs.


I too have my doubts about reverse engineering the brain. As I see it, regardless of whether the Singularity arises or not, this century will achieve a rate of progress incomparable to anything we have ever seen before. To get this in perspective, just consider that in the nineteenth century more technological breakthroughs were made than in all of the nine centuries preceding it. Then, in the first twenty years of the twentieth century, we saw more advancement than in all of the nineteenth century combined. In this century we will achieve 1000 times more than we achieved in the whole 20th century, which was itself a period of progress never before seen.

Whether we reverse engineer the brain or not, the merging of human and machine intelligence is an inevitability, because you only need to look at how attached we are to our iPhones and BlackBerrys to realise that we will ultimately be unable to resist moving increased processing capability directly inside the body. The consequence of that is massively increased capability whatever way you look at it.


Kurzweil is reported as equating 25MB of compressed DNA to 1 million lines of source code, which seems several times too low: if I compress the comment-stripped source of one of my own projects, it compresses down to 8 bytes per line of code as defined by sloccount. If we suppose compressed DNA has a similar real info rate to my gzip-compressed source, then 25MB corresponds to 3 million SLOC, not 1. If you include blank lines and comments, it's more like 5 million.
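A naive sketch of that kind of measurement, for anyone who wants to repeat it (the comment stripping here only handles '#'-style comments, and it just gzips whatever files you pass on the command line):

  import gzip, sys

  total_bytes, total_lines = 0, 0
  for path in sys.argv[1:]:
      with open(path, "rb") as f:
          lines = [l for l in f.read().splitlines()
                   if l.strip() and not l.lstrip().startswith(b"#")]
      total_bytes += len(gzip.compress(b"\n".join(lines)))
      total_lines += len(lines)

  print(total_bytes / max(total_lines, 1), "compressed bytes per line")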

For another take, http://www.dwheeler.com/essays/linux-kernel-cost.html puts the Linux kernel source at 4 million lines. Can you compile a 25MB kernel? Compressed? (How much of it is 'introns', anyway? This might not be the best example.)

(Not going to touch the argument about applicability. It does seem to me that whenever Kurzweil glosses over details, the closer look always appears less utopian. There are other writers about these ideas who I can read without having to check every assertion, like Anders Sandberg.)


Myers is wrong, because he overcomplicates. Kurzweil is also wrong, because he oversimplifies (I'm taking Myers's version of the argument at face value; I actually suppose the argument was more sophisticated than that).

Firstly where Myers is wrong: the human brain ultimately comes forth from a bunch of information roughly equal to 1 million lines of code. If you could reproduce those 1 million lines and set them loose, allowing them to construct a human being (nothing else: what else could they construct?) and letting that human being live in our world, you would have 'created' intelligence. It's as simple as that and attacking that abstraction is completely the wrong approach to pointing out the problem with Kurzweil's argument.

So, then the two points where Kurzweil is wrong:

1) It's not just any million lines of code. It has to be exactly the million lines of code in our genome, give or take some bits. Considering the enormously complex interactions between these bits of code, this is worse than reverse engineering the largest codebase of spaghetti code you could possibly imagine. The simple example that Myers gives is enough to show this.

2) The million lines of code cannot just be executed anywhere. It encodes for the construction of a human from raw material and the subsequent operation of that human. Give it different materials or a different living environment and something entirely different, in most variations nothing remotely capable of 'life', appears. And even if you could make it build something from electronic components: slight differences in the perceptive systems can create huge differences in the brain and the concepts in the brain. A machine with a finite pixelized array of visual light receptors would build a completely different conceptual model of the world. Reverse engineering the genome is not only extremely hard: it is very unlikely to produce the result you want.


Myers isn't trying to argue that the million lines of code aren't there or aren't important. He's trying to argue that those million lines of code have to run on hardware which is poorly understood at best, and that trying to reverse-engineer the hardware is several orders of magnitude more difficult than trying to understand the original programming. The hardware components don't follow the law of superposition, either, so we can't just look at each element in isolation. The entire thing has to be understood.

Imagine going back to ancient Roman/Greek/Etruscan/&c times and handing them a ream of paper filled with the hexadecimal representation of an x86 application compiled for a Windows environment, and then showed them what it looked like when running. "Hey, look, now you can play videos and music!" Now imagine it was several orders of magnitude more difficult than that, and you're beginning to get the idea.

"If you could reproduce those 1 million lines and set them loose, allowing them to construct a human being..." The exact point he was making was that there's a lot of handwaved complexity in this statement, and that the abstraction "understand human being program, run code" is abstract to the point where it no longer accurately reflects the reality of the situation.


  He's trying to argue that those million lines of code have
  to run on hardware which is poorly understood at best
He may be trying to argue that, but that isn't the central thesis of what he is arguing. His central thesis is that

  [The brain's] design is not encoded in the genome
  
and that's just patently false. There's only one single blueprint for the brain and that's the genome. That the ways in which the blueprint is being read, interpreted and carried out are not in the genome does not detract from the fact that the genome is the blueprint. His arguments support the thesis that a blueprint is not enough. The objections to Kurzweil that I list are a summary/rephrasing of the arguments Myers provides. But he starts out by saying "No, that isn't the blueprint" and none of the arguments support that thesis.


The main thing Kurzweil has missed is that even though we have something like the 'source code' for a human brain, we don't have the compiler.

If you're taking the (pretty weak) 'source code' analogy further, the compiler is... a host human embryonic cell, running in a mother's womb.

So thinking about the problem that way is a non-starter for obvious chicken/egg reasons.


Just from the description in Wikipedia, this is how I read Ray.

He is excellent in marketing, in sales, in networking & as an overall promoter. All qualities we hackers need to cultivate as long as we control ourselves & avoid pitching any vapor-ware.

Some examples of him promoting his product or himself:

1. At 17, appeared on the CBS television program I've Got a Secret, showing off software that composed piano music.
2. Won first prize at the International Science Fair for the same.
3. Recognized by the Westinghouse Talent Search.
4. Personally congratulated by President Lyndon B. Johnson during a White House ceremony.

... Incredibly, it goes on and on. Very fascinating, if read with the heart of an entrepreneur. His whole life seems to be a chain of fantastic promotions.


The article has a point that Ray Kurzweil's claims are nonsense, but let's not focus on his claims about the genome-to-protein-to-brain-cell mechanism. What's rather more important is how the brain does what it does and how that compares with today's computers.

There are 50 to 100 billion neurons in the human brain, and the power of the brain comes from the fact that you can create many orders of magnitude more neural circuit combinations with those neurons. Each cell may be part of many circuits, and learning involves the forming of these circuits. Now, let's compare that phenomenal power with the power of the computer. It becomes especially laughable when you say it's 50MB worth of information.
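To put rough numbers on that gap (the neuron and synapse counts below are the usual ballpark figures, and the one-byte-per-synapse assumption is purely illustrative), here's a quick back-of-the-envelope sketch:

  # Order-of-magnitude comparison: brain connectivity vs. a ~50MB "spec".
  # Assumed figures: ~1e11 neurons, ~1e4 synapses per neuron (ballpark).
  neurons = 1e11
  synapses_per_neuron = 1e4
  synapses = neurons * synapses_per_neuron         # ~1e15 connections

  spec_bytes = 50e6                                # the ~50MB genome figure
  state_bytes = synapses * 1                       # even 1 byte per synapse
  print(f"synapses: {synapses:.0e}")               # ~1e+15
  print(f"runtime state vs. spec: {state_bytes / spec_bytes:.0e}x")  # ~2e+07x

The point isn't that the byte counts are exact, just that a compact "program" unfolds into vastly more runtime state than the program itself contains.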


Incidentally, Larry Page used the exact same argument back in 2007:

My theory is that, if you look in your own programming, your DNA, it’s about 600 Megabytes compressed… so it’s smaller than any modern operating system. Smaller than Linux, or Windows, or anything like that, your whole operating system. That includes booting up your brain, right, by definition. And so, your program algorithms probably aren’t that complicated, it’s probably more about the overall computation. That’s my guess.

(http://pimm.wordpress.com/2007/02/20/googles-larry-page-at-t...)
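For what it's worth, here's roughly where numbers like "600 Megabytes" and the ~50MB figure quoted elsewhere in this thread come from. The genome length below is the usual ballpark figure and the compression ratio is a pure assumption for illustration, so treat this as a sketch, not a measurement:

  # Back-of-the-envelope: information content of the human genome.
  # Assumptions: ~3.2 billion base pairs, 2 bits per base, and heavy
  # redundancy that lossless compression could exploit.
  base_pairs = 3.2e9
  bits_per_base = 2                                # A, C, G, T -> 2 bits
  raw_bytes = base_pairs * bits_per_base / 8
  print(f"raw: {raw_bytes / 1e6:.0f} MB")          # ~800 MB uncompressed

  assumed_compression_ratio = 16                   # purely illustrative
  compressed = raw_bytes / assumed_compression_ratio
  print(f"compressed: {compressed / 1e6:.0f} MB")  # ~50 MB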


If you want to read some hard science fiction about how AI might be developed and how an AI might think, I'd highly recommend the in-progress series Life Artificial: http://lifeartificial.com/


This is like saying that the entire universe can be simulated by using a fundamental theory of physics as source code.

It's (to some extent) true, and potentially interesting philosophically -- but completely meaningless from an engineering perspective.



Hmmm. Sejnowski says he agrees with Kurzweil's assessment that about a million lines of code may be enough to simulate the human brain. A million lines of code is hardly enough to run a car, for goodness sake.


I don't find these criticisms convincing. We haven't solved the protein folding problem? So solve it. Is there some reason to believe it won't ever be solved? If you had a sufficiently accurate simulation of a cell or organism (which we don't, but we might someday), you could do experiments on it much cheaper and faster than in a lab.

I do agree that there's no freakin' way this will be done in ten years, or in the 62-year-old Ray Kurzweil's lifetime, or mine.


There is a Nobel-prize-winning theory that shows that you won't ever truly solve this problem by studying the underlying atomic physics:

http://en.wikipedia.org/wiki/Ilya_Prigogine#Dissipative_stru...


I fail to see the connection.

No, a computer simulation can never predict exactly what a physical system will do, even in the case of a single particle, due to quantum uncertainty. But so what? If I took out one of your neurons and replaced it with an identical neuron, that new neuron wouldn't do exactly the same thing, again due to quantum uncertainty; nonetheless, its long term behavior would be essentially identical and you, as a person, would be no different.

That is, neurons and brains are classical objects, essentially immune to the underlying uncertainty they're built on.


That is, neurons and brains are classical objects, essentially immune to the underlying uncertainty they're built on.

Absolutely not. Prigogine's work demonstrates that systems far from thermodynamic equilibrium (of which all living systems are an example) are intractably non-deterministic. The issue isn't the underlying quantum uncertainties, it's the macro-uncertainties of the higher-level system.

In other words, you won't predict the behavior of a neuron by modeling the underlying physics. You have to learn to model the macro behavior in a statistical way.


I didn't mean to suggest that quantum effects were the only source of uncertainty.

If I've got a cubic meter of pure water, I can slosh it around and observe all sorts of interesting effects. I can then model that cubic meter of water with another cubic meter. That second cube won't behave identically. A cubic meter of water has a very high Reynolds number and can have considerable chaotic turbulence (chaotic in the classical sense, not quantum). The exact motion of the water simply won't be the same, no matter how precisely you mimic the 'input' into the system (forced motion of the cube, for instance).

Nonetheless, the second cube is a fantastic way to understand the first cube, and in some way is qualitatively identical, even when the specific motions aren't replicated exactly. This is exactly the same for computational simulations of the fluid. Of course they can't predict chaotic behavior, but for all intents and purposes they can be just as useful as having that second cube of water.

Likewise, a computer simulation of a neuron may never exactly predict what a real neuron will do. Just like one neuron can never exactly predict what another neuron will do. Just like one bucket of water can never exactly mimic another. But who cares?
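To make that concrete with a toy example (the logistic map below is just a stand-in for "some chaotic system"; nothing about it is specific to water or neurons): two runs that differ by one part in a million diverge completely in their details, yet their long-run statistics come out essentially the same.

  # Two nearly identical chaotic systems: trajectories diverge,
  # long-run statistics agree. Logistic map x -> r*x*(1-x) in the
  # chaotic regime.
  r = 3.9

  def trajectory(x, steps=100_000):
      xs = []
      for _ in range(steps):
          x = r * x * (1 - x)
          xs.append(x)
      return xs

  a = trajectory(0.200000)
  b = trajectory(0.200001)                         # same system, tiny perturbation

  print("step 100:", a[100], b[100])               # completely different values
  mean = lambda xs: sum(xs) / len(xs)
  print("means:", mean(a), mean(b))                # essentially the same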


You're comparing two very different things here though. One bucket of water is substantially like another just as one brain is like another. A brain and a computer model of a brain are two very, very different things, with completely different meta-properties.


Yes, but what I'm pointing out is that what you are complaining computers fail to have, other physical systems also fail to have! Even a second bucket of water can't predict the first bucket of water! Thus, the fact that a computer can't either is sort of insubstantial to the question of a simulation's utility, no?


The essential point is that you can't model the behavior of a non-equilibrium system by modeling its constituent elements, which is what Kurzweil claims.

That no two non-equilibrium systems are exactly alike seems to me a different question.


I do computational fluid dynamics for a living. I assure you, we model non-equilibrium systems all the time.


You should have the math chops to read the papers then, which make the point better than I can.


I'm ignorant of this theory, but reading the link I don't see that it shows that protein folding can't be modeled.


You'd have to read the papers cited below to get the meat of the theory but the gist of it is that you can't model proteins by looking one level down, i.e. by studying only the properties of the atoms that make them up. Proteins have emergent behaviors not entirely predictable from the behaviors of their constituent atoms.

Many still believe in the reductionist idea that a perfect understanding of physics would lead to a perfect understanding of chemistry and then biology. This is not the case.


Proteins have emergent behaviors not entirely predictable from the behaviors of their constituent atoms.

Define "emergent". I can believe that a protein's behavior is extremely sensitive to the initial configuration of its atoms and that as a practical matter we can't (currently?) get detailed enough measurements to predict exactly what's going to happen. But without exceptionally compelling evidence I'm not going to believe that there are different physical laws for proteins than for their atoms.


I suggest you read Prigogine's book if you're really interested but his work demonstrates that the problem isn't having detailed enough information about state, it's the irreversibility of time, which makes physics fundamentally non-deterministic.


What has irreversibility got to do with it? There are cellular automata that are irreversible but obviously deterministic.


It has to do with entropy and thermodynamics and the fact that living systems are so far removed from thermodynamic equilibrium that they generate unpredictable, emergent behaviors.

The math behind it is pretty gnarly, but if you want to understand it I recommend his book: http://www.amazon.com/End-Certainty-Ilya-Prigogine/dp/068483... .

A CA is not a good model for this.


I'm out of my depth here, but if protein folding is intractable like the halting problem or the traveling salesman problem, as you're making it seem, I find it odd that I haven't read that elsewhere.


The physics can be modeled, but according to this article (http://techglimpse.com/index.php/ibms-blue-gene-exploring-pr...) a 1 petaflops machine is estimated to take 3 years to crunch through 100 microseconds of simulated time. And that's just for a single protein interacting with itself and a bit of water.
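Taking the article's figures at face value (3 years of wall-clock time for 100 microseconds of simulated time), the slowdown relative to real time works out to roughly a factor of a trillion:

  # Rough ratio of wall-clock time to simulated time for the cited run.
  SECONDS_PER_YEAR = 3.15e7

  wall_clock = 3 * SECONDS_PER_YEAR       # ~9.5e7 seconds of compute
  simulated = 100e-6                      # 100 microseconds of protein time

  slowdown = wall_clock / simulated
  print(f"slowdown: {slowdown:.1e}x")                                      # ~9.5e+11
  print(f"years per simulated second: {slowdown / SECONDS_PER_YEAR:.1e}")  # ~3e+04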


That might just mean they haven't found the right algorithm yet.


Most people suspect it to be NP-hard or NP-complete.


Protein folding is suspected to be an NP-complete problem <http://www.liebertonline.com/doi/abs/10.1089/cmb.1998.5.27 >, so it seems to me that having reason to believe it will someday be solved equals having reason to believe that someday NP-complete problems can be computed in polynomial time. Alternatively, you might hope for a domain-specific approximative algorithm, but this seems tricky, too, since mis-folded proteins can end up as really bad stuff, e.g. prions <http://en.wikipedia.org/wiki/Prion >.
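For a concrete sense of why the HP model blows up, here's a toy brute-force folder for the 2D lattice version (my own sketch, not anything from the cited paper): it enumerates self-avoiding walks on a square lattice and scores non-consecutive H-H contacts, and the number of walks it has to visit grows exponentially with chain length.

  # Toy brute-force 2D HP lattice folder: lay the chain out as a
  # self-avoiding walk, score H-H contacts between residues that are
  # not adjacent in the chain, keep the best score found.
  MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

  def best_fold(seq):
      best = [0]

      def score(path):
          s, pos = 0, set(path)
          for i, p in enumerate(path):
              if seq[i] != 'H':
                  continue
              for dx, dy in MOVES:
                  q = (p[0] + dx, p[1] + dy)
                  j = path.index(q) if q in pos else -1
                  if j > i + 1 and seq[j] == 'H':  # count each contact once
                      s += 1
          return s

      def extend(path):
          if len(path) == len(seq):
              best[0] = max(best[0], score(path))
              return
          x, y = path[-1]
          for dx, dy in MOVES:
              nxt = (x + dx, y + dy)
              if nxt not in path:                  # keep the walk self-avoiding
                  extend(path + [nxt])

      extend([(0, 0), (1, 0)])                     # fix the first bond (symmetry)
      return best[0]

  print(best_fold("HPHPPHHPHH"))                   # fine for short chains only

Exhaustive search like this is hopeless beyond a few dozen residues; heuristics do much better in practice, but the decision problem for this model is what the linked paper shows to be NP-complete.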


Saying it's NP-complete doesn't prove we can't solve it well enough for simulation. As a physical problem becomes "pathologically NP", Nature becomes non-deterministic too; it seems unlikely that Nature will make extensive use of a protein that doesn't reliably fold in a certain way, or some small constrained set of ways.

See http://www.scottaaronson.com/papers/npcomplete.pdf


Not that I wouldn't like to be as optimistic, too, but Nature does seem to have made enough use of unreliably folding proteins to have come up with BSE, Parkinson's disease, and some others <http://en.wikipedia.org/wiki/Protein_folding#Incorrect_prote... >.

I'm a big fan of Scott Aaronson, too, btw :-) Here's a pic of him demoing the soap bubbles experiment he refers to in that paper you linked to <http://www.scottaaronson.com/soapbubble.jpg >.


I thought of mentioning that, actually, but figured it would be extraneous. However, if they are all indeed pathological then perhaps we can do our simulated humans a favor and not simulate the pathological path.

Still, as discussed in another comment I made, I seriously, seriously doubt that we will ever simulate any sort of intelligence by raw physical simulation. It just isn't feasible with any realistic computational technique.


The article says that the HP model is NP-complete. But proteins fold very quickly in water. I may be misunderstanding things here, but if protein folding (as opposed to a particular model of protein folding) is NP-complete, shouldn't it be slow in real life too?

I recall reading an interview with someone who founded a company that builds computers for simulating biological systems in silico who thought that there were much better algorithms waiting to be discovered because nature can do it quickly. I can't find it now.


>We haven't solved the protein folding problem? So solve it.

It's not a trivial problem. There are lots of bright minds working on it. If you understood the difficulty behind it, you wouldn't make such ignorant statements.


Where did I say it was a trivial problem? I was assuming (perhaps incorrectly according to other commenters) that it's a tractable problem that may someday be solved, like putting a man on the moon was.


> We haven't solved the protein folding problem? So solve it.

You sir, are a genius. To make up for my previous lack of initiative, I will do so immediately. Please arrange for the world to be ready for my announcement of the solution at noon tomorrow.


Don't even solve it. Just put it up for bid on elance, along with the P vs. NP problem.


I'm not a genius, and you probably aren't either, but somebody probably is.

As far as I know there is no fundamental reason to assume that this problem won't be solved. It's not the halting problem.


PZ Myers does not understand Ray Kurzweil.


This is what I got from it too. Ray is not an idiot.


Ultimately, the criticism of hard AI that sticks is that its enthusiasts are using a faulty "brain as computer" metaphor as the basis for their theories. The brain and digital computers do NOT work the same way (the brain is not digital), and assuming they do, and that therefore clever-enough computer models can replicate the brain, seems to put very intelligent / brilliant CS people at risk of banging their heads against a wall for decades.

More generally, it's fun to imagine things like this, but it's also good to recognize real-world limits: we live in a finite world, and living 700 years is not at all likely (or desirable, IMHO).


I heard an interview with him, and it wasn't very insightful. He says that everything will get smaller and faster with varying detail or on specific subjects. That's not very insightful.


I've heard him give the same talk at several different conferences, several of them unrelated (entrepreneurship conferences, Blackberry conferences, etc.): talk of exponential progress, Singularity U, mind uploading, and his belief that he'll survive to be 700 years old. He's a bag of fantastic claims and an excellent speaker, and his talk embodies a lot of the best hopes of technology of the present age, as well as our collective insecurity that we've come so far and yet seem to have improved on the world so little.

But it's grating to see that he's getting so much media attention for such blatant disregard of the human condition. No consideration of what a post-sentient-AI world would accomplish. No regard for what happens when people live to be 700 years old and population growth doesn't slow. His ideas are like genetically engineering society with no regard for the collateral damage it'd cause to the societal environment. I don't dislike Kurzweil for being an optimist, I dislike him for being arrogant.


> and population growth doesn't slow.

Population growth is slowing. Dramatically. There's tons of evidence that as women are educated, they will have fewer children. And women are starting to get better educations all around the world. There's absolutely no doubt that population growth will slow (slash, is already slowing down) a bunch in the coming decades.

Second, I'm confused about the 700 number. Where does that come from? Does he think that in the next 30 years, we will be able to extend life so much that he can keep on living? Why does that stop at 700?! Doesn't he think that some time in the next 700 years we'd be able to find a way to live longer than 700?

Frankly, I doubt there's much difference between finding a way to live to 700 and finding a way to live until the end of time.


Maybe he's planning to start bull fighting at the age of 690 and base jumping at the age of 695.


As you decrease your chances of dying from old age, disease, etc., then injury will become the limiting factor on longevity. Even someone who never ages and never becomes sick will have trouble surviving a fall off a cliff, or being shot in the head.
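To put a number on that (the annual accident-mortality figure below is a made-up round number, not an actuarial statistic): if accidents were the only cause of death, lifespan would follow a geometric distribution, and even a fairly low accident rate caps it well short of "the end of time."

  # Lifespan if accidents were the only cause of death.
  # The per-year probability is an illustrative assumption.
  import math

  p_accident_per_year = 1 / 2000            # assume 0.05% chance per year

  expected = 1 / p_accident_per_year
  median = math.log(0.5) / math.log(1 - p_accident_per_year)

  print(f"expected lifespan: {expected:.0f} years")   # ~2000 years
  print(f"median lifespan:   {median:.0f} years")     # ~1386 years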


He can't be very insightful. To be that, he would need to know a lot more about human physiology, and if he knew that, he could not be as optimistic about his own chances of living to see the singularity. That is why he cannot be truly insightful.


We eventually learned to fly, too... but not by sticking feathers up our butts, as Kurzweil is advocating.


Everyone is an expert on AI and the human brain today I see.


Synopsis: Kurzweil is underestimating the brain. Myers is underestimating technology.


Kurzweil has written enough by now that anyone hearing his interviews should be able to put his ideas in the context of his predictions about the Singularity. I have always thought Kurzweil hedges his bets quite well, and he makes it quite clear that the capabilities about which he writes depend on future gains in computing power. Kurzweil is only human, of course, and it is well known that he is trying to extend his own life so that he will be here for the Singularity. He very much wants the things he predicts to happen in his lifetime, but I don't hear or read him as asserting that that will absolutely be the case.

Myers' unnecessary "refutation" (and insulting) of Kurzweil reminds me of the reactions to the Segway. Kamen predicted that it would revolutionize transportation; naysayers then piled on, ridiculing him and saying that the Segway wouldn't work for commuting long distances on American super highways. Well, no, but that's not what he meant by revolutionizing transportation.

I'm sure Myers is probably good at what he does, and he is very much concerned with the now and the practical. We need people like him, and we also need visionaries like Kurzweil and Kamen who look ahead decades or centuries.



