I see a big parallel between the predictions for advances in neuroscience and all the predictions that were made prior to the sequencing of the human genome (the author touches on this a bit too). Lots of smart scientists really believed that once the human genome was sequenced, we would have the keys to the biological kingdom. What has actually happened is that we have discovered the system is probably an order of magnitude more complex than previously thought. Knowing the sequence of a gene turns out to be important, but a pretty minor factor in explaining its function. Plus we are learning that all sorts of simple rules we thought were true don't always hold.
I suspect a similar thing is playing out in neuroscience. As we peel back the layers of the onion, ever more complexity will be revealed. The things Ray Kurzweil predicts may well come true. He is a brilliant guy. But the timetable is very optimistic.
The march of biological progress is very slow, in part because all the experimentation involves living things that grow, die, get contaminated, run away, don't show up for appointments, get high, etc. Lots of people from other scientific disciplines, especially engineering-related ones, underestimate just how long even the simplest biological experiments can take.
"Lots of smart scientists really believed that once the human genome was sequenced, we would have the keys to the biological kingdom."
Here's my (a computer scientist's) view on the matter:
Imagine that you have a relatively complex computer system written with object oriented principles. Now, imagine that you are looking at the binary representation of this system and trying to make sense out of the whole thing. Also, imagine that you have no knowledge of how computer systems work and how the layers between the computer program, programming language, possibly virtual machine, and native code work.
There are layers involved between these objects and their binary representation. I imagine that there are also layers between our genome (analogy to binary code) and the leveraged representation of ourselves (analogy to object oriented system).
I think that this is why it is hard to make much sense out of the genome, even though the human genome was sequenced.
I also imagine that this is why it is hard to make sense out of the brain by looking at the brain directly. An analogy would be that we are again looking at binary representation of information.
It would be far more useful to figure out how this stuff works. I am not sure how this is done at this time.
Your example is a good one, but as others point out, the computer has been designed logically. Biology hasn't been. And even then, there's just really oddball crap that comes out of left field. I'll give you a good example (which admittedly may require a few trips to wikipedia depending on your biology background):
The codon code used for translating DNA/RNA into protein is well established. It's a degenerate three-base code, meaning that there are several three-base DNA sequences representing a given amino acid [1]. This code is very well understood. If your DNA/RNA sequence has any of the three-base combos for alanine, your protein gets an alanine in that position. It follows from this that different DNA sequences can code for exactly the same amino acid sequence in a protein. However, proteins with the same amino acid sequence are chemically and biologically identical (ignoring things like post-translational modification).
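To make the degeneracy concrete, here's a toy sketch (a hand-written table covering just a few amino acids, not a real codon table or bioinformatics library): two different DNA sequences translate to the identical peptide.

    # Toy illustration of codon degeneracy: several DNA codons map to the same
    # amino acid, so different DNA sequences can encode identical proteins.
    CODON_TABLE = {
        "GCT": "Ala", "GCC": "Ala", "GCA": "Ala", "GCG": "Ala",  # four codons for alanine
        "AAA": "Lys", "AAG": "Lys",                              # two codons for lysine
        "TGG": "Trp",                                            # tryptophan has only one
    }

    def translate(dna):
        """Translate a DNA coding sequence into a list of amino acids."""
        return [CODON_TABLE[dna[i:i+3]] for i in range(0, len(dna), 3)]

    seq_a = "GCTAAATGG"   # Ala-Lys-Trp, using one set of codons
    seq_b = "GCGAAGTGG"   # synonymous codons, same protein
    print(translate(seq_a))                      # ['Ala', 'Lys', 'Trp']
    print(translate(seq_b))                      # ['Ala', 'Lys', 'Trp']
    print(translate(seq_a) == translate(seq_b))  # True: identical amino acid sequence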
A few years ago, I read a paper [2] where the group hypothesized that, in a specific case, a rare codon for an amino acid in a specific protein caused the cellular machinery to stall at that position. They suggested that in the intervening time the protein misfolded into a different 3D shape. The resulting protein therefore had different chemical properties despite having an identical amino acid sequence, basically shredding what is often known as the "central dogma" of molecular biology.
Now, this specific example probably needs to be confirmed, and might not be very frequent. But it makes total sense when you understand how all the pieces work. However, this explanation would be very low on most biologists' lists of reasons why a certain protein isn't functioning properly. In fact, when people do genetic analysis looking for diseases, they routinely throw out all synonymous changes before doing the stats. It makes you wonder how often we miss this when looking for disease genes.
My larger point is that lots of biological science is a collection of edge cases. We know so little about the systems we're studying and have such crude tools to investigate them, that we get blindsided by things sitting in plain sight all the time.
"Your example is a good one, but as others point out, the computer has been designed logically. Biology hasn't been."
I'll do a little play on words here. Are you saying that biology isn't logical? Does it defy the laws of the universe, physics, and the axioms of math? Certainly not. I think what you are saying is that it isn't the same as a silicon chip and follows different rules. We just don't know enough about biology to peel away the layers, but there are definitely layers. There has to be at least a one-to-one mapping between the genome and a living being, but I would bet that the layers are far more complicated.
I read a study a while back which monitored the development of different areas of a mouse's brain. The study concluded that a certain part of the brain gets more developed as the mouse tries to run through a maze versus a mouse that does nothing at all. I gather that the study was trying to point out which area of the brain is responsible for memory, and perhaps a certain type of memory.
I fail to see how this study could tell us something meaningful regarding our ability to take information from our senses and store it into our memory. How could we understand more about how this process works? Are we that far away from understanding the inner workings of the brain that we need to do studies like this? If so, then I tend to believe that we are far from the singularity.
"My larger point is that lots of biological science is a collection of edge cases. We know so little about the systems we're studying and have such crude tools to investigate them, that we get blindsided by things sitting in plain sight all the time."
I understand. From my previous example, I would probably come up with edge cases too if I was trying to look at binary code for the first time and trying to figure out how a complex computer system works. Perhaps I would poke the system and monitor which part of the file ended up with more 1's and 0's.
"I'll do a little play on words here. Are you saying that biology isn't logical? Does it defy the laws of the universe, physics, and the axioms in math?"
No, he's saying that biology isn't human-designed. Human designs have several characteristics that stem from our limited cognition and limited ability to hold things in our heads at one time. Our designs tend to be highly modular, with distinct parts interacting in distinct ways, and with the failure of one part generally capable of taking the whole system down. There will generally be some clear separation of layers, with each layer having a distinct and clear responsibility. There is generally one way to accomplish something, or failing that, some very small set of ways. Our designs have to be this way, we can't deal with unabstracted systems, even at relatively small sizes.
To the extent that you immediately think of some piece of software that violates all of this, you also will notice that software is also pretty much end-of-lifed. We can't build in any other way for very long.
Biology doesn't work that way. Yes, there are parts you can identify, but they aren't like human-designed parts, either. They freely run all the way across the "abstraction hierarchy", such as it is. If evolution made it so that this one function that you think ought to be done by the kidneys is actually done by blood vessels, so be it. Even drawing lines around "functions" is quite difficult when you get down to it, the deeper you go and the more precise you try to make the line, the more of the body you end up implicating. It's all just jumbled together, with redundant pathways for everything.
We know about the "functions" of things that we know about precisely because we are looking for them in the first place. We have a cognitive bias (in the machine learning sense of a bias induced by what we are capable of even expressing in our heads in the first place, not the usual English sense of bias) for these things, so that's what we find. But whereas in human designs these functions are real (we made them that way), in biology they are only approximations. To the extent you look at a system in the human body and see something with clear, well-defined parts, that's because you can't really perceive the full complexity of what's actually there, not because it isn't complex and quite tangled.
And I gotta tell you, having my brain scanned and converted into an approximation of its original state that will use an approximation of how brains work is a bit of an intimidating prospect.
Rather than the term 'logical' I would think of 'reasoned': human-made systems in general tend to be designed through reasoned, deterministic processes. This means that from any point in the system, no matter how large or complex, you should be able to logically determine what the adjacent points in the system are; from an endpoint you can backtrack your way to the origin.
Biological systems are 'designed' through stochastic processes, which means that from any point in the system we can only make a probabilistic guess as to what points are adjacent to it. This works well for solving small problems, like predicting the next word in a sentence (language of course being a natural, not human-designed, process), but it requires us to have a large corpus of example data, and it doesn't scale well to trying to extract an entire book from one word.
So trying to fully understand large biological/natural systems is much harder than trying to decode an equally complex human created system.
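To make the "predicting the next word" example above concrete, here's a minimal bigram-counting sketch (the toy corpus is mine, purely for illustration): it can guess a plausible next word from local counts, but it obviously can't reconstruct a book from one word.

    from collections import Counter, defaultdict

    # Minimal bigram "next word" predictor: count which word follows which,
    # then pick the most frequent follower. Needs lots of text to be any good.
    corpus = ("the cat sat on the mat the cat ate the fish "
              "the dog sat on the rug").split()

    following = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        following[current_word][next_word] += 1

    def predict_next(word):
        """Return the most frequently observed next word (or None if unseen)."""
        counts = following.get(word)
        return counts.most_common(1)[0][0] if counts else None

    print(predict_next("the"))   # 'cat' -- seen most often after 'the' in this corpus
    print(predict_next("sat"))   # 'on'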
Since I can tell this topic is interesting to you, if you haven't read it, I highly recommend "Brain Rules" by John Medina: http://www.amazon.com/dp/0979777747/
It's a fantastic read that I think would do a much better job of discussing this topic, especially in the realm of human cognition than I can.
That has built up only by randomly merging branches, occasionally interpreting /dev/urandom as a patch, and keeping the branches that pass the most tests. Oh, and the test suite also occasionally interprets /dev/urandom as a patch.
This is a good analogy. Unfortunately, however, things could be even worse. At least in computing there is some logical design to be discovered. Biology isn't constrained by this. It might well be layer after layer of spaghetti code.
If our DNA was designed by the same process, it almost certainly looks the same way. There should be no abstraction beyond the strictly functional. If it's comprehensible at all, that would be a miracle.
The brain might not have a "design", but its physical structure seems mostly hierarchical. Even "layer after layer of spaghetti code" has abstraction (layers), no matter how leaky.
A frightening analogy might be the Win32 API. So many applications rely on the bugs and side effects of the Win32 API's internal implementation that those behaviors become part of the implicit API contract that Microsoft must maintain.
He would be correct if creation of AI depended on a thorough understanding of neuroscience. But I hope we needn't wait that long.
It's the old "Birds fly. To fly, man must fully understand bird flight." argument. Yet today we still don't completely understand bird flight but planes _do_ fly.
The analogy is not complete: we have yet to find the "air", the "turbulence", a "Bernoulli principle", etc. of intelligence. That is to be determined. But this approach is the only reasonable one.
As the author implies, waiting for neuroscience is like waiting for Godot.
Exactly, we've had airplanes for a century but working ornithopters are still something of a black art. Like so many things in engineering, it is easier, and better, not to pay naturally occurring phenomena undue attention. We are capable of engineering better.
The lack of AI progress in the last 30 years is not a good sign. You're also ignoring things like the materials that have come out of studying the pads on geckos' feet.
There has not been a lack of AI progress in the past 30 years, but rather some sort of 'No True Scotsman'-esque raised-expectations phenomenon. Natural language processing, computer vision, etc. have seen dramatic improvements in recent years, but every time there is an improvement people say "well that's just standard stuff, not real AI". As long as a technology is real, people seem weirdly unwilling to accept that it is also an example of AI. The "real stuff" seems to include "fictional" as an integral part of its definition.
I'm specifically talking about general AI. I don't think we're any closer to that than when the problem was first posed. Although I'm willing to change my mind if you make a good argument for it.
I guess that's a matter of definition. If humans are the only example of "intelligence" that we know of, then it seems natural that artificial intelligence would concern emulating humans.
I'm sure there are a number of bots that are able to fool people a lot of the time and many have been around for a long time, but that isn't what people think of when you say AI.
Exactly, and he's talking about the human practice of neuroscience. When we manage to build a sufficiently advanced AI we can set it to work on these types of problems, that's what's exciting to me.
Not interested in arguing about his timetable, but the example of DNA sequencing only affording a linear increase in understanding is bogus, and he ought to know that. It has significantly accelerated genetics research by making mapping a matter of a browser search. As an example, take the fly lines developed by Gerry Rubin et al., which can be manipulated to express any reporter gene in any genetically defined brain locus. That would have been completely infeasible prior to complete genomic sequencing of the fly.
The OP asks reasonable technical questions about medical nanorobots. I'm not going to defend Kurzweil, but some less-sloppy thinkers have written about this kind of stuff, like Merkle, Freitas, and Drexler. E.g. http://www.merkle.com/cryo/techFeas.html and http://www.nanomedicine.com/NMIIA/15.3.6.5.htm
They do tackle questions like how do you power these things; I wish he'd read and criticize them instead.
A 7-micron-long medical nanorobot sounds pretty damned big to me, btw -- in _Nanosystems_ Drexler fits a 32-bit CPU in a 400nm cube, less than 1/300 of the volume if we're talking about a 1-micron-radius cylinder.
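For what it's worth, the ratio is easy to check (treating the 7-micron robot as a cylinder with a 1-micron radius, which is my reading of the figures rather than a spec from either source):

    import math

    # Rough volume comparison: a 7 um long, 1 um radius cylindrical nanorobot
    # versus Drexler's 400 nm cube for a 32-bit CPU (from _Nanosystems_).
    cylinder_volume = math.pi * (1.0 ** 2) * 7.0   # ~22 cubic microns
    cpu_cube_volume = 0.4 ** 3                     # 0.064 cubic microns

    print(cylinder_volume / cpu_cube_volume)       # ~344: the CPU is < 1/300 of the robot's volume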
This article is very similar to the ones biologists were publishing in the mid-'80s, when Kurzweil predicted the mapping of the human genome within 15 years. It's interesting how exponential progress is counter-intuitive even for those who have been experiencing it in their fields for years.
I always thought that was Kurzweil's main and strongest point. We tend to predict linearly when a lot of progress appears to happen exponentially. The rest are embellishments.
The main call to action should be how to protect, organize, and invest in ourselves given possible developments from the above.
My big problem with Kurzweil's singularity is the massive handwaving he does between 'computers are getting exponentially faster' and 'AI will arise'.
This depends on the assumption that 'intelligence' (and nobody can really agree on what that means, which is a bad start) is representable in algorithmic form. Maybe it is, maybe it isn't, but the lack of progress in hard AI in the last 30 years isn't a good sign.
There's never any progress in AI, because once we figure out how to do something, we stop calling it "AI".
In the last 30 years, computers have won at chess, won at Jeopardy, learned to recognize spam with better than 99.5% accuracy, learned to recognize faces with better than 95% accuracy, achieved semi-readable automatic translation, figured out what movies I should add to my Netflix queue, and started to recognize speech. We've seen huge advances in computer vision and statistical natural language processing, and we're seeing a renaissance in machine learning. Most of this stuff was considered "hard AI" as recently as 1992, but the goalposts have moved.
And if intelligence can't be represented in algorithmic form, then what's the brain doing? Even if we have immaterial souls that don't obey the laws of physics, why do some brain lesions cause weirdly specific impairments to our thought process? A huge chunk of our intelligence is clearly subject to the laws of physics, and therefore can be wedged somewhere into the computational complexity hierarchy.
Well, maybe you went to different classes than I did, but none of those things you mentioned would ever have been considered hard AI as I was taught it.
Hard AI == generalised problem solving (or as wikipedia puts it, 'the intelligence of a machine that can successfully perform any intellectual task that a human being can'), soft/weak AI == problem solving within a specific defined area (such as all of your examples).
Not really. The idea behind hard AI is that you can have one algorithm that can solve general problems, not lots of different algorithms to solve specific problems.
Take IBM's research as an example. Deep Blue is very good at solving chess problems, Watson is very good at solving textual analysis problems, but taken together they don't solve any problems that aren't chess or textual analysis.
The hard AI problem is a direct parallel of the psychological debate over whether a general intelligence capability (the 'G factor') exists in humans, and whether this can be measured (IQ).
Honest question here - does anyone actually still believe a single algorithm can solve general problems? Certainly the human mind doesn't work that way - we are fairly compartmentalized with a lot of communication between compartments.
AIXI is a single algorithm that can solve every problem, given enough time, which is enough to prove that they do exist, however uselessly long it would take AIXI to solve any actual problem.
The book "On Intelligence" does a decent job explaining for non-neuoroscientists a theory where all parts of the brain (imaging, hearing, speach, cognition, etc) use the same pattern matching algorithm.
Half of your examples are, in the grand scheme of things, trivial applications of decades-old techniques. Recommendation, for example, is based on techniques which are almost 100 years old (SVD). Winning chess has been rendered possible through increased machine power, the algorithms necessary for it are hardly ground breaking. If this is AI, then many things are AI: almost any quantitative usage of statistics is AI, for example.
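To be concrete about what SVD-based recommendation means, here's a bare-bones sketch with a made-up ratings matrix (the numbers are illustrative only, and real systems handle missing entries far more carefully): factor the matrix, keep the top components, and read predicted scores off the low-rank reconstruction.

    import numpy as np

    # Toy user x movie ratings; 0 means "not rated". Entirely made-up numbers.
    ratings = np.array([
        [5, 4, 0, 1],
        [4, 5, 1, 0],
        [1, 0, 5, 4],
        [0, 1, 4, 5],
    ], dtype=float)

    # Plain SVD, then keep the top-k singular values: a crude low-rank model.
    U, s, Vt = np.linalg.svd(ratings, full_matrices=False)
    k = 2
    approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

    print(np.round(approx, 2))
    # Entries that were 0 in the original now hold predicted scores, e.g.:
    print(round(approx[0, 2], 2))   # predicted rating of user 0 for movie 2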
There is no agreement on this, but some leading AI researchers consider that most AI problems should be solvable with decades-old computers (i.e. we need some completely new paradigms that nobody has yet thought of). See McCarthy, for example: http://www-formal.stanford.edu/jmc/whatisai/node1.html
That was exactly his point, wasn't it? His examples used to be considered problems in AI. Now that the problems are understood, they are simply considered to be algorithmic problems. Lots of output of the MIT AI labs in the early days is now simply taught in algorithms courses, yet back then, those researchers considered themselves to be doing AI research.
As I recall, Kurzweil referred to such applications as "narrow A.I." Human-level AI capable of passing the Turing Test is called "strong A.I." in his lexicon.
And because we already have proof of concept machines that perfectly exhibit intelligence, we can be sure that the intelligence spot in the complexity hierarchy isn't somewhere in the intractable regions.
Go and Chess are fundamentally in the same class- perfect information and deterministic. Go simply has a much higher branching factor than Chess, limiting the utility of pure brute-force techniques and increasing the importance of good pruning and heuristics. It is no more surprising that Go takes more effort from programmers than Chess than that Chess is trickier than checkers.
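The usual back-of-the-envelope numbers, roughly 35 legal moves per position in Chess and 250 in Go (commonly cited ballpark figures, not measurements), show how quickly the gap compounds with search depth:

    # Rough game-tree sizes for a lookahead of d plies, using commonly cited
    # average branching factors (~35 for Chess, ~250 for Go). Ballpark only.
    chess_branching = 35
    go_branching = 250

    for depth in (2, 4, 6, 8):
        chess_nodes = chess_branching ** depth
        go_nodes = go_branching ** depth
        print(f"depth {depth}: chess ~{chess_nodes:.1e} nodes, "
              f"go ~{go_nodes:.1e} nodes, ratio ~{go_nodes / chess_nodes:.0f}x")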
There's probably more to it than that. Humans do typically play Go on 19x19 boards, but Go is still relatively hard for computers (compared to Chess) even on smaller boards like 9x9, where the branching factor is reasonably comparable to Chess. And the branching factor in the kind of Go tactics problems collected as exercises in books for human players isn't that much larger than the branching factor in chess, either. Decades ago chess programmers could remark that their programs solved typical chess tactics problems faster than humans could turn the pages in the book. Even today solving typical Go tactics problems is somewhere between a very serious programming challenge and an open research question.
Broadly speaking the difficulties in Go tactics seem to be similar to the difficulties that computers have historically had in Chess endgame tactics, which clearly wasn't a problem of branching factor: branching goes down in the Chess endgame. (However, today there is an important dissimilarity: the space of Chess endgames is small enough that modern computers can tabulate the solution for a significant fraction of the problem space beforehand.)
"This depends on the assumption that 'intelligence' (and nobody can really agree on what that means, which is a bad start) is representable in algorithmic form."
We have a working definition and know that it is algorithmically representable nowadays (http://www.hutter1.net/ai/aixigentle.htm). Now the question is how you can make the algorithm efficiently computable.
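For reference, the definition in the linked paper boils down to one expectimax expression, roughly the following (in Hutter's notation: the a_i are actions, the o_i and r_i observations and rewards, U a universal Turing machine, q a program of length \ell(q), and m the horizon), which makes it clear why the hard part is efficient computability rather than the definition itself:

    a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
           \bigl( r_k + \cdots + r_m \bigr)
           \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}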
The whole framework is extremely abstract. Unless human brains are doing magic that's beyond what a Turing machine can do, whatever humans are doing probably does boil down to something resembling the Solomonoff induction AIXI is based on. It's also pretty trivial to observe that humans are running some kind of very clever approximation instead of the full abstract AIXI, since we can actually do useful stuff with the amount of sensory information we get in reasonable time.
What humans do belongs to the part where you need to figure out how to make this stuff efficiently computable.
Well, that's the point. Take away the weasel word 'magic', and saying that humans are doing what a Turing machine can do is the same thing as saying they're algorithmically representable.
Turing machines are not handed down to us by God; there is no reason to believe they are some kind of Ultimate Representation of Everything.
It isn't just that you can never be sure; you can also never be sure it is false. For anything you find that is potentially uncomputable, you never know if you simply haven't figured out how to compute it yet.
Doesn't contemporary cognitive science pretty much go with the hypothesis that human brains are Turing-equivalent? Unless we go with magic, hypercomputing brains would require hypercomputing physics, and so far everything we know about physics seems to be Turing-computable (although slowly for the quantum stuff), outside pathologies like time travel.
We don't know if Turing-equivalent formalisms are the Ultimate Representation of Everything, but they seem to be by far the best Representation of Everything anyone has found so far.
>We don't know if Turing-equivalent formalisms are the Ultimate Representation of Everything, but they seem to be by far the best Representation of Everything anyone has found so far.
Well, sure... and Wolfram's 2-state-3-symbol means that an incredibly tiny machine with no memory at all is basically equivalent to any computer we've ever designed. However, that doesn't provide us any insight into how thinking works, or how to write programs, or anything really. It's a mathematical curiosity. In reality, Turing machines themselves are a terrible representation of algorithms, since despite being able to represent anything, anyone who tries to think of writing code in terms of developing simple rules for writing numbers on a strip is going to lose their mind.
Aside from the obvious problem (AIXI is undecidable), there's no real reason to believe that it represents a useful way to analyze the problem of intelligence. For one thing, no progress has really been made on the Hutter prize since its inception -- prediction by partial matching continues to win, and it was developed in the '80s.
Even at a low level, the brain is compressing and throwing out such massive amounts of data that I don't think it's fair to call it Solomonoff induction anymore.
Nice to see a post on this topic from a neuroscientist, as I am very interested in this area but know little biology.
One question though: the author says "while the fundamental insights that have emerged to date from the human genome sequence have been important, they have been far from revelatory." While not guaranteed, doesn't it seem likely that we will understand much, much more about the human genome once the economies of scale come into play? The price of sequencing a genome is currently on the order of about $10,000, and if it continues to fall at the rate it has (which seems likely, based both on past price decay and on in-development technologies), the cost to sequence a genome will be on the order of $100 well before the end of this decade. Once we sequence millions to billions of genomes and compare the information in those genomes with data from the corresponding human subjects, I suspect we will learn a lot more than we would by trying to understand a single person's genome. Moreover, given that the human genome is on the order of roughly a gigabyte, it would seem difficult, but not unreasonably so, to try to understand most of the information in our DNA.
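Two quick sanity checks on those numbers (my own arithmetic, using round figures: ~3.2 billion base pairs at 2 bits per base, and a $10,000 to $100 drop over roughly eight years):

    # Raw information content of one haploid human genome, at 2 bits per base.
    base_pairs = 3.2e9
    raw_megabytes = base_pairs * 2 / 8 / 1e6
    print(raw_megabytes, "MB")   # ~800 MB, i.e. on the order of a gigabyte

    # Annual price decline needed to go from ~$10,000 to ~$100 in ~8 years.
    start_price, target_price, years = 10_000, 100, 8
    annual_factor = (target_price / start_price) ** (1 / years)
    print(f"price must fall ~{(1 - annual_factor) * 100:.0f}% per year")   # ~44%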
I've never been impressed by the "simulate a single human" approach to AGI.
I don't know why it appeals to people. Has Christianity infected people with a desire for personal immortality? Are people inured to flushing billions and billions down the drain on biomedical research?
Another issue is that humans aren't that great anyway. The "game of life" is really about statistical inference and people aren't that good at it -- the success of Las Vegas proves it. If you can eliminate the systematic biases that people make dealing with uncertainty, you can make intelligence which is qualitatively superhuman, not just quantitatively superhuman.
It's much more believable that steady progress will be made on emulating and surpassing human faculties. This won't be based on any one particular methodology (symbol processing, neural nets, Bayesian networks) but will be based on picking and choosing what works. Progress is going to be steady here because progress means better systems each step of the way.
Sure, the Hubert Dreyfuses will be with us each step of the way and will diminish our accomplishments... and they might still be doing so long after we're living in a zoo.
> The "game of life" is really about statistical inference and people aren't that good at it -- the success of Las Vegas proves it.
Las Vegas is run by human beings. Having members of your species be worse at a task than other members of your species doesn't prove that your species as a whole is not good at the task.
Now...
> Has Christianity infected people with a desire for personal immortality? Are people inured to flushing billions and billions down the drain on biomedical research?
Why wouldn't one want personal immortality? To be fair, religious groups are those least likely to support personal immortality of many sorts (e.g. brain uploading) because of questions such as "what happens to the soul", and "isn't this meddling in our creators work?".
Rather, I'd think that anyone who believes that this life is all that exists would want to prolong it indefinitely. It's better to be than not to be.
Certainly: quality of life. Immortality may be a miserable, horrible existence. Most of the things we derive happiness from in life are an integral part of our normal life-death cycle of events, disrupting that cycle may not be very pleasant long-term unless we can supplant those innate desires with something equally fulfilling. We don't really know, it's never happened.
Most of your desires and goals are related to the scarcity of time. In a broad sense, having children, for example, is a rewarding experience for people; we have an innate desire to reproduce and rear children (talk to any childless post-30 woman who didn't explicitly choose that situation and you'll see it's innate), but it's also something we probably wouldn't continue to do should death be removed from humanity; there'd be no need. But in a smaller sense, even something like enjoying a nice meal with friends would be called into question. Your enjoyment of food only exists because we've got a biologically programmed taste for foods that assists in our survival, and social interaction on an infinite time scale even gets weird: everyone would eventually know everyone and know everything there is to know, unique life experiences would become commonality, etc.
There are a lot of assumptions in that particular view of immortality. It assumes that it would also mean infinite memory, a loss of senses and certain capabilities, and that we cannot modify ourselves to create new urges. All of these are possible and plausible, but, for example, in the simulated-AI situation only the loss of senses is likely to be true (as well as perhaps the inability to modify ourselves, depending on the approach).
The desire for personal immortality is the oldest known human desire, it's written down in the oldest story we know http://en.wikipedia.org/wiki/Gilgamesh, or still standing in the pyramids and similar burial places all over the world.
Singularities have happened in the past when life evolved a solution to a local problem, such as photosynthesis, the social primate, and agriculture.
Kurzweil's singularity is just one of many potential singularities, but the near future seems to contain either major innovation with great generality or collapse.
The likelihood of Kurzweil's particular vision of the singularity in this case doesn't say anything about the likelihood of the singularity in general, i.e. by the creation of artificial intelligence through methods that are nearer at hand than nanobots or whole brain emulation.
For me this complexity problem could be insurmountable. I think the best approach may be to sidestep the issue and try selective breeding of increasingly intelligent virtual beings.
Before knocking Kurzweil's predictions, review his predictions of the 1990's and the people who mocked them. Kurzweil does not have a perfect track record. I think his accuracy in predicting the future is way above average.
Also, I find his views of the future enlightening and useful, as he illustrates lots of "just out of reach" engineering projects for me to consider tackling.
Between the years of 1990 and 2005, Kurzweil predicted the following:
* People will mainly use portable computers.
* Portable computers will be lighter and easier to transport.
* Internet access will be available almost everywhere.
* Device cables will disappear.
* Documents will have embedded moving images and sounds.
* Virtual long distance learning will be commonplace.
Those predictions are a lot less impressive if you read the text of them rather than a short summary written after 2009 came to pass. The prediction about portable computers for 2009 that he actually made back in 1999 (in the book The Age of Spiritual Machines) reads:
"Personal computers with high-resolution visual displays come in a range of sizes, from those small enough to be embedded in clothing and jewelry up to the size of a thin book."
People don't really use wearable computers now, at least not powerful enough to drive high-resolution displays. And the desktop is still with us, as are laptops in the same large form factors that were all that was available in 1990.
Looking at his other predictions for 2009, I'm not sure it's fair to say that cables are disappearing, since you still need them for the highest-speed connections, but the statement is ambiguous enough that I'll give him that one.
"The Majority of text is created using continuous speech recognition" is totally false.
"Most routine business transactions take place between a human and a virtual personality. Often the virtual personality includes an animated visual presence that looks like a human face." No.
"Intelligent courseware has emerged as a common means of learning". Sure, but I'd already done this back in '99 so it wasn't a hard prediction.
"Pocket sized reading machines for the blind" AFAIK these exist.
"Translating telephones are commonly used for many language pairs" well, its in development. I imagine it'll be common by 2015.
"Widespread deflation" well, both the US and Japan have had bouts of deflation recently, that wasn't due to technology but rather central bank decisions.
"The neo-Ludite movement is growing" its less of a problem than in 1999, as far as I can tell.
If you look at all his predictions rather than cherry picking the best ones and rewriting the rest to sound better he doesn't come out so well.
He's not 'mocking' the predictions, he's providing a compelling, well thought out argument as a counter to a prediction, particularly interesting due to his neuro background.
Well, let's see. I have high hopes for brute-force computation approaches to scientific research: more IBM Watson-like systems, which teach themselves, and more full automation of complex scientific experiments. Industrialize, automate, and computerize science, in other words.
If we do that, we have a chance of increasing the rate of scientific discovery. And it does not require true God-like AI, just a bunch of really clever and domain-specific Watsons.
Alternatively, the future looks much like today, except with a lot more and better gadgets, but people still grow old and die.
Also, we've either started massive CO2-sequestering actions, like fertilizing the oceans with iron and using tons of charcoal in our farms, or nuclear power provides a much larger percentage of global power, or Siberia is balmy.
And lastly, the US and several other industrialized nations have gone through a terrible economic/financial crisis and reform, like the UK did in the 1970s and Sweden at the beginning of the 19th century.
And China is the world's super power, with India close behind, and no one cares much about the EU (or what's left of it) and the US.
A lot of things about today weren't obvious in 1990, but they're not on that list. These would have been amazing predictions then:
1. Searching the internet will become perhaps the most profitable and one of the most important businesses in all of technology.
2. Social networking will change the way people interact with each other but "virtual reality" will not catch on. People will end up preferring a loose representation of socialization.
3. Shortened attention span with regard to communication, rise of blogs and then tweets and text messages cutting into and even in some cases replacing traditional media.
I would agree he's certainly right about Kurzweil's unrealistic optimism, but I'm not sure our understanding of the brain (and other aspects of our biology for that matter) isn't increasing exponentially. Perhaps rather it just seems linear compared to the turbo-charged progress of these enabling technologies? Certainly we've come a lot further since Phineas Gage than a linear trajectory would allow.
He should have thrown around some numbers while he was at it. I wonder if he'd agree with clinical immortality by the end of this century, and mind-uploading by the end of the next?
I think you might be missing the point, though. The argument is that we're collecting exponentially more data about the brain, but that data doesn't translate directly to understanding.
You mentioned Phineas Gage. That case led to the idea of regions of the brain controlling different things, which led to lobotomy as a psychiatric treatment, which was used up until the 1960s or so. Then chemical methods improved, and people came to understand that neurotransmitters played a role too, which led to antidepressants and other drugs. Those drugs have improved, but their design hasn't changed that much in the last few decades. Obviously this is over-simplified -- but it doesn't sound like an exponential growth of understanding to me.
In the end, we can't talk about exponential or linear growth: we have no 'measure' for scientific advance, and we need a measure in order to measure something. Or maybe some exist, but I'm not... okay, so I googled it before posting some stupid stuff: it seems there's no 'real' measure. A scientific paper making that case: http://www.springerlink.com/content/m1h2150x02u153x3/
There's something rather disingenuous about your phrase: "not sure our understanding of the brain isn't increasing exponentially".
I think the claim that our knowledge (_actual_ knowledge, mind you) of anything is increasing exponentially (with the usual implication that the exponent isn't 1.01 per decade :-) ) is the claim that requires proof.