Ray is systematically restating his opinions in a much vaguer manner than in his original publication, The Singularity is Near.
Originally:
"By the end of this decade, computers will disappear as distinct physical objects, with displays built in our eyeglasses, and electronics woven in our clothing, providing full-immersion visual virtual reality."
Now (from the PDF):
"Personal computers are available in a wide range of sizes and shapes"..."the prediction does not say that all
computers would be small devices integrated in these ways, just that this would be common, which is indeed the case"
Sorry Ray, I'm not convinced.
I believe Ray completely ignores economics in his predictions. Trends such as Moore's law appear only when there is constant, high economic demand. There is always constant, high demand for faster computers, due to the intrinsic nature of how computers are used. There is NOT constant, high demand for new consumer technologies that haven't been developed and extensively marketed yet.
A corollary is tablet computers. Development and research into touch interfaces did not increase exponentially after they were first developed. Rather, you had to wait nine years for an Apple to come along to do the hard work of changing consumer preferences and create the economic demand for this improved human-computer interface.
Ray often seems to assume that an AI is already directing the economic demand that leads to its own creation, before it is created. He ignores the "stickiness" inherent in consumer demand. Perhaps there is a place for the post-service-economy techno-Marxist cyber-utopia he envisions, but I don't see it occurring before capitalism has run its course globally, which will still take some time.
Exactly, tons of different technologies are POSSIBLE. But it's a matter of what's economically feasible (what consumers will pay for, etc.)
We could have had Web TV in 1995 if it had really been economically feasible... but no one figured out the business model. And it looks like it still hasn't been figured out, although Google is trying again with Google TV.
That said, I think Kurzweil, while grading himself overgenerously, did a pretty good job, and no one else is making these kinds of predictions. I actually did the same thing back in 2009, since I am a Kurzweil fan. It's a little interesting how many things came true in 2010 -- he cheated by a year but it helped him a lot!
First, my compliments to any prophet who goes back to actually audit his predictions.
Having said that, I think Ray is grading himself on a pretty generous curve. Many predictions of the form "X will be Y" are counted as correct if somewhere in the world at some cost there is technology available allowing X to be Y.
As a neutral reader, those predictions sure sound like they were supposed to be widespread rather than merely existing.
If you are an expert in word jugglery you can even make astrology seem true.
The trick is to use the fact that English words aren't clearly defined. That way, in the future, you can clarify what you meant :)
For example, Kurzweil said we would have a 20-petaflop supercomputer by 2009. Right now the fastest supercomputer is Tianhe-1, and it is capable of 2.5 petaflops. And to that Kurzweil responded that he considers Google to be a giant supercomputer. Thanks for telling us after the fact what you meant by "supercomputer" =/
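To put the miss in perspective, here's a back-of-the-envelope calculation in Python (the ~1.5x/year growth rate for top supercomputers is my own rough assumption, not a figure from the paper):

    import math

    # Predicted vs. delivered peak performance, in petaflops.
    predicted_pflops = 20.0   # Kurzweil's figure for 2009
    actual_pflops = 2.5       # Tianhe-1, fastest machine at the time

    shortfall = predicted_pflops / actual_pflops   # factor of 8

    # Assume top-supercomputer performance grows ~1.5x per year
    # (a rough historical rate; an assumption, not from the paper).
    growth_per_year = 1.5
    years_behind = math.log(shortfall) / math.log(growth_per_year)

    print(f"Missed by {shortfall:.0f}x, i.e. roughly "
          f"{years_behind:.1f} years at {growth_per_year}x/year")

On an exponential curve an 8x miss is only about five years, which cuts both ways when grading predictions.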
> PREDICTION: Personal computers are available in a wide range of sizes and shapes, and are commonly embedded in clothing and jewelry such as wristwatches, rings, earrings and other body ornaments
> ACCURACY: Correct
> PREDICTION: The majority of text is created using continuous speech recognition (CSR) dictation software, but keyboards are still used. CSR is very accurate, far more so than the human transcriptionists who were used up until a few years ago.
'With the advent of multi-core architectures, these devices are starting to have 2, 4, 8... computers each in them, so we'll exceed a dozen computers "on or around their bodies" very soon.'
Calling multi-core architectures multiple computers is... interesting. I don't think there's much point in the document if he's going to give himself the most generous possible interpretation, wildly alternating between the letter and the spirit of what he wrote. And I say that as someone who vaguely agrees with him.
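For what it's worth, here is the kind of count he appears to be doing, sketched in Python (the gadget list and core counts are made-up illustrations, not his numbers):

    import os

    # Logical cores on this machine -- what counting "computers"
    # per multi-core chip would give you.
    print(f"This machine alone: {os.cpu_count()} 'computers'")

    # Hypothetical gadgets one person might carry (illustrative only).
    cores_per_gadget = {"smartphone": 2, "music player": 1, "e-reader": 1}
    print(f"Carried 'computers': {sum(cores_per_gadget.values())}")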
He is being very, very liberal with his own analysis.
If you read the predictions in context, you know that he predicted a future world where we are all wearing computers that we talk to. When he pulls each prediction out, he is able to justify it with a bit of a stretch.
btw he completely missed cloud computing - his predictions have users storing everything at home on large disk arrays.
(not sure how he missed it; Ellison has been banging on about it for decades)
> 4. Communication | Telephone communication has moving images
> PREDICTION: ...and routinely includes high-resolution moving images.
> ACCURACY: Correct
...and then he goes on to mention FaceTime and mobile video calls in the US in 2010, and calls it a success.
Meanwhile, in Europe, we've had this since 2003. All phones you buy now have the capability, but almost no one is using it, because it is essentially an undesirable mode of communication. Voice-only allows the speaker to multi-task, to do something with hands and eyes while talking to someone else. Video-conferencing has its uses, but it's not lack of technology that holds it back, it's the undesirability of it.
We regularly do Skype video calls with our daughter's grandparents. Video telephony has been completely mismarketed and wrongly priced.
It's a technology with high emotional bandwidth. This means we want to use it for longer and more intimate calls. It should therefore be priced as low as possible. When it was first launched in Europe it was marketed as a premium version of voice calls and thus much more expensive. This was wrong.
Compare this with SMS, which has a very low emotional bandwidth. That type of communication is much more rapid, and we accept a higher price per byte.
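To make the pricing asymmetry concrete, a quick sketch (all prices and bitrates are assumed round numbers, not sourced tariffs):

    # Assumed round numbers -- illustrative, not sourced tariffs.
    sms_price_per_byte = 0.10 / 140          # $0.10 per 140-byte SMS

    # GSM voice at ~13 kbit/s, billed at an assumed $0.10/minute.
    voice_bytes_per_min = 13_000 / 8 * 60    # ~97,500 bytes per minute
    voice_price_per_byte = 0.10 / voice_bytes_per_min

    print(f"SMS:   ${sms_price_per_byte:.6f} per byte")
    print(f"Voice: ${voice_price_per_byte:.8f} per byte")
    print(f"SMS costs ~{sms_price_per_byte / voice_price_per_byte:.0f}x "
          f"more per byte than voice")

Under these assumptions SMS comes out around 700x more expensive per byte, and yet we happily pay it.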
Now that I've read through all of his predictions, this is part of a larger trend among them. He calls his predictions a success when the technology problem has been solved, completely ignoring the forces that decide whether that technology becomes common or not. He is very bad at thinking about the human factors, and optimistically declares success for things that saw a brief life in the market and then were pushed to the sidelines.
Take force-feedback in games. Ten years ago I could go into any videogame store and buy a lot more different kinds of force-feedback controllers than I can today. It was tried, and then everyone got bored.
Or his prediction that everyone will have several wearable computing devices on them. Yes, we can do that today, but no one is doing it, because the more devices you have, the more often you have to recharge them. This is why most people consolidate their wearables into one smartphone that they charge every night, rather than ten devices that each have to be charged on their own schedule.
Or voice-recognition input. Yes, the technology is there, but it is not, and will probably never be, the main input mode for our computers, because it is incredibly annoying in any group of people.
Or his prediction that cables are disappearing. This prediction will not fully come true until we solve the recharging problem. I have a colleague who recently bought an ordinary wired mouse to replace his cordless mouse, and he said it's the best thing he bought for his computer in ten years, because it always works and never runs out of power.
Or his prediction about animated virtual personalities. Yes, I've seen those at a bunch of websites, but less and less these days as people realize that they're just in the way. What people want is an efficient search function where they can type whatever they want and get relevant results.
He also has a bunch of predictions involving virtual reality which are all pretty far off. World of Warcraft has over ten million active players and allows people to interact with each other in the virtual world of Azeroth, and yet a lot of people play it in internet cafés or at LAN parties, with their friends next to them, because the physical experience is so much better than the virtual one.
It's also pretty telling that if you look at professional video game players, they are constantly rejecting reality-like immersion. The best first-person-shooter players all use the lowest graphical settings and a distorted perspective; the best World of Warcraft players do the same, play with no sound, and run lots of addons that expose as much of the underlying mechanics and numbers as possible, which is completely contrary to the goal of immersive systems.
Not even Steve Jobs can make accurate predictions about what will or won't work without usability testing. Futurists should distinguish between capabilities and how they are implemented.
Some things just stall in development, waiting for a breakthrough to become better than the alternatives. Virtual reality, chording one-handed keyboards and screens in eyeglasses seem to be examples. I've been waiting a long time for a nice Steve Mann-style system to run Emacs.
As someone said... It's hard to make predictions - especially about the future.
I wouldn't hold my breath for virtual reality; however, augmented reality came out of nowhere and is looking like it's going to be very important. I saw that iPhone app, Word Lens, the other day, and it's absolutely fantastic. It transforms your iPhone into a magic looking-glass. I think the possibilities for technologies in that direction are endless. If/when we get good retina displays, this is what they're going to be used for: to change our view of reality, to add things to it that help us in our lives.
Check what Steve Mann, whom I referenced, did with augmented reality. Really cool, long ago. (His work with cameras and different exposures to handle dark/light picture parts has found analogies in modern cameras over the last few years.)
Saying it has been widely available in Europe since 2003 is a bit of hyperbole. Far from every phone had the capability, and the UI always sucked. You could do it, but it was damn hard.
I guess if you write technology predictions like people write horoscopes, and then interpret them years later like people interpret horoscopes, you can call yourself a brilliant futurist.
I think he grades page 59, #10 too generously. (You can see where an hour of my early morning has gone.)
"PREDICTION: keys for encrypted communication are
available to government authorities."
Kurzweil gives this a "Correct" - I would suggest that it is "Essentially Correct", if not "Partially Correct". My guess is that he thought the Clipper Chip (or some other key escrow variant) was going to take off (he wrote this in the 90s), and that is clearly not the case. Key escrow never went anywhere.
It's not clear to me that the various governments have the encryption keys to most encrypted communication - in particular, my understanding of BES (Kurzweil uses RIM key access as a justification of his "Correct") is that data is encrypted on the company's server and decrypted on the end device. No government access to the keys.
Kurzweil goes on to talk about the "additional layer of virtually unbreakable encryption codes with no third party keys" in #11, but the two predictions, #10 and #11, are somewhat at odds with each other. He's trying to have his cake and eat it too: "Almost everything is encrypted with keys that are available to the government, except for that data which is encrypted with keys not available to the government." He was looking at the trends around the Clipper Chip and made a partially incorrect prediction. He should just admit it - no shame in missing this one - he's still "Essentially Correct".
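To make the #10/#11 distinction concrete, here is a minimal sketch of the "no third-party keys" case, using Python's cryptography package (a toy illustration of end-to-end encryption, not how BES actually works):

    # pip install cryptography
    from cryptography.fernet import Fernet

    # The key is generated and shared only between the two endpoints;
    # a carrier or government in the middle never holds it.
    key = Fernet.generate_key()

    ciphertext = Fernet(key).encrypt(b"quarterly numbers, do not forward")

    # An interceptor sees only ciphertext; without the key there is
    # nothing to hand over to authorities -- Kurzweil's #11 case.
    assert Fernet(key).decrypt(ciphertext) == b"quarterly numbers, do not forward"

As long as key generation and storage stay on the endpoints, prediction #10 has nothing to apply to.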
"Conversely, how much would a circa 1999 laptop be worth in 2009? It might actually
have some value as a museum piece, but it would be essentially worthless as a compute" - p.70
Not sure about a 1999 laptop, but there is a 2001 desktop PC in our living room that is used daily. I'm typing this on a late-2009 netbook which is arguably less usable than a 2001 desktop (cramped keyboard, small screen, generally feels more sluggish). Funnily enough, both run Windows XP (at least my netbook did originally, until I upgraded the OS). So technology doesn't always change as fast as one might anticipate, or become obsolete quite as fast either.
As I was reading this, I kept thinking, "Ray is really lucky Steve Jobs exists." A large chunk of those predictions seem to rely on the iPad, iPhone, and iPod Nano having existed and being popular to be marked correct.
I don't think Steve Jobs can take credit for MP3 players existing. If people weren't using iPods and iPhones, they would be using Diamond Rios and BlackBerrys that were pretty similar.
I'm not talking about MP3 players really, I'm talking about all of the references to touch displays, wearable computers, solid-state drives, high fidelity ebooks, sales of digital media, etc. Apple didn't really invent these things, but they certainly played a big role in making them ubiquitous.
Page 81 - #15,
> PREDICTION: There is a strong trend towards the geographic separation of work groups. People are successfully working together despite living and working in different places.
Kurzweil gives this a Correct; I'd give this an "Essentially Correct" - for whatever reason, the most successful teams still seem to work together geographically. There is some movement towards separating things like QA, manufacturing and packaging into separate geographic regions, but for some reason teams still tend to work better together.
The exception to this would be the myriad Open Source software projects in which the teams never come together - but, for each one of those, I can point to an Agile development team which harnesses the "No Cube / Open Workplace" environment (Facebook, Zynga, Atlassian) to accelerate their work.
I'd say "Essentially Correct" - Kurzwell should note that there continues to be value to the Geographic collocation of teams.
A lot of people are making him out to be too optimistic about his predictions. I agree to an extent, but no one seems to mention that the vast majority are remarkably accurate and only obvious in retrospect. Hindsight bias can colour the many right ones as trivial, but I found many astonishingly clairvoyant.
Interesting - the paper is dated "October, 2010", but on page 10 he talks about the "$4.99 Word Lens iPhone App". Just how clairvoyant is Ray Kurzweil anyways... :-)
It looks like the paper was (and maybe still is) under constant revision, which isn't reflected in the publication date. For example, on page 7 he mentions TSV 3D DDR3 memory by Samsung, which was announced on December 7, 2010 (http://www.samsung.com/global/business/semiconductor/newsVie...), and it says December in the paper.
Though I'm really happy to discover that Kurzweil is keeping in touch with the latest developments in technology, I'm alarmed by the fact that he seems eager to cut corners to push his agenda. This particular memory was just announced in December, and it should be available in 2012, not now. Kurzweil is supposed to be talking about the situation in 2010, isn't he? And, while defending himself, he says that he means "commonly used", not "ubiquitous". Maybe it's because English isn't my native language, but I thought they were more or less the same.
It is a difference of degree: "commonly used" is when you are no longer surprised to see someone use a new technology; "ubiquitous" (literally: everywhere-ish) is when you would be surprised to see someone still using the older technology it replaced.
I have only a vague idea of the concept of the Singularity (namely, that artificial intelligence will one day eclipse human intelligence), but does anyone know if Dr. Kurzweil & Co. have considered the possibility that by that time, genetic engineering will almost certainly have given humans the ability to boost their children's intelligence to astronomical levels? An Einstein in every household, literally. Thus, the Singularity prediction as it stands would be one that assumes human intelligence is stationary.
In fact, it may well be the case that the first AI more intelligent than current human intelligence is crafted only because that generation of humans are mega-intelligent compared to us and it is child's play to them to craft such wily code! :-) Thus the stationary version of the Singularity prediction will come true, but in relative terms, organic intelligence may still be ahead of machine intelligence (in ways that dumb ol me can only imagine).
But the question that bugs me personally is : will we evolve enough along the moral, ethical, spiritual, and wisdom axes to survive that long? :-(
I am most interested in the set of genes that make each person more ethical and sympathetic to the other's POV. The Golden Rule Gene.
The "transhumanist" movement advocates that in order to keep up with technological progress in the future (and stay ahead of a singularity) we'll have to augment our brains with cybernetics. And when that happens the lines between human and machine start to blur. I'm convinced that this future, good or bad, may be inevitable.
I'm at work, so I haven't been able to read the article thoroughly (yet I still have time for HN).
Could anyone who has read it comprehensively point out whether Kurzweil talks about the recent discovery that one human brain contains more switches than all the world's computers combined? Maybe that discovery was blown out of proportion, but the finding that human brain synapses have more than two states seriously impacts the trajectory of the singularity. No?
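For scale, here is the back-of-the-envelope arithmetic behind that kind of claim (every figure below is an assumed round number for illustration, not taken from the study or from Kurzweil):

    import math

    # Assumed round numbers, purely for illustration.
    neurons_per_brain = 1e11       # ~100 billion neurons
    synapses_per_neuron = 1e3      # ~1,000 synapses each
    states_per_synapse = 16        # "more than two states" -- assumed value

    synapses = neurons_per_brain * synapses_per_neuron    # ~1e14
    bits = synapses * math.log2(states_per_synapse)       # ~4e14 bits

    print(f"~{synapses:.0e} synapses, ~{bits:.0e} bits of synaptic state")
    # Capacity scales with log2(states), so multi-state synapses
    # multiply the estimate rather than merely adding to it.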
For those who aren't familiar with Kurzweil's thesis and don't want to read his books, this video of a presentation he gave at MIT is a good introduction:
The summary he gives at the beginning of this paper is actually pretty good, too, if you want a really condensed version:
My core thesis, which I call the "law of accelerating returns," is that fundamental measures of information technology follow predictable and exponential trajectories, belying the conventional wisdom that "you can't predict the future." There are still many things — which project, company or technical standard will prevail in the marketplace, or when peace will come to the Middle East — that remain unpredictable, but the underlying price/performance and capacity of information is nonetheless remarkably predictable. Surprisingly, these trends are unperturbed by conditions such as war or peace and prosperity or recession.
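A toy version of what a "predictable exponential trajectory" means in practice (the 18-month doubling time is an assumed Moore's-law-style figure, not taken from the paper):

    # Toy "law of accelerating returns" projection.
    DOUBLING_YEARS = 1.5   # assumed Moore's-law-style doubling time

    def projected(base: float, years: float) -> float:
        """Price/performance after `years` of exponential growth."""
        return base * 2 ** (years / DOUBLING_YEARS)

    base_2009 = 1.0   # normalize 2009 price/performance to 1
    for year in (2010, 2015, 2019):
        print(f"{year}: {projected(base_2009, year - 2009):.0f}x the 2009 level")
    # The claim is that this curve, not any single product or
    # company, is what stays predictable.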
Ray is smart and very insightful. His predictions are informed and often durable. But I find his self-promotional tone, constantly referring to "my" theory and what "I" did, grating and difficult to take seriously. I haven't read much of Kurzweil, but I did meet Aubrey de Grey, whom I found to be a similarly toned, snake-oil-hawking, hyper-intellectual, self-proclaimed messiah of this new technological religion.
I do find these guys thought provoking and informative, but how they seem to take some sort of credit for advances in technologies simply for predicting their impacts, with self-promotional gibberish, is irritating to say the least.
This article is similarly interesting, but it reads as an attempt to defend his legacy rather than to add to the debate. While he makes some strong points that his critics are often ill-informed, overall my respect for him has been damaged by his willingness to bend his past words. How can I trust anything he writes in the future?