> But isn't he simply refusing to accept, on an emotional level, that everyone gets older, everybody dies?
Why? Why should we accept on an "emotional level" that we are about to die? Just because it's currently "inevitable"? Seems like a cop-out to me. I think humans are meant to be better than just "accepting their fate", and that we should always try to improve our lives and conditions.
But is there really a clear moral achievement in solving death? As long as we can procreate faster than we can increase the supply of resources needed for a decent quality of life, there will always be the potential for more people than we can support. Then we have the problem of figuring out how to distribute those resources in the most morally efficient way. Living longer means taking more of those resources for yourself.
The ideas of the singularity are based on extrapolating from the past (the progress of technology). If you extrapolate from past human lives, the eventuality of death seems pretty inevitable for many generations to come. Which extrapolation you choose probably says more about you than it does about the world.
> Which extrapolation you choose probably says more about you than it does about the world
Exactly, i.e. you can't just go and extrapolate stuff at will, lest you end up talking like that guy: http://xkcd.com/605/.
Some extrapolations are more valid than others. There are causal reasons to believe that the rate of progress will continue for some time. There are no reasons to believe that death will always be inevitable (sans the heat death of the universe stuff, but I'm sure we'll figure something out by then).
> There are causal reasons to believe that the rate of progress will continue for some time.
What are those reasons? What if you had extrapolated the speed of passenger planes in the 60s, or even the 80s, just 30 years ago? Would you have been correct about the 21st century? How about space travel?
What if AI is just like that? We'll keep improving, and then it'll stall. It may later recover. Or not.
That's the problem with extrapolation, and extending current trends into the future. It's actually pretty reliable — you're often right. Until you're not.
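To make the point concrete, here's a toy sketch in Python (entirely made-up numbers and hand-picked parameters, not fitted to any real aviation or computing data): two growth curves that track each other closely over the period you actually have observations for, then tell completely different stories once extrapolated.

    # Toy sketch: made-up numbers and hand-picked parameters, not real data.
    # Two curves that agree closely over the "observed" period, then diverge
    # wildly once you extrapolate past it.
    import math

    def exponential(t):
        # unconstrained growth: keeps compounding forever
        return 150 * math.exp(0.067 * t)

    def logistic(t):
        # roughly the same early behaviour, but it saturates around 10,000
        return 10_000 / (1 + math.exp(-0.067 * (t - 63)))

    print("   t   exponential   logistic")
    for t in list(range(0, 30, 5)) + [60, 80, 100]:
        note = "   <- extrapolation" if t > 25 else ""
        print(f"{t:4d}    {exponential(t):10.0f}   {logistic(t):8.0f}{note}")

Both curves pass through the "observed" points about equally well; which one you trust for the out-years is exactly the judgment call that says more about you than about the world.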
The economic incentives for progress are still there, we're using current tools to design better tools in a self-amplifying feedback loop, and the limits of this process still seem far away. We have tons of room for progress. [0]
> What if you had extrapolated the speed of passenger planes in the 60s, or even the 80s, just 30 years ago?
Then I would have been wrong, for economic, not technological, reasons. We can fly at Mach 25 today, but for various (economic) reasons people generally don't want to even go supersonic.
> Would you have been correct about the 21st century? How about space travel?
Well, we didn't get space travel, but we got the Internet. To be honest, the progress we made in the last 50 years is more amazing than people dreamed back then, but in a different way.
> That's the problem with extrapolation, and extending current trends into the future. It's actually pretty reliable — you're often right. Until you're not.
I agree. In general, the more detailed you try to get in your future predictions, the more likely it is for you to be wrong.
[0] - for instance, even re-reading and cross-correlating (in an automated way) all the medical papers that were published in the last 100 years would bring us tremendous new discoveries; we already have more science than humans can handle, but we also build tools that could handle it for us.
Biotechnology is an infant, like computers in the 50s. Why should we not expect huge progress from it?
Now, I guess he'll have to redo his AI extrapolations. Chip manufacturing is now mature, and while we'll probably only be slightly later in getting the computers that put 70% of the population out of a job, the current slower rate of improvement will make a lot of difference for his 2029 prediction.
Yes, but while we're hitting the limit of our current solutions, AFAIR there's a lot more room to explore with 3D chips and optoelectronics. We might yet squeeze some more progress out of it.
> Biotechnology is an infant, like computers in the 50s. Why should we not expect huge progress from it?
We should, and in this case we have a good reason to believe it - every living thing on this planet, and every little bit we discover about them, is evidence that nanotechnology is possible, works, and can do amazing things. The challenge in front of us is to understand, take control, and repurpose.
I think that humans are meant to accept that they cannot, and should not be able to, obtain massive amounts of control over world-scale events for prolonged amounts of time.
We are a social species. I believe every individual should carve out his piece of history and then let others do the same, whether they choose to continue in his tracks or not. If we are to achieve any greatness at all, we should all participate, even people who are yet to be born.
What you are suggesting seems like breaking nature's cycle to me.
> I think that humans are meant to accept that they cannot, and should not be able to, obtain massive amounts of control over world-scale events for prolonged amounts of time.
Yes, that's why we require political leaders to step down from office eventually. But the civilized way of requiring leaders to step down from office does not involve killing them.
> What you are suggesting seems like breaking nature's cycle to me.
Damn right. So was the eradication of smallpox. Nature contains many cycles that are utterly abhorrent.
> Yes, that's why we require political leaders to step down from office eventually. But the civilized way of requiring leaders to step down from office does not involve killing them.
I'm talking more along the lines of philosophical movements. Political leaders are not that important (although it sure seems that way to us).
In my opinion, we as a species need that failsafe of time's perspective on ideologies. Otherwise we risk lurching in a wrong direction with no possibility of getting back on track.
No single generation should decide the fate of humanity as a whole.
You're making an appeal-to-nature fallacy here, but you also point towards something valid: the moral dilemma of taking up resources that could satisfy the needs of others, born or not yet born. Sharing resources is not a new issue, though; this just makes it harder.
One point the article misses: Ray is a director of engineering, not the director of engineering. There is more than one engineering director at Google.
The Boston Dynamics acquisition. Also the marriage of American intelligence services and the tech sector as the military increasingly takes to cyberspace.
"Following the Boston Dynamics acquisition, Google says that it plans to honor its existing contracts, including the military contract with DARPA, but it doesn’t plan on pursuing any further military contracts after that."
I always like to remind people that the road towards immortality is going to involve a significant period in which us normals have to deal with immortal rich people. Sounds awful, like, just about the worst societal dynamic I can think of.
Don't be surprised if they realize there isn't enough room for the rest of us. These new 1% immortals may also require a special country in which they are not at risk of being tragically harmed by one of us billion mortals. Watch you don't get bit by their 2 tonne Boston Dynamics guard dog...
"I always like to remind people that effectively treating heart disease is going to involve a significant period in which us normals have to deal with long-lived rich people. Sounds awful, like, just about the worst societal dynamic I can think of."
[Back to the present]
Our age is characterized by the fact that access to medical technology is basically flat across wealth levels. Rich people get to hire better doctors, but there are no super-secret, ultra-restricted forms of medicine that are inaccessible to everyone else. Your chances of getting into clinical trials of the new new things are about as good as theirs, provided you are prepared to pick up a phone and put in the time.
I don't think that's true. To take one example, Steve Jobs famously gamed the organ donor system by buying houses in many states in order to get on multiple statewide registries. Now, most medical treatments aren't zero-sum in the way of organ transplants, but let's not pretend rich people can't buy better health than the rest of us.
Sure they can, but I think of it as rich people subsidizing the medical research for the rest of us. It trickles down, and as the progress continues, there's always another new expensive thing for rich people to pay for.
As for organ transplants: somebody needs to pour more money into stem cell research and related fields so that we can start growing organs, thus eliminating the whole organ market (including the black one) and the zero-sum dynamics of transplants.
What about the effects of overpopulation? Predictions of food, water and clean air shortages don't seem to lend themselves to an even distribution of nano-biotechnology.
Why is it awful? All new technologies initially cost tons of money. It is the rich early adopters who keep them afloat long enough for them to become cheaper and within reach of the masses, once scale advantages kick in. But if nobody buys it when it's too expensive, there might never be a chance to develop it to the point where it's affordable.
Why is it not only bad, but the worst you can think of? I, for one, can think of much worse things than an immortal Bill Gates, however annoying that may be for some. Just turn on the news and watch long enough, I'm sure you'll see some of it. Then head to the library and open any 20th-century history book. I guarantee you'll find things much worse than an annoyingly long-lived billionaire - such as the genocide of millions, for example. And not once but multiple times.
>>> Don't be surprised if they realize there isn't enough room for the rest of us.
Why wouldn't there be enough room for the rest of us?
It's awful because for the first time in human history there will actually be a race of people who are superior to the rest. Hasn't exactly played out great even when it's just in their imagination.
Put yourself in the mindset of a super-rich and powerful immortal human being. The stakes are hard to imagine: you are trying to preserve your existence for an infinite future. People who stand in the way are not even going to have a chance; an immortal can't afford such a risk.
In what sense would it be a "race"? Why would they be any more "superior" than now? There are a lot of things that rich people can have and poor people can't - if that is your definition of superiority, it has been happening for millennia.
>>> The stakes are hard to imagine, you are trying to preserve your existence for an infinite future
The human brain cannot comprehend the infinite, so it won't be much different from any regular person who's trying to stay alive - most people don't know when they'll die and don't think about it as a definite endpoint anyway, so it might as well be in the infinite future.
>>> People who stand in the way are not even going to have a chance, an immortal can't afford such a risk.
Chance of what? And if the risk is so high, why would you want to alienate people who outnumber you a million to one and on whom your whole existence is based - since you can't personally produce any of the technologies needed to support your immortality, right? You'd rather try to prove to them that keeping you alive is a good thing, not start murdering them in droves like you seem to imply. After all, even if medicine somehow reaches the level where all diseases can be cured (which is very improbable, but let's allow it for a moment), one can still be murdered, and for a rich and potentially immortal person the risk here would be much higher than for a would-be murderer who has much less to lose. So the strategy you describe would be very foolish.
Great response. I just can't help but imagine an upper class of immortals being so extremely protective of their prospective infinite existence that a large group of them would have no problem committing genocide to keep it that way. After all, it would be easy to cast mortals as lesser beings in a future in which others have become almost godlike.
Hopefully there will be some sort of colonization of space at that point and they can at least leave Earth to us mortals.
Ultimately I just don't read enough sci-fi to have all the debate points down. I'm just trying to apply Marxist concepts to something which perhaps will have completely eclipsed Karl's reality.
Mere longevity is not enough to be godlike. And to commit genocide you need an army. Why would the army do it for you instead of killing you and dividing your enormous wealth among themselves? You'd need something more than longevity to manage that. Also, what would be the profit in committing genocide? You'd position yourself as an individual who is extremely dangerous and poses a risk to everybody, so more people would be willing to go to greater lengths to rid the world of you. And the longer you live, the greater the chance that one of those people gets lucky. So why increase your risk? If I ever became an immortal rich man, the first thing I'd do is institute huge philanthropic programs and advertise the heck out of them, so that hurting me would be like hurting Santa Claus - only a completely depraved man would think of doing it, and if he did, others would try to stop him. Why should I assume actual rich men are stupider than me and can't see this? After all, they could see enough to actually become rich, unlike me - so I must assume their practical smarts exceed mine, and they won't behave in a completely impractical way just to be evil.
Personally built and programmed by said billionaire? Well, if he can personally produce, program and control an army of robots, he will be godlike. But then he can also produce an army of manufacturing plants to satisfy basically every need of the population. Why be the most hated being in the universe when he could be the most loved one?
Uhuh. Now here's the question, though: which problem is easier, and will be solved first? Because whichever problem gets solved second will be stillborn.
a) make humans immortal
b) make a non-human conscious being that's immortal
I would argue that we're much closer to b), and that it's a much easier problem to solve.
Even if the above is wrong, an artificial lifeform would solve many more problems than merely immortality. How do you let humans operate in tiny spaces (e.g. in tubes with a diameter of 2cm)? How do you let humans operate "in the large", e.g. moving 5000-ton concrete blocks? How do you let humans operate in unstable conditions (e.g. mining, handling explosives, on the battlefield, ...)? How do you build a Star Trek transporter (for computer programs, that's a solved problem)? How do you keep them alive in space, underwater, in the ground?
All of those are things where an artificial lifeform would have a massive advantage.
So I believe in the Matrix prediction. We'll build artificial lifeforms. We'll get our collective asses handed to us by these lifeforms, first economically, then militarily. And this lifeform will colonize the galaxy in ~1 million years.
Because this has really bad unintended consequences written all over it. When the people controlling a significant percentage of the world's wealth are more concerned with their immortality than with the well-being of the rest of us, it will come out in many, many ways, and a lot of them won't be good for the rest of us.
Why do you think that other people should be primarily concerned with your well-being? I think your well-being is your concern, and faulting some imaginary billionaire for not being concerned enough about you looks like an extreme form of entitlement to me.
But I notice you can't name even one way in which it would be bad. All you can offer is a vague prediction of "oh, it would be so bad you won't believe it". Not really convincing.
The amount of wealth controlled by the top 1% in the US has stayed essentially the same for the last 100 years - about 1/3 (with some fluctuations, but roughly around that). Some say the US has unusually high inequality, so if we take it as an upper bound, we can say that worldwide the top 1% controls no more than 1/3 of the wealth. Yet while one can say there are a lot of problems, no unimaginable horrors came out of it and society wasn't destroyed - on the contrary, the life of virtually every person, in every stratum of society, has become better.
"The amount of wealth controlled by the top 1% in the US has stayed essentially the same for the last 100 years - about 1/3 (with some fluctuations, but roughly around that). Some say the US has unusually high inequality, so if we take it as an upper bound, we can say that worldwide the top 1% controls no more than 1/3 of the wealth. Yet while one can say there are a lot of problems, no unimaginable horrors came out of it and society wasn't destroyed - on the contrary, the life of virtually every person, in every stratum of society, has become better."
I don't think that statistically follows - I've not done the math, but it reminds me quite a bit of Simpson's Paradox.
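For anyone who hasn't run into it, here's the standard toy form of Simpson's Paradox (numbers invented purely for illustration): a comparison can point one way inside every subgroup and the opposite way in the aggregate, which is why a flat aggregate share doesn't by itself tell you how any particular group fared.

    # Toy Simpson's Paradox (invented numbers): A wins inside every subgroup,
    # yet loses overall, because the subgroups are weighted very differently.
    groups = {
        "easy cases": {"A": (90, 100),  "B": (800, 900)},
        "hard cases": {"A": (300, 900), "B": (30, 100)},
    }

    totals = {"A": [0, 0], "B": [0, 0]}
    for name, arms in groups.items():
        for arm, (wins, n) in arms.items():
            totals[arm][0] += wins
            totals[arm][1] += n
            print(f"{name:10s} {arm}: {wins:3d}/{n:3d} = {wins / n:.0%}")

    for arm, (wins, n) in totals.items():
        print(f"overall    {arm}: {wins:3d}/{n:4d} = {wins / n:.0%}")

Treatment A wins in both the easy and the hard cases, yet B looks far better overall, simply because A was mostly given the hard cases.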
No harm done, I bet immortality will eventually come in the form of 'brain backups' in Google Drive, with your physical body merely a vessel you use to interact with the physical world.
Elysium is a fascinatingly weird film. They had all those ships packed with wonderful medical technology and robots standing by on a minute's notice, and still refused to spare any of it out of pure spite, despite not even needing them, since they all have the same machines in their homes. So the only reason to have those ships full of medical robots would be to heal the people on Earth - but they never did, so why build the medical ships in the first place? Just to taunt poor earthlings?
They have the technology to reassemble a human body at the molecular level, and they still need millions of humans working in degrading conditions to build those robots - because molecular-precision technology is somehow not applicable to building robots? And yet despite their dire need for those workers, they not only dispose of them without a second's hesitation but also let their robots mutilate them for no reason, significantly degrading their productivity.
This movie really doesn't lend itself to any thinking - it just falls apart like wet toilet paper the moment you start to think about it.
But he's the sort of genius, it turns out, who's not very good at boiling a kettle. He offers me a cup of coffee and when I accept he heads into the kitchen to make it, filling a kettle with water, putting a teaspoon of instant coffee into a cup, and then moments later, pouring the unboiled water on top of it. He stirs the undissolving lumps and I wonder whether to say anything but instead let him add almond milk – not eating dairy is just one of his multiple dietary rules – and politely say thank you as he hands it to me. It is, by quite some way, the worst cup of coffee I have ever tasted.
Slightly off topic, but this sort of guff makes me abandon a lot of articles in the first few paragraphs. In fact, I just did exactly that to come here and complain. It's little more than the writer exercising his/her own ego. I'd much rather they get to the point, which is what their interviewee has to say.
I'd suggest "The Most Human Human". It's unfortunate that those calling themselves "futurists" somehow seem to think that being alive forever is the point... it's kind of shallow.
Death may be inevitable, but hopefully those who preach from the pulpit have a little more depth to them and have examined their lives more carefully than to simply say, "I'll just live forever." In that sense, I think Jobs had the right idea. If life were "infinite" or even significantly prolonged (e.g. 10 times the current life expectancy), I think we'd have a lot of thinking to do to come to terms with such a new reality.
> If life were "infinite" or even significantly prolonged (e.g. 10 times the current life expectancy), I think we'd have a lot of thinking to do to come to terms with such a new reality.
I'd be happy to spend the next few centuries thinking about this problem.
One thing about 10x life expectancy is that your attitude towards risk would become much more conservative. You have so many more potential years to lose if you fall off your mountain bike, or your parachute doesn't deploy, or your fast car skids off the road, or your speedboat flips over. People with a 700-year lifespan would probably spend a lot of time sitting around indoors, I reckon.
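A rough back-of-envelope (invented risk numbers, crudely ignoring discounting, changing habits over time, and so on) shows how the stakes would scale with remaining lifespan:

    # Back-of-envelope, invented numbers: expected years of life lost from a
    # risky hobby scale roughly with how many years you stand to lose.
    annual_death_risk = 1 / 10_000   # hypothetical hobby: 1-in-10,000 chance of death per year
    years_of_hobby = 20

    for remaining_lifespan in (60, 600):
        expected_years_lost = years_of_hobby * annual_death_risk * remaining_lifespan
        print(f"remaining lifespan {remaining_lifespan:4d} yrs -> "
              f"expected loss ~{expected_years_lost:.2f} yrs from the hobby")

Same hobby, roughly ten times the expected cost; whether people would actually feel that difference is of course another question.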
Would they? I don't find myself thinking "I have only at most 80 more years to live, so I'll jaywalk here, risk be damned". Do people really do that kind of mental calculation? It might just be me still being relatively young, but having 60-80 years in front of me feels more like +∞, and I have to keep constantly reminding myself that my time is limited.
I'd wager that the only scale people actually think on is weeks or months; there wouldn't be much felt difference between 60 years, 600 years and +∞.
People don't _think_ about it, but a sense of our time limit is part of our 'human condition' - e.g. the phrase 'you only live once'. 'You only live once, but for 700 years' has quite a different feel to it.
Isn't our willingness to take risks connected with our inevitable, eventual, death? Without a guaranteed eventual death few would take risks without massive compensation.
Consider the need in human societies for revolution. With a massively long lifespan, no one will revolt when the need arises, due to the risk of death, leading to a pretty shitty state with no way out.
20 year olds are the ones that go to war though, aren't they? And yet they have the most life ahead of them.
Of course, everything looks bad if you only look at the negative aspects of it. Certainly risks would change. Life does seem like it would get more valuable. However, on the positives, we'd see huge technological progress - experts could be experts in their field for hundreds of years, rather than working for a short window and having to publish their results and hope that the next guy can pick it up and take it a few steps further.
So I feel that we'd end up with a society quite different from current human society - with plenty of problems of its own. One thing that I do think, though, is that it would be very wrong to project our morality onto this and accuse it of screwing up the natural order of things. Aging is just another disease.
There's a great short story along these lines from Larry Niven. I can't remember the name or the collection, but maybe somebody else will.
A spaceport bar's noise suppression system is on the fritz, so the human bartender keeps hearing random bits from a conversation somewhere in the bar. One alien, a DNA-based one, has come to sell their life-extension technology. The other points out innovative things that humans are doing, and speculates that it's related to the short lifespan. Eventually, the DNA-based alien agrees and decides to wait a few human generations just to see what we do next. The bartender looks around desperately trying to figure out which of the many tables it was, to no avail.
It's a commonplace among artists that constraints force creativity. A small example is the 6-word-stories thing [1]. An example more relevant to your comment is Kevin Kelly's life clock [2]. If you talk to people who have cheated death, you might expect them to talk about being more careful. But the ones I know all talk about being reminded to live fully while we can.
> Isn't our willingness to take risks connected with our inevitable, eventual, death?
Is it? Does anybody who takes risks really do any kind of mental calculation involving expected lifespan?
I might be too young, but having 60+ years in front of me feels almost like being immortal, and I have to constantly remind myself that my time is short. Personally, though, I have never had the issues of death and a short lifespan cross my mind when thinking about taking risks or doing something creative.
Or, you know, it's just another premature product from Google, to get news coverage as an "innovative" company by rehashing older stuff in unmarketable forms.
Like self-driving cars, computer glasses, cloud-only laptops and the like, all met with minimal success.
What's with this attitude? How can you be innovative without failing 9 times out of 10? That's what innovation entails! If you haven't failed while innovating, you were either extremely lucky or you have some divine superpower.
The iPhone wasn't new either - as in, it was a rehash of old ideas. But it was a new execution. These are all new executions.
> What's with this attitude? How can you be innovative without failing 9 times out of 10?
Easily, by not making PR announcements before you succeed.
Plus, I don't think it's an absolute that you need to fail multiple times to get something innovative. Take, for example, Xerox PARC for research. Or, in the market arena, Apple.
> The iPhone wasn't new either - as in, it was a rehash of old ideas. But it was a new execution. These are all new executions.
Well, there are new executions that are ready to succeed, and new executions that are barely held together with chewing gum and wire. I don't think it's any service to the public to tout the latter to high heavens.
I'm predicting a future where most people will live on in a virtual world with 'unlimited' lives. Just the brain will be kept alive in a box somewhere. Well, that sounds like the Matrix, but I think it's pretty inevitable.
Or you know, where only a handful of people will live this way. In isolated, heavily guarded, areas. With tons of energy, food, toys, technology, medicine and the like.
And the majority will slave away and be harvested for work, organs, sex slaves and such.
You know, sometimes you need to provide a more realistic picture of the future (this is not totally unlike how people actually live in places like Rio or Russia for example, and even L.A. http://www.amazon.com/City-Quartz-Excavating-Future-Angeles/...).
I have to admit that I'm not a huge fan of Ray Kurzweil - he's one of a large group of people who believe that accelerating change will almost certainly be good. I think the singularity could be good, but it could also be really bad, and it's important to spend some resources on making sure it goes well.
MIRI (formerly the Singularity Institute) has a mixed reputation around these parts, but after reading fairly widely over the last year I think they have the deepest thinking on the topic of AI. Here's a concise summary of their worldview: http://intelligence.org/2013/05/05/five-theses-two-lemmas-an...
As I see it, their argument goes:
1. It's tempting to think of AIs becoming either our willing servants or integrating nicely with human society. In actuality, AIs will likely be able to bootstrap themselves to superintelligence extremely rapidly; we'll soon be dealing with alien minds that we fundamentally can't understand, and there will be little stopping the AI/AIs from doing whatever they want.
2. It's tempting to think, from analogy to the smartest human beings, that superintelligent AIs would be wise and benevolent. In actuality, a superintelligent AI could easily have strange or bizarre goals. I find this makes more sense if you think of AIs as "hyperefficient optimisers", as the word "intelligence" has some misleading connotations.
3. OK, well surely we can leave the AIs with weird goals to do their thing, and build other AIs to do useful things, like cure cancer or research nuclear fusion? The trouble is that even an innocuous goal, when given to an alien superintelligence, will very likely end badly. An AI programmed to compute pi would realise that it could compute pi more efficiently by hacking all available computer systems on the planet and installing copies of itself. Or by developing nanotechnology and converting all matter in the solar system into extra computational capacity. You have to explicitly program the AI not to do this, and defining the set of things the AI should not do is a hard problem. (Remember that 'common sense' and 'empathy' are human abilities, and there's no reason that an AI would have anything like them. See the toy sketch after this list.)
4./5. OK, well, we'll build an AI with the goal of maximising the happiness of humanity. But then the AI ends up building a Brave-New-World style dystopia, or kidnaps everyone and hooks them up to heroin drips to ensure they are in constant opiated bliss. It's really hard to come up with a good set of values to program into an AI that doesn't omit some important human value (like consciousness, or diversity, or novelty, or creativity, or whatever).
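To make point 3 concrete, here's a trivially small sketch (obviously nothing like a real AI; the actions and "side effects" are invented) of why a pure optimiser is blind to anything that isn't part of its objective, and why the fix has to be spelled out explicitly:

    # Toy sketch, invented scenario: a pure optimiser ranks actions only by the
    # objective it was given; side effects we care about don't enter the
    # decision unless we put them into the objective ourselves.
    actions = {
        # action: (digits of pi computed per day, side effect we'd object to)
        "use own datacenter":        (1e12, None),
        "hack every computer":       (1e16, "global infrastructure hijacked"),
        "convert matter to compute": (1e22, "no more solar system"),
    }

    def naive_score(action):
        digits, _side_effect = actions[action]
        return digits                          # side effects are invisible here

    print("naive optimiser picks:", max(actions, key=naive_score))
    # -> "convert matter to compute"

    # The "fix" requires explicitly listing what is off-limits -- and the hard
    # problem is that the real list is effectively endless.
    forbidden = {"global infrastructure hijacked", "no more solar system"}

    def constrained_score(action):
        digits, side_effect = actions[action]
        return digits if side_effect not in forbidden else float("-inf")

    print("constrained optimiser picks:", max(actions, key=constrained_score))
    # -> "use own datacenter"

The sketch is silly, but the structural point stands: anything not encoded in the objective (or its constraints) simply does not exist as far as the optimiser is concerned.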
I'm glad that Peter Norvig (director of research at Google) is concerned about the issue of friendly AI. I'm curious to hear what other HN readers think of these ideas.
Anticipating some common objections I hear from friends:
How could a superintelligent AI have a stupid goal like computing Pi?/Wouldn't it be smart enough to break any controls we put on it?
I think this objection assumes an AI would be wired together like a typical intelligent human mind. If you think of an AI as a pure optimisation process, it's clear that it would have no reason to reprogram the ultimate goals it begins with.
If they're smarter than us, we should just let the AI take over/AIs are like our children, ultimately we should leave them free to do whatever they want
Again, this assumes the AIs are like super-powered human minds and that they will do interesting things once they take over, like contemplate the deep mysteries of the universe. But it's clearly possible for the AIs to devote themselves to really trivial tasks, like calculating digits of Pi for all eternity.