Is This Economist Too Far Ahead of His Time? (chronicle.com)
136 points by Hooke on Oct 19, 2016 | 116 comments



The book’s premise is that in about a century, it will be possible to scan human brains at "fine enough spatial and chemical resolution," and to combine that with good-enough models of signal-processing functions of brain cells, "to create a cell-by-cell dynamically executable model of the full brain in artificial hardware, a model whose signal input-output behavior is usefully close to that of the original brain."

Like Bostrom and Kurzweil, I think this guy is seriously underestimating the difficulty of biology. I don't believe in souls or non-computable woo going on in our brains. I do believe that biology is insanely complicated compared to computing or physics; physicists, engineers, and people from a computing background who've never worked in biology tend to handwave incredibly difficult problems. See also: the long and mostly unsuccessful history of people who thought they could replace animal/cell-culture drug testing with models in silico.

On top of the sheer audacious difficulty of creating whole brain emulations, there's the problem of competing with boring old "narrow" AI. Surely an economist of all people should recognize the feedback loop that makes narrow AI profitable first, which attracts more investment, which unlocks more capabilities without biological emulation... how many of today's jobs are going to be left un-done by narrow AI a century from now, even if whole brain emulations are finally practical then?


I think you are missing the main thrust of the book. He is not trying to convince you that em-based AI is the likely future. He chose the less-examined possibility as a starting point, but the main part of the book enumerates the predictions that flow from that premise.

This is interesting and worth examining even if the premise is wrong. Sometimes examining a familiar topic in unfamiliar terrain yields new insights.


"False premise" is a logical fallacy, not a virtue.


A false premise isn't a logical fallacy. Arguments can be logically valid yet wrong, and vice versa.


Right, and taking a premise you believe to be false and mapping out its consequences has a long and noble history; e.g., non-Euclidean geometry.


Okay then... if not a logical fallacy, then just a fallacy ("mistaken belief").

The sort of prognostication that would do poorly on a debate team, because its premises are unsound.


You don't have to think that this is the most likely scenario to think this is a very important book. Even if the chance that em AI beats ab initio AI is only 5%, it's a dramatically different and important scenario to consider.


I attended a primer lecture on the immune system. Holy hell. At least 200 different types of T cells. At least 200 other types of immune cells. Then there are all their interactions, signaling, locating, memory functions. I honestly don't know if one can ever get a handle on the complexity.


It is not complexity but diversity. The functions those cells perform are mostly understood, as are most of the mechanisms by which they work. There are many variants of a similar function and mechanism.

That said, immunology is still quite young too, but compared to it, neuroscience is not even born.


For the most part, we understand how two particles can interact. Simple rules.

Put 200 of those particles together, and prediction becomes very difficult and complex.


Alan Kay's inspiration for OOP and message passing seems to have come from working in microbiology.

Maybe thinking about/finding out how biology does its computation will be important for machines that can scale very well?


I think a lot of these predictors underestimate prediction itself. To their credit, there is a lot of emphasis on trends in economies or technologies. That's probably the best tool for predicting, but predicting is hard. I think it's hard to really know how complicated "biology" is from our current vantage point, where so much is still a mystery. Who knows what'll be known decades from now.

I like Elon Musk's perspective as a starting point though. The trends that all these predictions rely on are man-made. Technology improves, gets more powerful and more prevalent, when smart people work hard on it. It's not a physical force; it's caused by people.

Anyway, if you are expecting narrow AI to do so much, who knows what the relationship between that technology and "true" AI (whatever that means exactly) is. It may not matter much, as you say.


> I like Elon Musk's perspective as a starting point though

What is Elon's perspective?


He talks about the time estimate in the book. He says he understood the need for a time estimate, and so gives the one he thinks is best. But the important thing is that he feels confident it will happen before artificial intelligence that isn't full-brain emulation, whether that's in one century or four centuries.


"confident" is too strong. I think the chance is high enough to make the premise worth considering in this detail.


> On top of the sheer audacious difficulty of creating whole brain emulations, there's the problem of competing with boring old "narrow" AI. Surely an economist of all people should recognize the feedback loop that makes narrow AI profitable first, which attracts more investment, which unlocks more capabilities without biological emulation... how many of today's jobs are going to be left un-done by narrow AI a century from now, even if whole brain emulations are finally practical then?

This is a curious blind spot. With deep learning algorithms making easy work of master Go players, there has to be enormous untapped potential in GPGPU silicon for AI.


He's stated elsewhere that he thinks the chances of this scenario are only ~30%, IIRC.


Based on what, though? The hazard of dealing with Bayesian epistemologists is that they will confidently state percentages that they literally made up.


Based on his own personal belief. Probabilities are a very useful way to precisely state the certainty of beliefs.

It means that if you offered him a bet that it would happen at odds better (for him) than about 1:2, he would take it, and he would take the other side at worse odds. Of course betting on the far future is impractical, because money might not be worth anything if it does happen, but in principle that's how these numbers can be interpreted.
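A minimal numerical sketch of that reading (my own illustration, not anything Hanson or the parent wrote):

  # A ~30% credence, read as betting behavior: a bet paying b:1 on "yes"
  # has positive expected value exactly when b > (1 - p) / p.
  p = 0.30
  breakeven = (1 - p) / p          # ~2.33, i.e. fair odds of about 3:7
  def ev(payout, stake=1.0):
      return p * payout * stake - (1 - p) * stake
  print(breakeven)                 # 2.333...
  print(ev(3.0))                   # +0.20 per unit staked: take the "yes" side
  print(ev(2.0))                   # -0.10 per unit staked: take the "no" side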


Yes, I know where Bayesian epistemologists get their numbers.

The danger is that they then announce these numbers as if they have any real-world basis, and that they are taken as making a claim about objective reality.

There's no evidence I know of that pulling a number out of your backside and running it through Bayes' theorem produces better results than just pulling the number out directly and running with it. Indeed, the real-world track record of LessWrong/MIRI (to bring this back to topical relevance) strongly suggests it's less effective.


He didn't claim it had a "real-world basis" or that he "ran it through Bayes' theorem". He just said he expects about a 30% chance that he is right.


Is that different from any other epistemologists?


Yes, biology is terrifically complex and there exists a near-universal tendency to oversimplify it.

However, the rate at which humans are improving our ability to understand terrifically complex systems is itself accelerating and 100 years is a very long time.


The initial stages of a sigmoid curve resemble exponential growth. I'm pretty sure that the exponential character we've witnessed is the result of miniaturization of the transistor and the industrialization of large third-world economies, both of which are running out of steam. If we don't get some major new breakthroughs, expect to see the other side of that sigmoid soon enough.
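To make that concrete, a toy comparison in Python (all parameters made up for illustration):

  import math
  # Logistic (sigmoid) curve vs. a pure exponential with the same rate k.
  L, k, t0 = 1.0, 1.0, 10.0        # ceiling, growth rate, midpoint
  logistic = lambda t: L / (1 + math.exp(-k * (t - t0)))
  exp_curve = lambda t: L * math.exp(k * (t - t0))
  for t in range(8):
      print(t, round(logistic(t), 5), round(exp_curve(t), 5))
  # The two columns agree almost perfectly early on; only near the
  # midpoint t0 does the logistic bend away toward its ceiling.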

We may see another brief exponential curve with the development of carbon nano-computers, likely netting at least an order of magnitude additional computing performance. I have strong doubts about the feasibility of general purpose quantum computing (though it will definitely have niche uses).


If you extrapolate from the last hundred years of progress to the next hundred, it's plausible that there will be at least as much technological progress as these ideas would require.

The problem with such prognostication isn't so much that these things can't happen. It's that unexpected events will happen first and turn the future in a different direction.


There are others who think it an important possibility: http://www.fhi.ox.ac.uk/wp-content/uploads/brain-emulation-r...


I was involved with a research effort to predict certain cell biology features via molecular dynamics simulations between 2003 and 2006. It's what I wrote my thesis on. After school was done and my research group's grant ran out I started writing software for a living instead of looking for another research position.

I still keep up with molecular and biological simulation research efforts because it's intrinsically interesting to me. The models are improving and better results are also possible due to the increase in computing power since 2006. But the progress appears slow compared with AI/ML research over the same period.

Any time covalent bonds are made or broken, quantum effects are at work. Eliminate all the chemistry that involves forming or breaking bonds and you're throwing away most of the chemistry in a living organism. Getting quantum effects right with ad hoc modifications to molecular dynamics models based on classical physics is very difficult; there's a lot of risk of overfitting to what's easy to observe. Full quantum calculations have terrible scaling properties -- O(N^7) for CCSD(T), the most popular high-accuracy approximation of QM, where N is proportional to the number of electrons in the system. Even the best, slowest quantum methods (CCSD(T) and even more exotic ones) still have many cases where they fail to reliably reproduce experimental results.
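A back-of-envelope illustration of what O(N^7) scaling means against hardware growth (both numbers below are idealized assumptions):

  # Assume cost ~ N^7 (CCSD(T)) and a Moore's-law doubling every 18
  # months, generously sustained for a full century.
  doublings = 100 / 1.5                  # ~66.7 doublings
  speedup = 2 ** doublings               # ~1.2e20x more compute
  size_gain = speedup ** (1 / 7)         # growth in N at constant runtime
  print(f"{speedup:.1e}x compute -> {size_gain:.0f}x larger N")
  # Roughly 700x: from hundreds of atoms toward a single protein,
  # nowhere near a whole cell, let alone a brain.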

It's possible that you don't need to simulate anything down to the molecular level to reproduce all the biologically interesting features of cells. I don't want to rule it out completely. But the enormous financial incentives to simulate biology in silico (for e.g. accelerating pharmaceutical development), the large research investments made so far, and the limited progress to date tend to make me think that all those tiny and extremely computationally expensive details at the molecular level are biologically relevant.


This has been my main objection to brain-inside-a-computer thinking. If we could simulate the universe inside a computer we could do all kinds of neat things, such as drug development in a computer, but it isn't algorithmically possible. The Kurzweillians are just assuming that we'll wave the magic wand and have efficient algorithms for everything, such that computer simulation of reality will be trivial. If an exponential-complexity algorithm sits in the middle of your problem, like protein folding for example, it will NEVER be practical to simulate in a computer on any non-trivial instance.


I don't want to rule out better algorithms developing very good approximations for what currently appear to be daunting problems like protein folding. But there are a lot of problems like folding out there where even a century of Moore's Law marching on wouldn't be enough to solve large instances with today's algorithms. And Moore's Law looks like it's going to do more shuffling forward than marching forward in the future.

So basically I agree with you: there's no generic march-of-progress that assures a world of Kurzweillian wonders. I want to see more concrete evidence of progress in relevant areas before I start entertaining life after the Singularity. (I expect a future profoundly shaped by many "narrow" AIs rather than AGI precisely because there has been so much concrete progress with the former compared to the latter.)


This brings to mind an interesting line of thought that I ran across via Peter Watts writing about it in Blindsight [1]. In that link, page down to the "Sentience/Intelligence" header and read that section for a concise introduction: sentience is separate from intelligence. Looking at what computer science research and industry has accomplished thus far, I'm very open-minded about the idea that sentience (feeling and perceiving) is a separate, older, deeper layer of brain processing from sapience (reasoning and intelligence).

At a layman's level of comprehension, I suspect that sapience is mostly what we are reproducing in research and industry today, and not sentience, because sentience is far more difficult to plumb. Your explanation of the challenges in quantum simulation methods, and the uncertainty over whether that level of detail is even necessary, seems to raise the same line of questioning. This has interesting ramifications. We can build OCR and even put out of work lots of formerly middle-class attorneys who made their living in trial discovery phases. But we have yet to create a second Industrial Revolution with general-purpose humanoid-form robots that everyone can own and that can clean houses, mow lawns, fold clothes, make beds, wash and chop vegetables, pick strawberries, etc., all supposedly "simple" tasks.

[1] http://www.rifters.com/real/Blindsight.htm#Notes


I think this is right on. People use rational thought to evaluate decisions, but they use emotions to make decisions. If you take away a person's ability to feel emotions, they become paralyzed with indecision. There are at least a few real-world examples that have been studied.

There's a strong argument that our rational intelligence evolved to serve the drive to survive--and that emotions are the signals of that drive. It's becoming more clear that even though most animals aren't capable of symbolic reasoning and language, they do experience emotions and make decisions accordingly.

Who is programming a computer to feel emotions? I'm not aware of any significant work in the direction. (If anyone is, please link it!)


To be fair, labor for those tasks is so cheap that robots are uncompetitive price-wise.

The money is in putting that lawyer out of work, not Hector, the immigrant strawberry picker.


> Who is programming a computer to feel emotions? I'm not aware of any significant work in the direction. (If anyone is, please link it!)

Affectiva is working on reading emotions, and there is some philosophy work on whether or not an AI can feel emotions [1]. A Russian group claims they will release an emotional robot around 2018, but scant details on what that even means [2]. More scholarly work [3] [4] [5] is ongoing, but slow going.

[1] https://en.wikipedia.org/wiki/Philosophy_of_artificial_intel...

[2] http://www.dailymail.co.uk/sciencetech/article-3680874/Scien...

[3] http://www.sciencedirect.com/science/article/pii/S0747563204...

[4] http://link.springer.com/chapter/10.1007/978-94-009-2792-6_1...

[5] https://books.google.com/books?hl=en&lr=&id=OqbMnWDKIJ4C&oi=...


I'm not sure what you are responding to, but general-purpose humanoid robots that could be shown how to do things like picking strawberries (rather than a purpose-built strawberry picker) would literally be the world's biggest industry by staggering margins overnight.


The money is in both, and arguably there are greater knock-on effects from automating the tasks whose labor is currently so cheap that robots are uncompetitive price-wise.

A humaniform robot imbued with sentience, plus what little progress we've made on (emulating?) sapience, would be sufficient to substantially disrupt (not entirely replace) the human labor demand in the following industries:

  * Residential lawn care for Single Family Residences, a $10B annual industry in the US alone. http://bit.ly/2eu4Ulm
  * Residential maid service, a $30B annual industry in the US alone. http://bit.ly/2eSQ3oL
  * Residential home senior health care, a $180B annual industry in the US alone. http://bit.ly/1ufk95Y
  * If residential meal prep and cooking were mostly or even entirely automated, that could convert the personal-chef
    industry from hundreds of millions into one that poaches a significant portion of the $780B fast-casual industry.
    http://bit.ly/2dJweO0
That's just four industries, with a combined annual revenue of $1T in the US alone, with tremendous consumer demand. There are lots of other possible industries that can be disrupted with such a "simple" robot, especially in the commercial sectors, and globally. It is not a stretch to say such a robot is a multi-trillion dollar opportunity.

Suppose you offer a $100K humaniform robot with OTA software updates that can perform most household chores, geofence young children into a specific locality, and offer basically the sensory range and kinesthetic sense of a human and the object-recognition depth of current AI software, but with only the sapience we associate with AI today. Amortize the payments over 30 years, and I expect a billion households will beat a path to your door and happily pay that amount.
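For what it's worth, the amortization arithmetic (the interest rate here is my assumption, not the parent's):

  # Standard loan-payment formula: $100K over 30 years at an assumed 5% APR.
  P, years, apr = 100_000, 30, 0.05
  r, n = apr / 12, years * 12
  payment = P * r / (1 - (1 + r) ** -n)
  print(f"${payment:,.0f}/month")        # ~$537/month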

It would be hard to predict the knock-on economic and other effects of the development of mass-produced personal household robots. Probably similar to trying to predict the demand upon lithium supplies a century from when the Model T was introduced. Perhaps something similarly unexpected like divorce rates dramatically drop due to fewer domestic disputes over household chores because most of those chores disappear.

My thesis is that the people building robots are not dumb: they already realize the financial potential in figuring out sentience, and they are already aware at some level that the sentience I speak of is incredibly difficult to build because we understand so little about it. The AI field comes at the problem from the sapience angle/level for the good reason that sapience is closer to our current form of computer technology than sentience is.


I briefly worked on a molecular dynamics project for small RNA fragments, with atoms numbering in the hundreds. It needed quantum-effects simulations to understand two states of the molecule, which could not be explained otherwise. It also required clusters of computers to finish. Modeling proteins is an entirely different game, especially considering the infinite possibilities of proteins and the billions of different proteins that exist in the wild.


Anyone who hasn't yet, check out Black Mirror on Netflix. It has a lot of ideas in it that I am surprised to see in a show in 2016.

I love Elon Musk, but I think he is solving the wrong problem when it comes to making human beings a spacefaring species. The fragility of human flesh is the barrier to colonizing space. You can try to solve this problem by wrapping it in a contained atmosphere, or wait a thousand years and send humans to Mars in a different kind of space ship: one made of metal and silicon that can recharge its batteries so long as it has a line of sight on a distant star... maybe, as the article says, "a robotic body [standing] roughly two millimeters tall".

The greatest challenge we face is keeping this incubator called Earth alive long enough for us to develop the technology to abandon our flesh. But I imagine that if that day ever comes, what it means to be human will have been destroyed in the end anyway.


>or wait a thousand years and send humans to Mars in a different kind of space ship.

Pardon the pun, but why on Earth should we wait 1000 years (or whatever it actually takes) for something when we don't have to?

Our fragile human flesh in a contained atmosphere works well enough and we're roughly capable of doing it now, give or take a couple decades.

Then we have 900+ years to improve the containers we use to maintain our preferred atmosphere while waiting for whatever form this futuristic species you describe takes.


Basically we can't survive space radiation... Even going to Mars.


We can though. Using today's technology we can easily go to Mars and live in underground habitats and be subject to only a moderate increase in health risks from radiation absorbed during the trip. Said health risks would be less than what a decent number of current Earth-bound professions face, and those people don't even get to go to Mars.


> easily

What we are currently doing on Mars remotely is absolutely incredible and at the upper limit of our present capabilities. Don't think for a second that having "today's technology" does not still require a herculean effort, if we even have a 5% chance of pulling it off before funding (which would be in the trillions) is cut.


I was specifically refuting the claim that the radiation problem is a blocker, when in fact it is just a minor nuisance. Storms posed more of a danger to the first ships exploring the new world than radiation will to Mars colonists, and that didn't stop anyone.



Do you have a link to a specific page within that report demonstrating your point? I skimmed through it and all I saw is that it says that there are some risks. Obviously I'm not denying that. It doesn't say at all that the risks are so high that we can't do missions with current technology (which provides minimal to no radiation shielding in transit). Mainly it's saying that further study is merited, which I absolutely agree with. I also think we don't need to hold off on going until we achieve perfect mitigation, and I believe most NASA employees would agree with me on that as well.


I think brain-upload is probably the only way to do interstellar travel, even if you synthesize a new body on the other side. For someone alive today, if you want to do interstellar or non-colonization space travel, going after immorality is the best option IMO.

I don't think that earth-preservation is a worthwhile quest on a personal level. There are dozens of people who could destroy the whole planet since the invention of nukes. And with climate change for example, any kind of proportionate action would require an agreement between giant (100M to 1B+ people) political organizations to voluntarily makes themselves less wealthy -- although making tech that is greener and cheaper/better is probably the most significantly helpful option.


> going after immorality is the best option IMO

Now there is an ironic typo/autocorrect. I think the word you were looking for is immortality, unless of course you are making a scathing criticism that the pursuit of self-preservation for the purpose of interstellar travel is against the laws of morality.

Thanks for a chuckle.

I do think, though, that emulation of "specialized" hardware with generic hardware is a complex problem space. While it's now possible to recreate classic gaming systems with FPGAs, for example, in some cases we are only able to offer an approximation, even though the computing power of the host exceeds the power of the emulated CPU by several orders of magnitude. My thinking is that emulating wetware with hardware will require hardware several orders of magnitude more powerful than the wetware, and I suspect the wetware is already close to the physical limits of the universe for several key problem spaces.


*immortality (oops :S)

Sure, I don't necessarily disagree with anything you're saying, and given all that, I still think it is the best chance.

#1: in the short term, the quantity of hardware doesn't matter. If you needed a full data center costing $10M per year, it would still be worth it for some people. The idea is to get into a host that is replaceable, and then you can work on making it smaller.

#2: in the short term, you don't need to run in real time. If you are just trying to survive until better tech comes along, running at 1% means the equivalent of 3.65 days per year: maybe enough to survive until hardware improves. Taking this to an extreme, 100 people could pool resources for protection and tech improvement while staying in deep storage and only running their minds very seldom.

Given the above two points, all that really matters to me is that I can find a way to upload my mind into binary data which I know can be simulated faithfully. All I need is upload, storage, and a recovery plan. If the recovery plan looks like it will take 1000 years, fine, I still prefer it to nothing. If I think there is a 1% chance of success over the next 1000 years for immortality, I believe I will do it.

#3: we know for a fact that the functionality of the brain/body is achievable in the space/power envelope of the brain/body. I could easily tolerate a 100x space and power increase. The human body is approximately 100 watts and 100 litres, so 100x is 10,000 watts and 10 cubic meters: 13 horsepower and the size of a compact car -- fine by me. I think I could survive immobile and incorporeal for 100 years with a good library if, for example, I had to be anchored to the seabed somewhere and run off geothermal energy. The more you expand your lifespan, the more options open up. If you can run yourself at very slow speeds, an immense number of options open up.


For #2: why does your mind need to run in a simulation at 1% rather than just wait in cold storage until the technology allows for it to be run at 100%?

And then, what if I made the argument that because the universe will inevitably repeat itself again, it is technically its own form of 'cold storage' that will one day awaken again to simulate you...?

This is how I've found a little peace with death. That given there is an infinite amount of time, why can't it happen again? It's already happened once...

So after I/we die, an unfathomable amount of time will pass in an instant. Our sun will expand and devour the Earth. Eventually the universe will collapse. But after a universe of universes have swirled around each other, being born and dying, one day - when the right conditions are met - we will awake again?

I think it raises a bigger question though. Should this life, should/does humanity deserve to be repeated infinitely? When you add up all of the pain and suffering and weigh it against the beauty, is it worth it? For now I think it is.


I would consider cold storage to be preferable, but running at a low speed might be necessary for self-defence and what essentially amounts to data verification, the ability to rebuild to a certain redundancy level.

In my view if time loops immutably I gain nothing: same program, same input, same result.

I fully expect to die and I think I am ready for it, but I'm going down fighting. In terms of abstract moral questions, ten years of Catholicism cured me of any curiosity I had for that sort of thing.


I recently discovered Black Mirror.

It's a great and fascinating series. Finally, something other than the mind-numbing pabulum that is most TV.

I especially loved season 1 / episode 2.

---

The way our brain works, all experience, self-consciousness, and awareness is intricately linked to physical perception. If we were to go the 'brain in a petri dish' route, we would have to fake all the signals a body would send to the brain. If we just start emulating the brain physiology digitally, that is easier, of course.

But I would not call the result of either method a 'human being', but something inherently different.


Watch Utopia.

http://www.channel4.com/programmes/utopia

I judge it better than Black Mirror. Don't google any information about it because it will sound stupid, just watch it outright.


What you describe is the central idea in 'I of the Vortex: From Neurons to Self' by neuroscientist Rodolfo Llinás.

Self-consciousness and awareness are linked both to physical perception and to the prediction and anticipation of said perception.


Just read up on Black Mirror; it seems interesting. Thanks for the recommendation. For those seeking more knowledge on DNI and the validity of cryonics as an elixir of life, I highly recommend getting a copy of Michio Kaku's book, The Future of the Mind.


There is a TV series better than Black Mirror believe it or not. I'll shill for it because it's an amazing work of art.

It's called Utopia, it's a conspiracy thriller from Channel 4, and if it doesn't send chills down your spine and raise your hackles then you aren't alive.

http://www.channel4.com/programmes/utopia


Living in VR, and the implications thereof, have been written about in SF for decades. The earliest work is "Simulacron 3", from 1964. It's been done to death in anime. "Sword Art Online" is probably the best known stuck-in-VR anime, but "Log Horizon" deals with the philosophical aspects in more depth. Hanson is not breaking new ground; he's recycling SF as economics.

Read Hanson's list of "wild ideas".[1] "By 2100, the vast majority of "people" will be immortal computers running brain simulations." (And the rest will be keeping the server farms going?) "If we keep writing down common sense datums until 2100, we can make computers as smart as people." (After 30 years of Cyc trying that, that's probably a no.) "If we allowed complete freedom of contract, law could be privatized, to our common benefit." (He's at George Mason University, which is a right-wing think tank, and has to say stuff like that to get tenure.)

[1] http://mason.gmu.edu/~rhanson/wildideas.html


> Hanson is not breaking new ground; he's recycling SF as economics.

World-building for purposes of entertainment generally has little to do with successful prediction. Hanson has made this point in a few places, eg, http://www.overcomingbias.com/2015/11/science-fiction-is-fan.... I'm not sure if you've read Age of Em but it didn't feel like reading scifi, so much as reading an encyclopedia.

RE: the list of "wild ideas." He estimates that maybe a third of them are true. Quoting a few and acting shocked that that's the case is not a very epistemically hygienic criticism.

BTW if anyone wants to read a summary that carries some of the tone and emphasis of the book itself, https://casparoesterheld.com/2016/08/30/the-age-of-em-summar... is pretty good.


> World-building for purposes of entertainment generally has little to do with successful prediction

Do you have any evidence that that's true? A decent number of ideas portrayed in science fiction have come true, so clearly successful prediction is going on.

Ultimately, economist or author are both using imagination and extrapolation. I don't see any particular reason to expect better results from the economist.


Unfortunately no, I don't have good evidence.

It's an instance of a more general intuition that excessive consumption of fiction distorts people's beliefs about the world. For example, at some point I saw a study that showed people who watched more TV tended to overestimate rates of violent crime significantly, but I can't find it now.

Anecdotally many people seem to think that human genetic engineering would create permanent class divisions, and they tend to refer to fiction, eg, Gattaca, when asserting this. But that doesn't really make sense. Any rational nation-state would subsidize it massively for the population once it was cheap enough.


> clearly successful prediction is going on.

Throwing out all possible wild ideas, some of which happen to come true, isn't what "prediction" means.


Given the poor prediction reputation of economists, their attempts probably match your description as well.


> (He's at George Mason University, which is a right-wing think tank, and has to say stuff like that to get tenure.)

Hanson was an extreme libertarian long before he got tenure or joined GMU, or even before he invented prediction markets.


I know. I think I met him when Xanadu was being developed. (Xanadu was a pre-WWW technology for storing and distributing hypertext documents. Everything is pay per view in Xanadu. The Xanadu crowd were mostly fanatical libertarians. Everything is a market, with micropayments for everything.)


Does that include Ted Nelson himself?

My interest in Xanadu has just dropped a few orders of magnitude.


Nelson's vision was something like a wiki, where everybody paid to read, and anyone could edit or fork the document. You got paid for how much of your stuff was read. Here's part of his original paper.[1] Towards the end of that paper, you can see him describing something like Github, with all the branching stuff, but with a better UI and intended for text documents, not code. This was in 1974. He was way ahead of his time.

He tried to architect a system to do this, and it was insanely complex. It had strong internal consistency requirements, so it wouldn't scale out or parallelize well. It had explicit links all over the place, which was how people thought about databases back then. They were combining the application logic and the database architecture, which resulted in a horrid mess. Today we know to decouple those. Github is built on top of a key/value store. Wikis are built on an SQL database. Works fine.
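A toy sketch of that decoupling: a content-addressed key/value store, git-style (illustrative only, not git's actual internals):

  import hashlib
  store, refs = {}, {}                   # blobs keyed by hash; named pointers
  def put(content: bytes) -> str:
      key = hashlib.sha1(content).hexdigest()
      store[key] = content               # identical content dedupes for free
      return key
  refs["main"] = put(b"first draft")
  refs["fork"] = put(b"first draft, revised")   # a fork is just another pointer
  print(refs)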

Eventually, in 2014, there was a working demo of Xanadu. Here's a view of religion in Xanadu format.[2]

[1] http://www.newmediareader.com/book_samples/nmr-21-nelson.pdf

[2] http://xanadu.com/xanademos/MoeJusteOrigins.html


> (And the rest will be keeping the server farms going?)

They will probably figure out a way to speed up the simulation faster than 100% and use it to develop weapons to kill each other, until the humans in the simulation are the only survivors.


My favorite writer wrote a great book review of Robin Hanson's seminal work 'Age of Em' here:

http://slatestarcodex.com/2016/05/28/book-review-age-of-em/


I once wrote a book, in 2005, about the inevitable fusion of central banking theory, economic psychology, and "good-enough" artificial intelligence. Whole brain emulations weren't required. Instead, the mass expansion of mobile devices and the consolidation of civil behavior into those devices (purchasing, voting, socialization, documentation, research, institution interactions, welfare benefits, etc.) allows a sufficiently powerful AI to read an NSA trunk and create a model of the "animal spirits" of the individual constituents of markets. It couldn't model human behavior accurately, but that wasn't required. Instead, it could create a chaotic chain of events between all actor demands and capabilities via small actions that would result in massive economic policy enforcement.

Instead of invading Iraq, the machine predicts a convoluted path of investment schemes that ultimately bankrupt the dinar, the ruling party, and the entire nation. No bullets fired. No soldiers deployed. To economists, it would look like normal macroeconomic activity.

I never released the book because I realized I was writing a business proposal.


That sounds really interesting. I have also thought about how "good-enough" AI could get a lot more done than many predict. I also like the idea of a Rube Goldberg equivalent of investment schemes and options to achieve certain objectives. Write a fun version of it first, then do the business proposal. Good luck :)


I once had ideas like that. Let me know how it works out. I suspect you'll find it hard to predict much at all let alone influence.


I would be highly interested in reading such a book. Any drafts and/or links available?


> You are beautiful, intelligent, and charismatic, as are your friends, co-workers, lovers.

Aren't all of these qualities measured relative to our neighbors? I.e., each of us is a genius compared to our ape forebears, but if everyone around you is beautiful, intelligent, and charming, then no one is.

> You feel no hunger, no cold, no heat, no pain.

Again, isn't discomfort relative to what one is prepared for? Under such circumstances, wouldn't the slightest dip or peak in subjective well-being feel like a life-altering crisis?

"Connoisseur" is relevant https://xkcd.com/915/

Am I missing something, or is this just another economist who understands nothing about human desire?


You can make a case that charisma only matters in contrast to its lack, and perhaps beauty is the same although I'm not as certain, but intelligence has an intrinsic value that ought to be evident by the fact that we built a technological civilization with it.


That's a good point, I'll just say that no one will perceive themselves or their friends as intelligent, except relative to their competitors.


Those qualities are measured relative to us, present humans. He's saying the Ems will be much smarter, more beautiful, and more charismatic than humans in 2016.

>wouldn't the slightest dip or peak in subjective well-being feel like a life-altering crisis?

Well, that's much more debatable. I think to some degree suffering and pleasure are on an objective scale. Maybe it has to do with how much dopamine is released or how many pain nerves are triggered; probably it's much more complicated than that. Certainly I would give a lot to live in a world where there was no risk of getting some illness or accident that could cause intense pain or suffering, yet otherwise everything else was exactly the same. Wouldn't you?


> I would give a lot to live in a world where there was no risk of getting some illness or accident that could cause intense pain or suffering, yet otherwise everything else was exactly the same. Wouldn't you?

Not really. I would argue that life is suffering, or at least entails suffering. Without physical suffering, you would suffer psychologically, emotionally, and spiritually - perhaps to a far greater degree. I strongly suspect that the confusion and malaise which afflicts modern man, compared to his tribal ancestors, is correlated with a reduction in physical hardship.

So no - while removing a specific source of physical suffering may sometimes be prudent, in general I would not sacrifice anything I truly value in a vain attempt to avoid suffering in general.


You're missing something by bringing up "human desire". Within the post-human fantasy there's no reason not to fix the bug that your happiness is only measured relative to others' misery. Or to present an altered set of peers to each individual so that they feel relatively successful, or any number of other solutions that become trivial once you accept the premise.


Oh, you and your pesky "wait, but value isn't absolute" observations.

https://archive.org/stream/lectureonnotiono00lloy#page/34/mo...


Failure to understand the relativity of value is perhaps the primary logical fallacy I see every day. There was an article on this site recently about someone doing their part to "promote equality" by helping someone receive an "elite education" (who otherwise couldn't have had one). Yet isn't the concept of an elite education - by definition - something that can only exist when some people are denied the opportunity to have one?

In reality, patting oneself on the back for helping one person to succeed is exactly equivalent to hoping another person fails. The human instinct for hierarchy is deep and generally unacknowledged.


Quite. Only one of a very large class of such failures of understanding and comprehension.

There's something to be said for seeing, say, that everyone capable of benefitting from a high-quality education gets one. But that excludes consideration of those who wouldn't benefit (at least educationally) from the opportunity.

... there being other modes of benefitting from education, including networking, contacts, and exposure to the inner sanctum.


Hanson's thoughts on the profile: http://www.overcomingbias.com/2016/10/profile.html


The Age of Em can be read in two ways. One is to treat it like an RPG setting book: an imaginative world with highly detailed descriptions, complete with cool potential plots for your campaign. The second way to read it is as a book trying to predict the future, in much the same way as The Flintstones records history. Everything is strange, yet it's all mimicking the current world with superficial differences.

The chapter about computational reversibility was really annoying -- good luck with it if you're not a CS academic. I'm a software engineer and I only have a vague idea what it is, and unless you already know, you're out of luck, as the author will not explain anything.
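(For anyone else who bounced off that chapter: the physics behind it is Landauer's principle, which says that irreversibly erasing one bit of information dissipates a minimum amount of heat, E_min = k_B * T * ln 2. Ems that want to run astronomical amounts of computation cheaply would therefore care about logically reversible operations, which in principle avoid that cost. My gloss, not the book's.)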

The rest is ridiculously specific descriptions of a future world - a bit like trying to forecast weather 20 years in advance by trying to get the exact temperature, rainfall, etc. on a particular Thursday in March, in your back garden. This is why I'm going with the former way of reading this book. Maybe someone will write a fiction book based on this setting - could be interesting.

One last thing: the author mentions that economists often predict the future correctly but the irrational^H^H^H^H^H^H^H uncoordinated populace fails to realise the correct prediction (a comment I've read before made by other economists), implying that the only way he can be wrong about his predictions in this book is if we all somehow conspire against him, at which point it's all our fault. AI research must be a traumatising field.


Well, I agree that the specific predictions are weird. Hanson spends a whole chapter at the beginning trying to justify them. The main point is that it doesn't matter if all the details are right. Humans are better at seeing a specific image of what a possible future might look like than at trying to sample from the entire probability distribution of possible futures.

It's less like trying to forecast the weather in a backyard, and more like painting a picture of that backyard in 20 years, when all the trees and plants have grown up, and imagining what they will look like. Imagining what the needs of that family will be once they have aged. Maybe their kids will grow up and the playground equipment will be gone. Or they will have a ramp instead of steps to accommodate a wheelchair.


"...economists often predict the future correctly but the irrational..."

This is the Overcoming Bias/Less Wrong community worldview basically, and I find your RPG setting reading works well for understanding them as well. From now on I'll think of them as writers who are predicting the future of an alternate universe where humans think fundamentally differently than in our reality.

It's a hell of a lot of fun: if you model homo economicus instead of homo sapiens, it actually is possible to reason about large societies and at least attempt to predict far-future outcomes to some degree. There are plenty of ways to hand-wave away the reasons that you can't "predict" historical events or the present with these models; that's boring anyway, and the future has cooler toys.

If you accept that a chaotic soup of memes can, at any time, combine to form an idea that becomes a popular ideology that prompts a bunch of clever apes to act anywhere from irrational to completely insane for arbitrary amounts of time ... well that's impossible to model in any meaningful way.

The aversion to history makes a lot of sense, my own interest in history and sociology is what showed me the holes in this worldview and now I can't play this fun game with the same immersion they do.


>If you accept that a chaotic soup of memes can, at any time, combine to form an idea that becomes a popular ideology that prompts a bunch of clever apes to act anywhere from irrational to completely insane for arbitrary amounts of time ... well that's impossible to model in any meaningful way.

I think it's pretty silly to accept homo economicus as an actual definition of rational behavior, since nobody really wants to live like such a creature.


Really? I think we must have different understandings of the term; it just means that people decide what they want and then act in the way they think will get them that.

Also, I'm not sure how the quoted part relates, that was my facetious description of how Homo Sapiens act, in contrast with Homo Economicus.


>I think we must have different understandings of the term, it just means that people decide what they want and then act in the way they think will get them that.

Quite the opposite, actually. Homo Economicus has two things homo sapiens does not, and cannot:

* A fixed, a priori "utility function": things it wants before it has experienced life at all, with those "things" being linearly convertible into money.

* Model certainty about the transactions it enters: risk is accounted for using probability, but the model structure is, again, known a priori.

When you weaken these two restrictions in econometric and psychological experiments, you recover useful descriptions of actual human behavior. But those restrictions were ridiculous and unrealistic in the first place, unless you pull an economist's typical move and assume that a person operates like an investment bank with a fixed charter (ie: the fixed goal of accumulating capital) and a fixed set of actions (ie: available equity investments).


Yup, although that seems to be a very specific version of the general Homo Economicus term.

Do you think I find the idea of Homo Economicus convincing or valid? I was arguing the opposite.


Speaking of fiction, it sounds a lot like The Quantum Thief by Hannu Rajaniemi, more in the "forking your mind repeatedly" respect than the 2mm robots, though the tiny people are there, too. Reading this article shortly after The Quantum Thief was actually a bit uncanny.


There is this comedian on the radio where I live who says that economics is the science of explaining today why yesterday's predictions were wrong :P

I can't believe that he says that his errors are our fault...


Is anyone else having a hard time disambiguating which of his models and predictions he holds genuinely, and which are merely exercises in reductio ad absurdum attacks on the dysfunction he sees in the establishment? I'm all for free thinking, but this seems all too convenient a way to hedge his bets.


I like the main point he is trying to make (which I see summed up by the cliche "think outside the box"), but most of the ideas in this article are such stereotypical notions of turning everything on its head: Aliens? AI taking over? Robots taking over? Us living in a simulation? Challenging the concept of rape? Can we stop pretending these are cutting-edge and out there? Can we stop pretending these ideas don't have very old precedents? He has come up with unique ideas on these themes, but no, he is not "ahead of his time"; I would argue that in many ways he is stuck in the past.


Let all of these people 'transcend' their minds into computers; I will go the other way, and go immanently into my body.

"Behind thy thoughts and feelings, my brother, there is a mighty lord, an unknown sage—it is called Self; it dwelleth in thy body, it is thy body." -- Nietzsche

http://www.philosophy-index.com/nietzsche/thus-spake-zarathu...

Or Deleuze summarizing him: As Nietzsche would say, we stand amazed before consciousness, but the truly surprising thing is the body.

To feel a raindrop on the skin, to look out on a surprising sunrise, to lie on one's back in the grass at dusk while bats fly overhead. I would not want to live 1000 years without this. I'd rather the lights go out and have it all end and gone forever than live only in a simulation, to no longer feel the randomness, the intricacy, the depth, the richness of what is happening now.

You're welcome to all the virtual you want. But no thanks.


Your preference is irrelevant. Many people will choose to be uploaded, and those people will inherit the Earth, as the uploads will be able to think, work, and multiply much faster than normal humans (among other possible advantages).

Hanson imagines that fleshy humans will be left behind by the future, perhaps continuing to exist by investing in the em economy and living off the interest, but otherwise not participating in the system at all.

I don't know why you think an upload's experience would be worse than a fleshy human's. With such advanced technology in the far future, virtual reality and simulations will probably be indistinguishable from the real world. They will feel raindrops and see sunrises. But they will probably also be able to experience much more, as virtual worlds do not have all the limitations the physical one does. They won't have to age or experience pain, discomfort, sickness, etc. They can walk on distant planets or swim in deep oceans, or fly through the air effortlessly, and without mechanical aid.


Cosplay (role-playing) of intelligence (acute hipsterism) at its best.


He's not an economist, he's a futurist, and not a particularly interesting one.

For an actual economist ahead of his time, I recommend Steve Keen.


Hanson is a professor of economics at GMU http://economics.gmu.edu/people/rhanson


For folks who may not be familiar with futures studies: it's a genuine academic discipline, and when we say that Hanson is a futurist, it's not an insult.

https://en.wikipedia.org/wiki/Futures_studies


He may be an economist by training, he may teach in an economics department, but as described, his course would fit right in at any future studies program. The article certainly makes him sound more futurist than economist.


http://www.overcomingbias.com/2011/09/on-futurism.html

When the media reports on the future, reporters pretty much only ever quote these sort of futurists, who have hijacked the future to support their side of certain current disputes. Truth be told, folks who analyze the future but don’t frame their predictions or advice in terms of standard ideological categories are largely ignored, because few folks actually care much about the future except as a place to tell morality tales about who today is naughty vs. nice.


Upvoted. Keen did get a few falsifiable predictions wrong, but I think his work on endogenous money creation by banks, debt instability, and rate-based economic modeling merits much more serious attention in mainstream economics, along with the MMT school. Good of you to mention him.


You can't hope to "scan" a human brain and make a virtual clone of a human being, or any being, because life and consciousness are much more than a brain. So that future he describes, at least functioning the way he thinks it would, is impossible. Until we understand 100% of what we're made of, no one can even think of anything like this.


JETSON, Bay Area, the future (H Minus) — Scientists, or self-taught philosophers who’ve heard of science anyway, predict a dazzling future for humanity — in which our computer-augmented future selves, in a world of endless plenty, keep being shits to each other.

An advanced computer-managed economy will do all the jobs, building our homes, 3D-printing our iPhones and nano-assembling our food. Wanting for nothing, people will at last be free to assert their essential humanity and complain bitterly about everything.

Genetic engineering will give us new bodies and an indefinite lifespan. Whole-brain emulation will let those who would have died go on as personalities living in computers. In the future, fuckwits will be with you forever.

We will live in fabulously diverse harmony and break the cycle of every new technology first being used to murder each other. This will include those thoroughly deserving of murder, particularly the endless identical instances of irritating hippie emulations blathering about cosmic oneness.

Cryogenically-preserved humans from the twenty-first century will have their frozen brains read by nanobots so their personalities can be run as programs. To avoid future shock, they will be put to work in a computer-simulated office job, and be reset each morning. For comfort and familiarity, each person-emulation will be given a helpful companion program, called “Clippy.”

A benevolent artificial superintelligence will run the world, for the good of all humanity. To maximise utilitarian value across the quantum wave function of the universe, “Roko’s femilisk” will regrettably have to punish emulations of those who complained that they found Tumblr social justice warriors’ intolerance of intolerance “triggering.” Please donate to help achieve this.

Given the opportunity for a world of unlimited creative freedom and enjoyment, we can be certain that humanity will not rest until it has turned this infinitely bountiful paradise into something even pissier than modern-day capitalism.

The Singularity will elevate the human condition to nasty, brutish and long. An emulation of Thomas Hobbes will, however, buy that for a bitcoin.


No.


His book is not available on Kindle, so no.


It used to be (see https://www.amazon.com/Age-Em-Work-Robots-Earth-ebook/dp/B01... and http://ageofem.com/). I don't know why it was pulled or who was responsible.


Would you expect an author to have anything to do with that decision?


Amazon pulled it to pressure Oxford U. Press to give more $.


If people can be convinced that their simulated virtual self is actually them, overpopulation will be solved by mass suicide. Sorry if that's too dark for you guys; it just has to be said. All these "live inside a computer" people are just a modern Jim Jones cult.


A self-backup from pattern data isn't any good for soothing one's metaphysical fears, unless (like most of the people on Star Trek) you don't think that the disintegration of all the molecules in one's body is the disintegration of oneself. A backup self is perfectly adequate if you want certain things to be taken care of despite one's death.

I believe that if I die I'm dead, backup or no, but I'd like to have a backup anyway so it could continue to pursue the research I found interesting while alive and could continue to take care of my family just as I would. The dead-me won't care either way but present-me would like to make contingency plans for the future.


A copy of you is you. Literally, the atoms that make up 'you' are replaced all the time by biology. And the atoms themselves aren't real: some models of physics suggest that physical things are constantly being copied and deleted already, like how objects in a cellular automaton universe are "destroyed" every time they move.

But how the universe actually works is irrelevant, even. Other people can have a different definition of 'death'. If the information in my brain is preserved, then that's all that matters to me. If I had to get my neurons replaced, one at a time, with 'artificial neurons' containing all the same information, then I would do it. Surely you would too. And I bet you wouldn't even notice any change. It wouldn't change anything observable at all. So how can it possibly matter?


Whether or not simulations are actually the people they're copies of, or people at all, is irrelevant to the question Hanson is looking at, which is "What might we expect ems to do?".


Far ahead of his time? No. This overwhelming optimism for technology is a very mainstream attitude you see among millennials and a lot of HN commenters.


Yet another person who has read too many of the Culture series books by Iain M. Banks and thinks we are close to achieving a lot of what is described within those books.

I do think it is achievable, but even the books acknowledge it takes thousands of years to get there.


Hanson has actually disparaged science fiction in general for purposes of futurism. I don't think he's been influenced by it much.


I don't think the books do "acknowledge" that? I mean in particular there's that nice short story where the Culture visits Earth. So the books are set in our present and not the future.


"Consider Phlebas" takes place in 1331CE whereas "Surface Detail" takes place between 2700 and 3000CE, with the other books in the "series" taking place at different points within that range.



