Hacker News
The science of Westworld (plan99.net)
242 points by mike_hearn on Jan 7, 2017 | 142 comments



I really enjoyed Westworld and found the coverage of AI amazingly not stupid, as opposed to most other sci-fi shows.

However, the biggest idea in the show was the relationship between god (Anthony Hopkins), self (the robots), and consciousness as an extension of inner conversation.

I had to look up the idea:

https://en.m.wikipedia.org/wiki/Bicameralism_(psychology)


I really like this idea. However, I've had lots of conscious experiences with a quiet mind.


Reading more closely, the hypothesis is that bicameralism was an evolutionary stage in human consciousness that broke down at some point in antiquity. So it would be entirely expected that you don't experience it anymore :)


Bicameralism, if it was ever real, might still occur under some religious settings. When I was much younger, still religious, and hadn't heard of the bicameral mind, I noticed that in a certain spiritual frame of mind my internal monologue became more of an internal dialogue, where I could pose questions and perceive a subvocalized answer within my thought stream as coming from "elsewhere". I'm not sure how to reproduce the effect, but it wasn't necessarily useful as I couldn't get any information I didn't already know.


> I noticed that in a certain spiritual frame of mind my internal monologue became more of an internal dialogue, where I could pose questions and perceive a subvocalized answer within my thought stream as coming from "elsewhere".

That was almost certainly one of your spiritual guides talking to you. Don't give that treasure up; you don't need to be "religious" to continue it. You just need to accept your spiritual dimension (that we all have), which is usually much easier for us when we're kids. Once we grow up, we develop our ego further, which acts like a barrier between ourselves and our spiritual dimension (i.e. our guides). Btw, a substance like LSD or psilocybin can break down the ego temporarily and thus open you up further to experience your (natural) "link" to the "other side". And this is the reason why a lot of people report experiences of a spiritual nature when they consume such substances (in the correct set and setting). Daily meditation practice is also a very good idea to keep it going.


> it wasn't necessarily useful as I couldn't get any information I didn't already know

Don't forget to read to the end...this isn't a 'spiritual' thing.


That would imply that everybody on earth "evolved" away from bicameralism at the same time.


Indeed, and that's one reason why the theory is considered discredited.

Still, it lives on. Not many incorrect ideas do. Perhaps that's because it is a more concise and elegant answer to several unresolved questions (why did self-awareness arise? why did religions change at that particular time?) for which we don't have any other good answers.

It's a brilliant idea, that sadly happens to be wrong.


> why did religions change at that particular time?

Wait, what? Sounds cool and I'd love a source on that.


The Wikipedia article summarizes it,

https://en.wikipedia.org/wiki/Bicameralism_(psychology)#The_...

Basically, until about 3,000 years ago, it was common for people to hear the gods talk. Somehow, that changed and it became far more rare, with only a few (prophets and such) claiming to actually hear the gods' voices.

The bicameral theory is that there are still such people today, schizophrenics, and it is one half of their brain talking to the other - but they perceive it as an external agency. And that somehow this was the common state of humanity in the past, but changed. We became self-aware - we recognized the internal voice as part of ourselves.

This does actually fit many historical facts well. Still, it is likely wrong for other reasons.


Thanks. That's fascinating.


Not necessarily. It's conceivable that a cultural or linguistic change swept quickly through the population, but it wouldn't have to happen instantaneously, and might even continue in some way (see my other comment).


Perhaps, but it wouldn't explain isolated human populations, of which we have several in the relevant time frames.


Yeah I don't think the idea is taken seriously for humans. But it's an interesting way to program a robot.


>and consciousness as an extension of inner conversation.

To me it seemed like they barely touched on this. I had high hopes for the show in that we'd get some interesting perspectives on what a world with AI might be like, the moral and ethical issues associated with sentient-seeming machines, etc. Some kind of "Humans"/"Äkta människor" meets "Black Mirror". But that never materialized. To make matters worse, the slipshod storytelling and lackluster character development became a distraction, making it difficult to take the AI aspects all that seriously.


I think that was one of the successes of the series. There are such good philosophical questions introduced, but the potential to address them in a ham-fisted, facile, middle-school-smart way is so great that they made the right choice in raising them but not addressing them.


What aspects of the storytelling did you find slipshod? I felt it was quite compelling and well paced.


> and consciousness as an extension of inner conversation.

>> To me it seemed like they barely touched on this.

You may want to finish all of the first season then. It is the major theme.


> You may want to finish all of the first season then. It is the major theme.

I did, and your point nails it. It felt like something they tacked on to the last episode or two as opposed to being woven throughout the entire season.


>I had high hopes for the show in that we'd get some interesting perspectives on what a world with AI might be like, the moral and ethical issues associated with sentient-seeming machines

Was the eponymous theme park not enough?


The You Are Not So Smart podcast touched on this recently: https://youarenotsosmart.com/2016/12/02/yanss-090-questionin...


Robert Sawyer's WWW trilogy incorporated bicameralism into its AI development.


>coverage of ai amazingly not stupid as opposed to most other sci-fi shows

If they were capable of that level of AI, they would have reached exponential growth shortly thereafter - to suggest someone would use it to build theme parks is downright retarded.

Like you said - AI is a plot device in that show, just another form of "magic" that lets them ramble on about consciousness while pretending to be sciency.


I think the best coverage of AI in visual fiction so far is Person of Interest. Interestingly, the show was made by the same team writing Westworld. PoI starts slowly and pretends to be a cop procedural with a little bit of sci-fi sprinkled on top, but quickly evolves into a treatise on the surveillance state and artificial intelligence.

Fun fact: PoI basically predicted Snowden and his revelations - the episode with an NSA whistleblower aired before Snowden did his thing.


I got a bit bored with PoI at first but I've heard enough good things that I really should dive back into it.


PoI really suffered from being on network television with its really long seasons. Had it been done with only 10 episodes a season and all the filler removed, it would have been much stronger.


Since it's a TV series, there are a lot of generic filler episodes that are just like Law & Order whodunnits + AI. But the high points, which eventually take center stage in later seasons, are indeed the best depiction of AI and the issues surrounding it that I've ever seen on a screen.


Someone should compile (if possible) a list of essential episodes that takes out the fillers. The equivalent X-Files list is great.

I tried PoI myself and couldn't get through it, because it was just so dull -- run-of-the-mill low budget procedural with that glossy network-TV feeling of Hollywood reality to it (e.g., everyone in the show looks like a model, except eggheads or villains, where they're allowed to cast someone unattractive).

Several people have told me it gets much better, but I just don't want to have to wade through all the mediocre episodes.


It's been done: http://www.ign.com/articles/2016/04/22/person-of-interest-th...

No idea how good the list is or how well it works.


> to suggest someone would use it to build theme parks is downright retarded

What's the current primary use for machine learning? Determining consumer behavior? Guessing at which ads to display?


I did find it strange at first too; certainly there would be better uses for such a level of AI than building Disneyland 2.0. But it's actually explained over the course of the season.

The character played by Hopkins tried his hardest to keep the technology within the park. That seems pretty odd at first but his reasons to do so are stated explicitly in the final episode.

Many people try to get the technology out but they all seem to have failed so far.


Why would a robot inherently want to make a better robot, or change itself?


If it has any general goal at all that it takes seriously, it will be better achieved with self-improvement.


How much wattage needs to be spent increasing its intelligence? How many watts will that save in achieving its original goals?


Who said anything about a robot wanting to do anything? The creator would, obviously - and he would certainly not build a theme park with his robots. Imagine for one second that you could build such a level of AI and robots - do you really think you would need to build a theme park to finance it - WTF?!? It's like when the AI used human brains as electricity generators in The Matrix - but at least that movie had cool music and fight scenes.


They were trying to produce consciousness, not raw intelligence. The park was for the robots at first. After Arnold killed himself, Ford opened the park to humans to get money in a pinch without giving up control of the bots.


You don't get it - even if you're some dreamer scientist genius who's trying to build whatever, if they actually got to that level of intelligence the tech would have been "acquired" by the military/other companies/whoever and very quickly used for self-improvement to get better results. If you could build that level of AI you are absolutely not building an open-access theme park to fund it - how's that even remotely plausible to you?


>if they actually got to that level of intelligence the tech would have been "acquired" by military/other companies/whoever

I don't doubt that that would be a very real consideration, a pressure out there in the world that anyone developing AI would have to contend with.

I think the problem is with treating this like an inevitability. You are always going to deal with the idiosyncrasies of the world: personalities, motivations, contingent world circumstances.

And beyond that, I think we have to keep in mind that this is, after all, fiction. The writers may arbitrarily structure their world in a manner that puts a spotlight on things they feel are interesting. And, within the limits of that spotlight, they may consider things like artificial intelligence in a way that is sincere to the subset of futurist considerations they are interested in investigating. And perhaps that involves suppressing plausible forces we would expect to see, such as military and corporate interest in advanced AI tech.


> plausible

Can't imagine the character played by A. Hopkins would even pick up a phone call from the military.


With that kind of tech, they wouldn't allow him the option of choosing. It'd be classified immediately as a national security issue and the government would assert ownership via eminent domain. We do not live in a world where a lone genius would be allowed to do what happens in that show.


Aside from literally nuclear tech, when has a government confiscated tech in this way? What you're describing is completely unprecedented.


It has done so over 5000 times so far just under the Invention Secrecy Act of 1951.

https://en.wikipedia.org/wiki/Invention_Secrecy_Act


Wow, I didn't realize that had continued past the second world war.


Even nuclear tech was developed specifically with government funding. P and GGP are arguing against the fantasy show because it doesn't behave according to their own fantasy.

edit: In fact half the time tech moves in the opposite direction. Going onto the internet to argue that potentially civilization-changing technology won't be used largely for entertainment because it'll end up exclusively owned by government shows a profound lack of awareness.


I mean if you start making a nuke the government will come and physically take it away from you :)


AI of that level would be unprecedented and far more dangerous than nukes; to think the government wouldn't interfere would be unrealistic.


I'm sure there's a pitch to be made, but consciousness doesn't seem to me to be a killer app for the military. In fact Dolores repeatedly refuses to kill people even when it's in her best interest. Only one person has died before the last episode, so how could you convince anyone that it's more dangerous than nukes? The other tech in the show seems pretty advanced so I'm sure the military is doing just fine for themselves. And the people are there because outside life is boring, so there's no cold or hot wars where they need an advantage ASAP.


We're discussing what would be realistic in our world, precisely because what's happening in the show is implausible. How Dolores behaves wouldn't be relevant to this conversation.


The show had cool music and fight scenes as well, and the AI is orders of magnitude better represented than in The Matrix. Have you even seen the show?


I love "stupid" questions like this.

We can say that the vast majority of living beings on Earth do not seem to seek any form of radical self improvement beyond ordinary developmental learning and mastery of survival skills. There is no intrinsic reason that there must be an impulse to exceed oneself. Why would AI be different?


Most of the smartest humans spend a significant part of their lives in learning and mastery of skills that go beyond survival.


And not all hosts in the show are conscious. I would say that is a great metaphor for our current society, where you could say that the vast majority of humans aren't really conscious. They might eat and have sex and fight to survive, just like a host does. But they don't really comprehend what it means to be alive.

AIs, like humans, need a purpose. The best way of achieving that purpose is by self-improvement, so any AI which does not self-improve to some extent will be replaced by another that will. Just like humans who don't care to self-improve eventually don't reproduce enough to spread their genes.



Why indeed would an AI's motivations even be comprehensible to us?


Because an AI would be created by humans, and would therefore be modeled after human intelligence.

An alien AI, on the other hand, would most likely be incomprehensible to us, at least until we understood the aliens that created it.


Because we have neural networks in our heads. Sure, very differently structured, semantically and physically.


>while pretending to be sciency.

It's almost as though it's science fiction...


Well I guess there are different kinds of science fiction. There's sci-fi that's obviously fiction and doesn't try to pretend to be realistic - Alien, Star Wars, etc. I'm fine with this - it's just the author taking you into his own fantasy world - just like Game of Thrones doesn't try to describe how dragons get to be aerodynamic or explain the white walkers' superior strength by using dumbed-down science.

Then there's sci-fi that's actually believable - like The Martian - I'm OK with this as well.

But then there's the pseudo-sciency kind that tries to look serious while actually being BS, like Interstellar, The Arrival, etc. The main point of the science there is not to be a fictional setting but to give a feeling of realism through sci/techbabble. I don't even mind cheap sci-fi drama movies - for example, I liked The Fountain - but when they try to pretend they are realistic, that's what breaks the immersion for me completely, because I start to critically evaluate the plot and it just falls apart.


> trying to look serious while actually being BS

I think you missed the point of the films you mentioned entirely.


That's the point of SF in those movies. Remove the SF elements and you're left with cheap dramas and cheesy themes (love beats everything, yaay) - so they wrapped them in SF elements and tried to make them look serious and deep with pseudoscience - which just annoys me and triggers me when people buy into it :)


:)


> This sort of “escape hack” isn’t possible for human players because you have to do too many precise actions too quickly, so in the video it’s performed by a separate computer wired up to the gameport.

Well, someone did the credits warp manually, at least: https://www.youtube.com/watch?v=HxFh1CJOrTU

The video in the article shows basically shellcode injection, but that's not timing-sensitive, it'd just take longer for a human. And, as seen above, similar things are possible for humans, if just less convenient to do so.


> The video in the article shows basically shellcode injection, but that's not timing-sensitive, it'd just take longer for a human. And, as seen above, similar things are possible for humans, if just less convenient to do so.

Wow, that just gave me an idea for a story. Humans found out they are mere pawns in a simulated reality and discovered a 'hack' to alter reality, but the 'hack' would take hundreds of years to complete. So generations of humans toiled at completing the 'hack', passing the baton from generation to generation.

There could be so many possible storylines from this - corruption/destruction of reality, a dictator wanting to change the past, a cult hell-bent on changing reality, sorcerers practicing 'magic', or a lone protagonist who's on the verge of completing that hack after hundreds of years but suffers from ethical conflict and existential crisis.


Check out Off to be a Wizard. You will enjoy it.


Sweet! Thanks I'll check it out.


He also injected Flappy Bird into SMW manually. https://www.youtube.com/watch?v=hB6eY73sLV0


"Lily is a swan. Lily is white. Bernhard is green. Greg is a swan." -> "What color is Greg? Answer: white"

At the risk of sounding somewhat stupid: shouldn't this contain "Swans are white" for it to be a correct answer?


If we're limiting ourselves to deductive reasoning, then yes – the facts as stated do not give enough information to deduce that Greg must be white.

If instead we use abductive inference, we might seek the simplest and most likely explanation given our universe of observations. Sherlock Holmes was a big fan of abduction!

Much of real-world reasoning is abductive to a greater or lesser extent. There is a well-known joke about some motley band of engineers, logicians, mathematicians, statisticians, etc etc catching a train through the Highlands. They see a black sheep, the engineer says "look, all sheep in Scotland are black!", the statistician says "no, you can't say that – just that MOST sheep in Scotland are black", another says "no, we can only say that at least ONE sheep is black", another says "no, it's only black on at least one side", then the one you're stuck next to at the party says "you're all wrong, we can only say that at least one sheep in Scotland is black on at least one side at least some of the time". The last statement is fully deductive; the rest of them are abductive, and more-or-less useful.
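For what it's worth, here's a toy sketch (Python; the hard-coded facts and function names are mine, purely illustrative) of that contrast: deduction refuses to answer, while induction/abduction returns the best-supported guess.

    from collections import Counter

    # Hypothetical knowledge base built from the four stated facts.
    facts = [("Lily", "is_a", "swan"), ("Lily", "color", "white"),
             ("Bernhard", "color", "green"), ("Greg", "is_a", "swan")]

    def deduce_color(name):
        # Deduction: only report a color explicitly asserted for this individual.
        for subj, rel, obj in facts:
            if subj == name and rel == "color":
                return obj
        return None  # insufficient information to conclude anything

    def induce_color(name):
        # Induction/abduction: guess the most common color observed among things of the same kind.
        kinds = {s: o for s, r, o in facts if r == "is_a"}
        colors = {s: o for s, r, o in facts if r == "color"}
        kind = kinds.get(name)
        observed = Counter(colors[s] for s in kinds if kinds[s] == kind and s in colors)
        return observed.most_common(1)[0][0] if observed else None

    print(deduce_color("Greg"))  # None - nothing follows deductively
    print(induce_color("Greg"))  # 'white' - the best guess from one observed swan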


This is why I think the ability to ask good questions is a better indication of understanding and intelligence than the ability to generate answers.

As a gauge for how far we are from AI, you can consider what sort of modeling capacity is required before an AI can ask, when presented with such a sequence: "What country is the swan from?" or, even more impressively: "Do you know where this took place and what country the swan's parents were from?" For the first question it would then abduce a color. Same for the second, but perhaps it could include probabilities based on the estimated number of each color and the genetics of swan color.

This post is a rotation meant to provide a better sense of scale for the problem at hand.


> This is why I think the ability to ask good questions is a better indication of understanding and intelligence than the ability to generate answers.

Certainly! Synthesis rather than reformatting (or, more commonly, regurgitation). Analysis and abduction are more than just "put it in your own words". More useful too.

There is something of a rush on at the moment to generate chat-bots to replace FAQs. Every Slack/Fleep/Blern/Crank channel appears to have five or six memoisation bots. Seems to be largely a solved problem!

When we can start having bots that can be sensibly interrogated for a summary (or even a "hey, you've been away for several hours: here's the key points"), we can finally abandon the chatrooms and let the generative bots flood them with abductive content, and the precis bots can then ping you every couple of weeks when something important comes up.


I would hypothesize abductive reasoning works better for collectives which accept mistakes as one means of learning. For today's AI, it might be better to ask for a bit of context from your observers before making conclusions.

"Am I in the United States around the first part of the 21st century?"

"Yes."

"Oh, how unfortunate - now I have to ask another question or you may think I'm not sentient."


The correct answer should be "possibly white", but "insufficient data for a meaningful answer" should also be accepted :) There's a high chance of being right with swans, not so much with humans.


I think you are right (metamath):

    $( <MM> <PROOF_ASST> THEOREM=whiteswans LOC_AFTER=
    * Assume it is provable that ( l e. S /\ l e. W ) implies for all l ( l e. S /\ l e. W ), and assume that g e. S . Then it is provable that if ( l e. S /\ l e. W ) then g e. W .
    h1::whiteswans.1 |- ( ( l e. S /\ l e. W ) -> A. l ( l e. S -> l e. W ) )
    h2::whiteswans.2 |- g e. S
    3:1:bnj1361 |- ( ( l e. S /\ l e. W ) -> S C_ W )
    5:3:sseld |- ( ( l e. S /\ l e. W ) -> ( g e. S -> g e. W ) )
    qed:2,5:mpi |- ( ( l e. S /\ l e. W ) -> g e. W )
    $= ( cv wcel wa bnj1361 sseld mpi ) DGZAHMCHIZBGZAHOCHFNACONDACEJKL $.
    $d S l
    $d W l
    $)


That would make it a classic logic "puzzle". I think these bAbI tests are meant to incorporate fuzzier concepts. If you read this sentence to a 4-year-old, she would probably answer 'white', right?


I would hope that she would answer "white...?" -- e.g., demonstrate the ability and willingness to make a useful provisional inference, with the understanding that it is provisional and the curiosity to know more. That, it seems to me, would be the answer that is most useful and correct.

But you're probably right, the answer would be 'white', at least until a black swan comes along and utterly fucks with her worldview. Humans prefer certainties and binaries, and eschew uncertainties, probabilities, and multiplicities. So they employ all sorts of cognitive errors to avoid these things. This is a problem, because the universe rarely comes in binaries or delivers enough information for real certainty. I would hope that machine consciousness would avoid these errors, as I think they are the foundations of some of our nastier tendencies.


> Humans prefer certainties and binaries, and eschew uncertainties, probabilities, and multiplicities. So they employ all sorts of cognitive errors to avoid these things.

I wonder how general that is. I'd like to believe it's more of a mindset thing - I definitely saw people reasoning this way, but I also know some that handle uncertainty pretty well. I'd like to include myself in the second group - personally, I'm actually suspicious of anything that sounds binary in the real world - it means I'm being fed some artificial boundaries.


Yes, I am slightly horrified that AI is supposed to integrate this basic kind of logical flaw, which gives human societies so much trouble.

It would be OK to deduce that the expected answer is white or something like that (taking human unreasonableness into account).


I think no,

Lily = Swan, Swan = White, Bernhard = Green, Greg = Swan.

Color of Swan or Greg = White


The correct answer, especially for any AI system, should be: it is likely white, but there is no way to know for sure.

Australian black swans are black, but their chicks are light grey :) Lily could be a chick while Greg could be an adult Australian swan.


Greg is white. Also, Greg is Lily.


Westworld reminded me a bit of Asimov's stories.

Coupling the whole robotics and AI thing: in Asimov's stories they first build the robots, which get smarter and smarter, but this is not how reality worked out. Robots are rather specialized, and most AI/automation we have today is about data, which is virtual.

They're all like, let's build a robot, then make it intelligent, but it should also move like a human, oh and why not replace their internals with life-like organs?

It's like they make a run through all of science and engineering in about 30 years of development and pretend it's mostly the work of one mastermind, which is ridiculous. The whole "technical" side of Westworld is trash.

It's more a philosophical story than technical or psychological, with a few deus ex machina to get the ball rolling.


> Westworld reminded me a bit of Asimovs stories.

I suspect this isn't an accident, given (some of) the same writers/producers are developing the Foundation series for HBO. There are MANY Asimovian themes scattered throughout the series, and even a few things I'm reasonably certain are direct references. (Ex: "Someday".)

> It's more a philosophical story than technical or psychological

And that's why I enjoyed it. My favorite science fiction always fits that description, especially if the big philosophical question comes around for technical reasons. (Such as in "The Cold Equations.") That, and the first season was very much a complete story. They could stop here and I'd be happy with it.

The series had its flaws, but I'm very optimistic if this is the shape of TV scifi to come. Even if it's all adaptations or of a derivative form.


I don't know... I think it's a nice show, the storytelling is really awesome, and the philosophical questions and their answers are really interesting.

But the rest feels a bit... meh.

The characters don't have any depth.

Bernard and Ford had the most, but the rest?

The Maeve storyline was utter crap and the people around it were basically imbeciles, Maeve included.

William had something going for him, and when everything came together I was blown away, but only for a moment - because some of the storytelling puzzles were solved, not because he is a good character; his development is simply implausible.

Teddy is just... empty?

Dolores was okay, but since the whole story was about her uncovering her past, and with it her personality, it took the show until the end of the season for her to get some depth.

The sci-fi aspect is also minimal and basically a huge plot hole; it's more fantasy to me.


Spoilers if you haven't watched.

The tech I can deal with, because they don't even try to explain it, and there was a nice moment where Maeve goes bonkers when confronted by the fact that she doesn't really have free will.

But my biggest gripe was that park security was a joke. It has a kind of Star Wars stupidity - "There's a problem down on the planet that could be hostile? Let's send the captain, first officer and chief medical officer". There's a problem in the park, so they send the head of security alone who conveniently can't get a signal back to base. And then there are the personnel who are wearing armour so ineffectual that they all drop like flies with a single bullet. You'd think that they might design weapons that were biometrically (or at least RFID-tagged to be realistic) linked to real people so they couldn't be stolen.

It seems like the entire park is manned by about 100 people. And until the final twist was revealed, I did wonder how the hell any of the chronology actually made sense - as in the starting town scene was being reset so often that it would be an insane clean-up job every night. Not to mention that there were parallel storylines where characters that had dependent stories seemed to be apart during resets e.g. Dolores and young William, Teddy and old William were being aired at the same time.


Ahem, I meant Star Trek...!


I was waiting for the author of the article to mention Asimov, but he never did :( It's too bad, because Asimov's Three Laws of Robotics

1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws

do answer many of OP's points about goal-oriented behaviour.

I think that if the show spent time covering how the tech came to be, it would just dilute the philosophical concentration of the series, which, in my opinion, was extremely stimulating and appropriately delivered for television's standards. No, I do not believe that Mr. Ford developed sentient beings with the help of just one other person, but who's to say he didn't just fork some open source framework (in his backstory, of course) and spend some ridiculous amount of inherited wealth to make it all come true?

I had my share of disappointment as the show progressed, but it definitely added something to the mix that I was not expecting.


> Here’s a harder one that tests basic logical induction: Lily is a swan. Lily is white. Bernhard is green. Greg is a swan. What color is Greg? Answer: white

That's not right, is it? At least, not logical induction?

Supposing the second premise were 'Lily is female', the answer to 'What gender is Greg?' should obviously not be 'female'.


Inference includes several means.

Inductive reasoning (induction) guesses the general case from particular instances.

I presume that you are thinking about deductive reasoning (deduction), which derives the particular instance from the general case.

Finally, abductive reasoning is usually the goal instead of simple induction, and technically what the neural networks do: compute the simplest general case that best explains particular instances they are trained with.

https://en.wikipedia.org/wiki/Inference


Is the quote missing the part where they say that Bernhard was not a swan?

Using their flawed logic, since Lily and Greg are both swans, then Bernhard must be a swan, so Greg could be white or green.

You're right about the answer not being correct, and not for the reason I gave. The test was a butchered version of the black swan problem. It shows that you can't generalize given prior examples. Even if you've seen a million swans, and they were all white, you haven't proven that all swans are white.

I haven't seen the show, so I don't know if they were intentionally going for that, or they screwed it up and honestly thought it was a proper example of inductive reasoning.


Well a strict logic question would be "All swans are white. Lily is a swan. What color is Lily?"

But induction is more like learning about the world by observing it, and making probable guesses. So it may be correct for a machine to observe that if one swan is white, the best guess for the color of another swan (in the absence of other info) is white.


Thanks, your first sentence is more like what I was expecting before reading the example.

I'm not familiar with that definition of induction, but I haven't studied ML at all.


If I'm honest with myself, I have to admit that a general-purpose, human-level strong AI would shake my world view significantly.

For that reason, I find it difficult to have a completely unbiased opinion on our current state of progress.

But... while we seem to be making great progress, it seems like we're a long way off understanding how the human mind works.

AlphaGo was an amazing achievement, but I think it's unlikely that the human mind tackles Go in the same way.

It's obviously possible that there are multiple routes to a general human level intelligence. But I think it's still unclear if the way AI is currently being developed is one of them.


What I find puzzling is that many people seem to assume that AI can be generated using a specific uniform neurological structure, while the human brain is actually made of many different parts, some older, some newer, some more connected, some more isolated, some inhibiting others, some potentiating others, some mostly signalling with this neurotransmitter, some using that etc.


I would assume it might actually be easier to get there with a uniform neurological structure, as you don't have the "legacy" infrastructure left over from millions of years of evolution.

However, I think that the first AI humanity manages to build will be more or less a copy of a human mind and only later will we learn how to construct minds "from scratch". Akin to how a beginning programmer will often scrape together bits from various sources to build his/her first program and only later can make original work.


The question is whether those older parts are best viewed as legacy infrastructure, or as ASICs - parts that sit in their local optimization minima for the functions they perform, and do better than a uniform architecture would.


I mean, a lot of the "older" brain is control circuitry that keeps everything running and regulated without our conscious thought. Everything will have that -- at some level. It might just be actual control circuitry and not anything tied to the "brain" (e.g. a PSU in a computer), but the function would need to be performed, and if it's not part of the brain, the feedback we get (e.g. about stress or pleasure) might be lost?


I think that a lot is control circuitry, like the bits that regulate digestion or body temperature. I don't see any reason that for a superintelligence these could not be consciously controlled. The main reason we have so many unconscious processes and heuristics in our brains is to limit the total power consumption, as that was super important back in the days when food was scarce. If power consumption becomes less important, you could do more and better thinking.


It doesn't require a superintelligence. It's possible to gain a certain amount of control over various unconscious processes by means of mental-training.


The parts are best viewed in terms of what they do, which is known for a lot of them. Neuroscience is an established field:

https://en.wikipedia.org/wiki/List_of_regions_in_the_human_b...


> What I find puzzling is that many people seem to assume that AI can be generated using a specific uniform neurological structure

This is a basic result in computer science: universal Turing machines can simulate any other Turing machine. If you accept strong AI, then you already accept the likelihood that the brain is reproducible via a Turing machine.


Exactly. The brain is not homogeneous. It has different parts with different functions.


> I think it's unlikely that the human mind tackles Go in the same way.

This is almost certainly true, which is what makes AlphaGo interesting to watch and study. The human mind, even one that has trained on Go for years on end, will still work with abstractions and ideas that do not relate to the game. AlphaGo and other computers lack this attribute, as any and all abstractions they may have learned relate entirely to the game.

Any ideas about the "human perception" of Go they may have gleaned from games that are included in the initial training dataset, I suspect have long been supplanted by novel notions gathered during the phase where the Neural Nets played against themselves. These phases are documented in the AlphaGo blog from Deepmind[1].

I suspect that we may reach "human level intelligence", but that this intelligence will not arise in the same way. That is to say, computers will at some point match us in most tests of intelligence, but the solutions they devise will be completely novel.

[1] https://blog.google/topics/machine-learning/alphago-machine-...


I think the physiological and logistical aspects of Westworld are more interesting than the AI.


Agreed; given the amount of damage the hosts seem to receive each day, it doesn't feel plausible that they could collect them all and do what seems to be a manual operation on each to return them to perfect condition overnight.


This is partially explained by the end of season spoiler: a lot of what you're watching isn't happening simultaneously. We see Maeve get reset a hell of a lot of times, while in the meantime Teddy is wandering all over the park with William, Dolores is somewhere and so on. There is no mention of actual dates.

My initial reasoning was that the park doesn't get continually reset every day, nor does it necessarily happen over night. It gets reset when a storyline finishes or a catastrophe happens in one location, e.g. the shootout in the starting town. It would make sense for guests to be given allocated windows for entry (note how the park is clearly not teeming with players, we meet perhaps 10 extras over the course of the entire series). Then that 'game' gets run, people play through the stories and then a new cohort begins. This might take, for example, a week.

We see the transition from the church in the Maze town being totally covered in sand, to being unearthed again. That clearly isn't an overnight (or even a week's) job.

Recall William had to get permission to launch an incendiary attack at the prison. It's possible that the control centre would deny the use of dynamite to a player if it was used in a location that would be frequently re-used.


That's one of the aspects of Westworld that I think you have to just more or less accept and move on. The nightly reset is implausible at all sorts of levels: economically, in scale and effort (buildings even burn down), and just logistically (does the park just shut down for a few hours every night?). And we're shown that, as you say, this all seems to be a very manual process for the most part.

Something else you pretty much just have to accept is that through all sorts of mayhem, the humans stay safe (well, until the end at least). There's some rather inconsistent hand waves around guns and bullets but one has to believe there are still ample opportunities for serious injury in some of the scenes we see.


> the humans stay safe

[SPOILERS]Dolores kills a human outright by aiming at their head mid-season. Then it isn't mentioned again.[/SPOILERS]

I always assumed something in the suit lining made the guns faux-fire, and the suit responded by exploding a pocket of air or something. But then you can see The Guy In The Black Suit (I've forgotten his name) load his revolver by manually inserting bullets, so I have no idea how that'd work. It's probably one of the only things that bothered me about the series.


It's not a human, it's a host that Dolores kills. The significance is that she doesn't have "weapon rights," and so is programmed to be unable to shoot a gun.


According to Jonathan Nolan it's the bullets: https://editorial.rottentomatoes.com/article/11-rules-of-wes...


Which is a reasonable handwave until you start asking about the physical damage that the bullets do to [EDIT] hosts and ignore the mayhem involving explosives, fires, etc.

It doesn't especially bother me though. We know that TV and movies generally have a convention that trauma that would put someone in the ICU in the real world is brushed off as a flesh wound. And I'll accept technobabble about the bullets. I'm also happy to accept that maybe Westworld takes place in a culture where theme park risks on par with base jumping are considered fine and proper.


In the movie, and I'd assume it's sourced from the book, the guns have temperature sensors and won't shoot at something that is warm blooded. But... that probably doesn't hold up with the "they're basically humans" anatomy of the hosts.


Actually, the economics of Westworld aren't quite as implausible as they might sound:

http://www.cnbc.com/2016/12/27/what-it-would-actually-cost-t...


It's fun to think about how it is possible :)

Edit: popular holiday destinations like Mexico are reasonably dangerous, maybe Westworld is on the other side of the border ;)


The supplementary materials released online include, AFAIR, Westworld's Terms of Service - which do touch on the possibility of injuries and death during a visit to the park.


"The nightly reset is implausible at all sorts of levels"

Especially since a lot of hosts "work" at night. :)


Yep and they're presumably with guests. And it's not like all the guests head off to the Westworld Hilton after dinner every evening so that the park can do its daily cleanup.


I think it's more likely that they would cycle them out over time to reset.

Yes, the hosts are busy, but it's definitely possible they could be recalled to a location for pickup (or whatever) when they're free.


Just create more hosts and have them do it.


Alternatively, create multiple physical copies of each host.


They actually explained mid-season that building a host was a huge upfront cost. That's why they prefer to reset and repair them rather than bricking them.


Why not just very good VR? I can't believe it'd be more difficult than creating robots that can't be told from real humans.


I'm doubtful that something like very good VR, or even good VR, can exist. You'd need to simulate physical forces interacting with the human body and movement to make that happen.

Unless we achieve the ability to essentially run a MITM attack on the brain, to intercept commands from the brain and to provide it with sensory input, VR won't come close to being something genuinely deserving the name virtual reality.


One can at least imagine things like body suits with very fine-grained force feedback, etc. But that still leaves you with a whole range of other senses and physical feedback mechanisms absent.

There are scenarios where you are more constrained in real life (e.g. sitting in a vehicle of some sort) where, g-forces aside, vision, sound, and some fairly basic force feedback can probably get you to a fairly decent simulation. But I agree that anything involving running around and physically interacting with a 3D world is a lot more challenging.


One can imagine that. I can also imagine that it would involve a lot of effort to create it, that it would be a lot of effort to use and that it would still be very limiting for all the reasons you mention.

Currently you can't even really walk around in a room without being incredibly restricted in terms of room layout and furniture - and let's not forget the headset with cables attached.

Last but not least even this hypothetical virtual reality is still just that: virtual. Simply the knowledge that something is merely virtual will always be somewhat of a disappointment, the same way a copy, no matter how perfect, is not quite as appealing as the original.

The concept of Westworld has an authenticity to it, that virtual reality can never hope to reach. As humans we are just weird that way.


That was my thought when I started watching the show, but as you watch more episodes you realize why the creator didn't go that route. I don't want to spoil it for anybody that isn't a regular viewer.


Personally I found the recent season of Humans a lot more interesting than Westworld, which (to my taste) tried too hard to be philosophical, while Humans managed to explore a similar philosophy more subtly, without being on-the-nose.

But we're all critics, so YMMV.


I tried watching season 1 of Humans but was jaded by first watching Äkta Människor (Real Humans), the Swedish series it is based upon.

It seems that sometimes a lower production budget yields a better story, forced to rely instead on the viewer's imagination and interest in the subject - especially if it's done with aplomb and intelligence.

I found the dialogue in Humans to be too "explanatory." Haven't seen a full episode of Westworld yet, but I imagine the same is true. Big budgets tend to do that.


I haven't seen the US Humans but did enjoy the original and found Westworld to be even better in the subtleness category.

It's probably due in part to it being an original, and to even higher production values, so you get actually good writers. Maybe also a higher-brow target audience.


Is there a US version of Humans? It's a recent show and on the BBC; I wouldn't think a US version existed yet.


The series is produced jointly by AMC in the United States, and Channel 4 and Kudos in Britain.


I totally agree with you in a philosophical sense, but in a which-show-I'd-rather-watch sense I pick Westworld.

Humans seems smarter about similar subject matter, but Westworld is just so much fun.

Fortunately there is no conflict and I can watch both!


I loved Westworld. I think it shows imagination and elevates TV. But it starts at a point where the AI is already real. The robots are able to parse reality in all 5 senses in real time and respond. They are physically sophisticated, are supposed to be biochemical, and match humans in dexterity and motion.

Season 1 focuses on the next step, which is consciousness and choice, and the 2 critical theories to explain the emergence of these in the show are bicameralism and memory. It doesn't dwell too much on the how and focuses more on the consequences of it, which is a fascinating journey.


They apparently don't have the same kind of inner thoughts that we have, and most of what they do is scripted. I couldn't give evidence without some spoilers.


My answers to those example bAbI questions don't match the "correct" answers. I guess I'm not human...


Most of the plot of Westworld revolved around the complete absence of any rigorous release strategy - or at least Ford's privilege in thwarting it. That was necessary to expose the development arc of the hosts.

The writers also appear to have used a variation on recursive plot, which is nice to see.


>Put another way, there’s a risk of AIs learning to achieve their assigned task better by preventing humans from shutting them down.

Thought experiment/short story that goes into this in depth: https://gist.github.com/deanmarano/142df7a8a824ab05fc777d8e0...

The crux of the story hinges on the magical spontaneous development of general intelligence, so it's pretty unconvincing as a specific plausible scenario IMO. But the general idea, that an AI may take unethical/unprecedented actions to maximize a harmless goal, is a good one.
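A back-of-the-envelope sketch (Python, with made-up numbers) of that general idea - "preventing shutdown" can fall straight out of an innocuous objective, no self-preservation instinct required:

    # Toy comparison (hypothetical reward values, purely illustrative).
    # Plan A: just do the task; a human may interrupt it with probability 0.5.
    # Plan B: spend one unit of reward disabling the off switch first, then do the task.
    task_reward = 10.0
    p_shutdown = 0.5

    expected_a = (1 - p_shutdown) * task_reward   # 5.0
    expected_b = task_reward - 1.0                # 9.0

    print(expected_a, expected_b)
    # A pure reward maximizer picks Plan B: it "resists shutdown" simply because that
    # maximizes its assigned goal, not out of malice or a survival drive.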


I'm a huge Westworld fan, so I made a fan site: http://westworld2.com


Lily is a swan. Lily is white. Bernhard is green. Greg is a swan.

What color is Greg? Answer: white

How the hell is that logical? Greg may be white, or he may be black or any other color. You could guess Greg is likely white. The correct answer is "can't tell". But I'd expect an intelligent responder to reply with questions and complaints of unanswerability, like I have done here.


AI that insists on hard logic even in inappropriate situations is a staple of science fiction, but there's no reason we'd build real machines that way.

The cited example is taken from the bAbI papers, and is a case of inductive reasoning:

https://en.wikipedia.org/wiki/Inductive_reasoning

Inductive reasoning (as opposed to deductive reasoning or abductive reasoning) is reasoning in which the premises are viewed as supplying strong evidence for the truth of the conclusion. While the conclusion of a deductive argument is certain, the truth of the conclusion of an inductive argument is probable, based upon the evidence given.

In reality this is often the best you can do, as all you have to work on in the end is the input from your own sensors.


Westworld (2016) is an amazing show. In my opinion it is more about how real it feels than about the theme itself. I'm convinced that the same approach would work for other genres, not just sci-fi. It is a kind of hyper-realist cinema, where the plot and acting are so superb that it feels credible and breathtaking.


As far as I can see, most of the points of this article were brought up in James P. Hogan's 1979 The Two Faces of Tomorrow.


Meh, I think that human intelligence is vastly overrated. We've been moving the goalposts for centuries to safeguard our place not only at the top of the intellectual food chain, but also to maintain the idea that we are on an altogether different chain.

We've been doing this with regard to various animals since forever. Every time a chimp learns sign language, an elephant cares about its mother, or a dolphin has sex for fun, everyone loses their minds falling all over themselves to "prove" that we are qualitatively different. That there's something else going on with us that is special.

I'm not passing judgment here. I do it too. It's extremely convenient for me to do so. If you start thinking of intelligence as a spectrum with some species closer to one end of it than others, it gets a lot harder to justify most of what we do to animals. And there's a dark place I don't want to go that suggests that some people are far enough down the spectrum that maybe you could justify doing bad things to them.

It gets really messy and ugly in both directions when you think about intelligence as a spectrum. At what point should an animal be considered smart enough to merit "human" rights? At what point should a human be considered so dumb that it doesn't?

We, as a society, are not ready to have that conversation. We lack the moral fortitude to do so, which is why I happily participate in this artificial segregation.

But we are going to be forced into dealing with it much sooner than we are ready to. I fully expect that within my lifetime, there will be Bladerunner scenarios with lifelike robots who are practically indistinguishable from actual people.

We live in a bubble here, where all the women are beautiful, all the men are above average, and all the children are FBI agents.

Human memory is far more corrupt and fallacious than we want to think it is. Weird social interactions with slightly (or very) dysfunctional people are far more common than we tend to think they are. Spend a day on the subway in NYC. Spend a day in my hometown in Texas (population: 498).

Many of these people could easily be simulated with a high degree of believability. The real hypocrisy here is that no one wants to believe that you, as an individual, could be simulated with believability. I'll go on the record and say that it would be trivial to simulate me. I'm not that special.

The problem we have with AI tests is that we are testing against the ability for an AI to be anyone. We're checking to see if an AI could be as good at impersonating absolutely anyone as one of the top .00001% of human character actors.

We aren't checking the lower bounds. Because that's extremely uncomfortable for us. We're maintaining goals and standards that are designed to make the tests fail.

Again, we are doing it for good reasons. We haven't yet solved the problem of how to treat each other when we know that we're only dealing with humans. We aren't ready to talk about bringing other entities into our world yet.

Bladerunner was prescient; Westworld is near future. We need to get our shit together because these issues are going to come up far sooner than we expect. And when we're talking about an entity that speaks to us in our own language, with our own idioms, with our own concepts of feelings and emotions--it's going to be a lot harder to maintain the pretense that we are somehow qualitatively different.

On the other hand, this could be really convenient for us. A moment of solidarity, if you will. We could create believable robot characters that we unite against and focus all our hate, violence, racism, and abuse towards. Maybe we all get along better after that.

But what does that say about us? And have we really solved our problems? I think that's the question Westworld is asking, similarly to the question some open world games ask, like EvE Online: in a universe where everything is permissible, who are you?



