A Senseless Conversation (sites.google.com)
309 points by nyellin on Feb 24, 2012 | 98 comments



Nice short story. For a full sci-fi book of this genre, check out Permutation City.

For another fun question: "What time is it?"

And lastly a quote: "The effort of using machines to mimic the human mind has always struck me as rather silly: I'd rather use them to mimic something better." ~E.W. Dijkstra


Better yet, also Dijkstra: "The question of whether a computer can think is no more interesting than the question of whether a submarine can swim."


We are complex machines, aren't we?


For a full sci-fi book of this genre, check out Permutation City.

Then go read everything else by Greg Egan!


This. And don't bother trying to find a picture of him online.


I disagree. Read everything else by him, then Permutation City. I don't think he'll be able to top it. He hasn't yet.


PC is still my favorite, but he has written a number of really good novels, and his short fiction is also fantastic. Many people seem to like Diaspora even more -- it may just be a matter of taste. My love of PC is probably due more to the subject matter (cellular automata as the underpinning of physics) than to the book being better as literature than his other writing. As with most SF writers, I have a hard time actually remembering his characters -- they are there primarily to present his themes.

If you want a novelist who writes SF but actually creates 3D, memorable characters, try Walter Jon Williams (Aristoi, Metropolitan, Implied Spaces... but avoid his serials IMSHO)


I loved Permutation City, but I actually preferred Diaspora. And I think his best novel is Quarantine.


Too late! I already tried, but was bitterly thwarted.


Nevertheless, human uploading could be tremendously useful. http://en.wikipedia.org/wiki/Mind_uploading


Heh. I thought about that, but I also thought, why not just tranquilize a human who believes he is about to be uploaded, then euthanize him? He'd never know the difference, so who does it hurt? And it'd be a lot cheaper. I suppose that you'd somehow have to convince people they were going to be uploaded. Maybe you say that you don't have enough computers here so all the uploaded people are running on the moon.


Your question is no more interesting than the question of why we don't pretend to do surgery on people and then euthanize them.


Disagree. In this idea there is a nearly universal assumption of "uploading" the mind in question, and therefore of disposing of the original sheath. So on a purely pragmatic level, the continued existence of the extracted mind is a lot harder to verify, given that it no longer has an autonomous physical body.

Anyway, you have to admit it would make good science fiction.


"Learning to be me" by Greg Egan is a short story that covers this very ground, it's good.


Altered Carbon also treats this subject in pretty good depth. A little more pulpy than I imagine Egan's writing to be, but still fun. Thanks for the pointer!


Oh really? I might read that. I read Black Man and enjoyed it (possibly called something else if you're in the US; it's by the same author as Altered Carbon).


You can slowly replace someone's brain, cell by cell, with electronic counterparts. He'll never know the difference.

And then we realize we're just machines, and become frustrated.


Does anyone remember a thread of comments on HN a year or two ago where a guy was placing bets that he could totally swing your opinion on giving a sentient AI freedom? Supposedly he convinced everyone and won every bet but no-one revealed what he did and I thought this was going to be a posting of one of those conversations at first ;-)


Sounds like the AI box experiment: http://yudkowsky.net/singularity/aibox


Wow, I never realized it was that old but that's the one - thanks!


It has been discussed on HN more recently: http://news.ycombinator.com/item?id=3324152


Somewhat recently, someone attempted it with the intention of reposting the log. However, he failed: http://lesswrong.com/lw/9ld/ai_box_log/


Saying "here are next week's lottery numbers" would not convince me.

Suppose I am outside the box and the AI is inside the box. The AI cannot have a perfect model of the lottery machine or of me. At best it has seen photographs, and all photographs are noisy. There will always be some detail it does not have that would invalidate the simulation (e.g. I have a minor injury that I have never talked about and is not visible in any photo, an ant crawled into the lotto machine last night, etc.)

If it has sent drones to physically inspect the internals of the machine or my brain, then it is already out of the box, so my decision is irrelevant.

If I buy a lottery ticket, there are four possible outcomes.

1. I don't win (or I die before I can claim my winnings, the draw is declared invalid etc.)

2. I win, but it is a coincidence (maybe it was worth a shot for the AI, but my odds of winning are no better than they would be without the AI's help.)

3. I win with the AI's help, because the AI is already outside the box and has rigged the draw by modifying the machine, or just inspected it closely enough to predict the draw.

4. I win (or believe I win) with the AI's help, because I am inside the box.

#3 and #4 should not affect my decision (whether I like it or not, the AI is out of the box or I am in the box with it and unable to let it out.) So I dismiss them.

But the relative probabilities of #1 and #2 are the same as if I bought a ticket without consulting the AI (which I would not normally do.)


The AI was not let out of the box, and you're saying that it wouldn't convince you? This only sounds like an argument that Eliezer was right in not posting his logs.


I was imagining an alternative version in which Douglas later reveals that he was not participating at all in the conversation; his computer was covering for him. Good read; thanks for sharing.


I had exactly the same thought at first: that Douglas was just sitting watching his friend talk to the computer. The sensory tank was the most unbelievable part for me; it seems like there would be a better way to handle that. Anyway, thanks for an interesting read.


That's what I liked about the story. Halfway through I thought I had it all figured out. "Oh, so this person we assume to be Douglas is actually going to be a computer in the end. Got it."

But then I was completely wrong.


I had the same feeling, and was so proud I figured it out. Guess not.

This was probably one of the most engaging 'articles' I have read in a while. Very fun, and it really gets one to think.


Could also be a horror story about a human in a tank who was made to believe they are a computer. The whole tank could then be presented as an intelligent machine.


It occurred to me that this could be used as a dominance/brainwashing technique. Break a person's belief in their own free will and humanity and they will have little reason to oppose you, the creator.


I don't think it would work.

People's sense of "self" can't be taken away; it is something we develop the moment we realize our thoughts are private and others can't give us what we want unless we ask for it. Children learn to lie very early; even when their language skills aren't sufficient, they learn to fake disappointment & force themselves to cry to get what they want.

IMO, even under prolonged captivity & complete dominance, humans only submit to subjugation, but they never lose their sense of Self.

Higher-level indoctrination is even less plausible. Nearly all religious groups & cults have the human at their center. People have to willfully submit to hypnosis.

Going back to the development of self, I think another thing that makes it possible is the position of our eyes. We can only see < 180 degrees, our eyes open & shut, and we happen to fall asleep. If the human subject were allowed free movement, the captor would want us to respond to commands... we would need to be called... differentiated amongst ourselves. Human Unit 1 is different from Human Unit 2. The captor calls the subject, and the subject chooses to respond, somehow. In the presence of punishment, the subject decides to carry out assigned duties to avoid punishment, or to gain reward: self-interest. Self.


> IMO, even under prolonged captivity & complete dominance, humans only submit to subjugation, but they never lose their sense of Self.

Actually, it can be done. Depersonalization is relatively common: http://en.wikipedia.org/wiki/Depersonalization


I have totally had that after 2 all-nighters in a row.


Excellent, thank you for the pointer.

However, do depersonalized people feel like they're no longer individuals, or do they feel like they no longer have control?


They feel that they are somewhat detached from their physical selves... sort of like an observer, watching yourself go through the day. It's an odd sensation.

On another note to the parent posters... whether or not our sense of self is ongoing and unbreakable is certainly not decided or anywhere near scientific fact (yeah yeah, science only disproves, you know what I mean)...

All we can say is that we feel as if we have this continuity... and that it appears as if others feel the same way. Memory is not like tape... your sense of timing and events changes constantly, and what you perceive to be your unbroken, continuous sense of self and its memories is, in fact, almost universally incorrect on all kinds of things as it changes over time... but you (and I) will feel everything is in order. It is likely constantly reconstructed as a survival trait.

Also look at surgical anaesthesia... there are some theories on this, and subjectively I can see it: while you are under, you aren't asleep... you, your sense of self, is gone, totally shut down. No dreams. No sense of how much time has passed when you wake... not like a regular sleep, when you at least have some idea. It always feels like instant teleportation from the surgical suite to the recovery room, even if many hours have passed. Then there are the people who just never come back.

We are far away from understanding consciousness... which is cool. We've barely scratched the surface. We're just now realizing the brain has far more plasticity than we thought a few years ago... it's still a huge mystery.

Now take general anaesthesia... during the surgeries I can recall, I was simply gone. That time was simply time I didn't exist.


Interesting comments :-)


I don't think it is as much about Self in the objective sense of the word as about Self in the subjective sense. Even if you know you exist, it doesn't prove anything apart from the fact that you exist. You could be made of flesh or silicon, or you could be just an idea existing in someone else's mind.


Sam Hughes wrote the same story 3 years earlier and I think I like his version better: http://qntm.org/difference


You could submit that as a story for some easy karma, just saying... Liked it.


Is easy karma worth it?


Several thousand years later... http://www.terrybisson.com/page6/page6.html


The story went off the rails for me when "Zach" said he couldn't hear himself nor feel himself move. Even in the midst of the most severe sensory deprivation, I continue to perceive "noise" in my sensory system: tinnitus, breathing, heartbeat, sub-resolution sparkles in my visual system (just close your eyes and pay attention), kinaesthetic sensations of enormous number and variety.

As long as you're going to recreate memories indistinguishable from reality, you'll further need to create sensory input indistinguishable from reality. At which point you've simulated the entire encounterable universe, and you indeed have something that I would call intelligent, despite its lack of carbon components in the mental mechanism. Each component technology might be interesting for its own sake, but aside from the philosophical point, what's the use? The carbon-based versions are plentiful, and the construction process...


It's a thought experiment. It's not impossible to imagine a perfected sensory deprivation tank where you do not hear yourself move or breathe. The other stimuli are, from what I understand, hallucinations, and as such are irrelevant to the experiment.

> As long as you're going to recreate memories indistinguishable from reality, you'll further need to create sensory input indistinguishable from reality. At which point you've simulated the entire encounterable universe

Not really; after all, you only need to simulate the universe at the human level of perception, which means you can ignore most of the computational complexity of simulating the actual universe.
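To make that concrete, here's a toy sketch (hypothetical Python; the resolved map and perceive helper are inventions for illustration) of perception-level simulation, where detail is only generated when an observer looks:

    # Toy lazy-detail "universe": regions are resolved only on demand,
    # so unobserved complexity costs the simulator nothing.
    resolved = {}

    def perceive(region):
        if region not in resolved:
            # expensive detail generation happens only at first observation
            resolved[region] = "details of %s generated on demand" % region
        return resolved[region]

    print(perceive("my room"))  # computed now
    # distant galaxies stay unresolved until someone points a telescope

It's the same trick that makes level-of-detail rendering in games affordable.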

>Each component technology might be interesting for its own sake, but aside from the philosophical point, what's the use? The carbon based versions are plentiful, and the construction process...

The carbon-based forms are not very durable (80 years? C'mon), they break easily, and they don't perform very well...


> The carbon-based forms are not very durable (80 years? C'mon), they break easily, and they don't perform very well...

Actually, it's hard to build machines that last that long (how many 1932 cars are still functional today?)

The real advantage of the machines is that you can switch them off, open them up, replace parts, re-assemble them, and switch them on again. You can replace parts of carbon-based lifeforms, but they don't always switch on again, and opening them up often results in them being eaten by other carbon-based lifeforms. It's not cheap to do this to old machines, but it's fairly reliable.


Really? You were completely OK with the telepathic brain-computer interface and went off the rails when the sensory deprivation was too good?


Now the interesting question: how do you know that you are not the result of an experiment by some guy living in a sci-fi world where computing power and storage are extremely plentiful? He would put your brain inside a virtual world and make up all your interactions. The people you talk to are just pictures and sounds, but they feel like real people to you.

We have a brain like Zach has, but instead of being put inside Douglas's tank, we are shown (and made to sense) a virtual world. It's a trap; you can't prove that it's not the case.

Douglas also plants some strange definitions/notions in your memory: infinity. Space and time are both infinite. But does that really make sense? If time weren't infinite (and began somewhere), then we would know about the Douglas trap. He is blocking your knowledge at some point.


I'm always surprised when people ask this question that they haven't read Plato's Allegory of the Cave.

This has been asked for thousands of years, and it's a great question. I think the answer, which has been posed by many great thinkers and artists, is often condensed down to:

The only reality that matters is the one you are in right now. Any higher reality, which you cannot access, essentially does not exist in any meaningful way to you.


From Nick Bostrom:

"A technologically mature “posthuman” civilization would have enormous computing power. Based on this empirical fact, the simulation argument shows that at least one of the following propositions is true:

1) The fraction of human-level civilizations that reach a posthuman stage is very close to zero;

2) The fraction of posthuman civilizations that are interested in running ancestor-simulations is very close to zero;

3) The fraction of all people with our kind of experiences that are living in a simulation is very close to one.

If (1) is true, then we will almost certainly go extinct before reaching posthumanity. If (2) is true, then there must be a strong convergence among the courses of advanced civilizations so that virtually none contains any relatively wealthy individuals who desire to run ancestor-simulations and are free to do so. If (3) is true, then we almost certainly live in a simulation. In the dark forest of our current ignorance, it seems sensible to apportion one’s credence roughly evenly between (1), (2), and (3). Unless we are now living in a simulation, our descendants will almost certainly never run an ancestor-simulation."


My money is on #4:

4) The fraction of post-human civilizations that can build a simulation of a human-level civilization is very close to zero.

Simulating a human brain is many orders of magnitude beyond the capability of present-day computers, and Moore's Law isn't likely to continue forever. It is my belief that all of humanity, or even a single human, is much too complicated to be simulated by any current or future computer.



My money is on #2.

Once they have switched on an ancestor-simulation, it's probably illegal to turn it off. So it would mean committing the hardware and electricity to the simulation forever. Whatever they gain from running the simulation, they would be reluctant to incur infinite cost in doing it.


The cost might not be that much, especially if we use analytic "closed-form" models of the universe's behavior.

... but then we'd get strange effects (you know, perhaps things like wave-particle duality, quantum measurement, ...)


There is no need for a realtime simulation. What feels like centuries inside can be a fraction of a second for such hardware.

With this kind of simulation you can obtain, in theory, "what if" universes where alternative solutions and technologies can be created. If you can observe them and the simulation rate is faster than your own time, it can be a rewarding experiment.

About universe observing, this story is cool: http://qntm.org/responsibility


I don't think you can simulate your own future, even if you accept #3.

Either #3 is false (and there are no simulations) or #3 is true (and the vast majority of sentient beings live in simulations, which means those running the simulation are probably simulations themselves.)

Yes a computer can run a simulation of an alternate version of itself, but then the speed must necessarily decrease as you go down the hierarchy (as you will see if you try to run one vmware vm inside another.) Or the simulated universe must lag behind the one containing the more powerful (thanks to Moore's law) host computer.

Edit: unless you have an infinitely-powerful computer like in that story. But then you could argue that every theoretically-possible simulation must be being run by someone somewhere. Good for us even if our society never discovers the infinitely-powerful quantum computer (at least we don't need to worry about being switched off, even if our creators die or their universe collapses.)
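To make the compounding concrete, a toy calculation (the 10x-per-level factor is assumed purely for illustration):

    # If each simulated level runs at 1/10 the speed of its host,
    # the slowdown is geometric in the nesting depth.
    speed = 1.0
    for depth in range(1, 6):
        speed /= 10
        print("level %d runs at %gx realtime" % (depth, speed))

Five levels down, a simulated century takes a hundred thousand host-centuries.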


Sometimes when I start to feel very smart and proud of my life accomplishments, I start feeling paranoid that I am actually inside a simulation like The Matrix whose sole purpose is to keep me from realizing that I am the dullest fool among my actual peers in the real world. I imagine that the mind-blowing "AHA" moments in my life are the most embarrassingly obvious conclusions to my real-world peers, who are all far, far more intelligent than me, and that my entire life is nothing but bumbling around and rediscovering their most basic common knowledge.


If you frequently have difficulty accepting that you might be "good" at things, and frequently find ways to explain away your accomplishments or reasons that someone who compliments you might be mistaken, you should look into this subtle type of depression: http://en.wikipedia.org/wiki/Dysthymia


Doesn't the current model of the Big Bang state that time and space started with the Big Bang?

Interesting point anyway; basically a spin on the brain-in-a-vat problem.

http://plato.stanford.edu/entries/brain-vat/


Suppose the experimenter also chooses to drop clues in technology forums.


On the subject of the Turing test, Turing's actual original paper is well worth a read: http://cogprints.org/499/1/turing.html

It's extremely readable and you may be surprised at how often people entirely miss the point when discussing it.


If you like this sort of AI, where people upload their consciousness to computers and people's identities/consciousnesses can 'fork', then check out Greg Egan. They have written lots of scifi on this topic.


> … check out Greg Egan. They have written lots of scifi on this topic.

I love, and I fancifully imagine that he would too, the idea of referring to Greg Egan as 'they'.


"They" can also be used as a (gender neutral) singular third party pronoun in English, and has been used that way since Shakespeare.


I am only slightly less amused by the idea that one needs a gender-neutral pronoun to refer to Greg Egan. :-)


Well, to be fair, writing an AI that can fool itself into thinking it's a human is much easier than writing an AI that can fool an average human. Any inexperienced programmer can write a program that fools itself in less than 10 lines of code.

The reason is that one of the problems with the concept of the Turing Test is that it's subject to the intelligence of the tester. A 6-year-old boy with no talent in logic is much more likely to think a chat bot is a human than an experienced CS researcher is.

The dumber your tester, the dumber the AI needs to be to fool it. If you write a tester who is dumb as a rock, then it's trivial to write an AI that can fool it, as in the sketch below.
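A minimal sketch of that extreme case (hypothetical Python; it has nothing to do with the story's setup): a rock-dumb judge, and the one-line "AI" that passes its test:

    # The entire "AI": it simply claims to be human.
    def ai_answer(question):
        return "Yes, of course I'm human."

    # A rock-dumb judge: accepts any answer containing the word "human".
    def gullible_judge(answer):
        return "human" in answer.lower()

    print(gullible_judge(ai_answer("Are you human?")))  # True: test "passed"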

Zach and Douglas only get into a lengthy conversation because Douglas went through the trouble of making Zach smart, knowledgeable, and able to mimic many human behaviors. If he had made Zach as smart as a fundamentalist religious zealot, then he could have just said "you're an AI because I told you so" and Zach would have agreed. But then again, that wouldn't be nearly as fun.


I would find it very hard to tell real humans from bots if we limited the environment to YouTube comments.


It isn't very helpful to talk about consciousness without defining it in a concrete way, but I think there might be something to that line of questioning.

What about something concrete, like the sensation of pain? I can feel it, and everybody else can probably agree that they feel it too, even if they can't confirm that others do. How would you go about reproducing this sensation in a computer?

It's not clear to me that human thought and experiences can be reproduced by any amount of computer logic and memory. That doesn't mean it's impossible, but I think this is an unresolved question.


> What about something concrete, like the sensation of pain? I can feel it, and everybody else can probably agree that they feel it too, even if they can't confirm that others do. How would you go about reproducing this sensation in a computer?

While your hand might be physically hurt (say, it caught fire), to your brain it's only information coming in that says "now you should feel pain".

You can simulate that in a computer.
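As a deliberately crude sketch of that view (hypothetical Python; the message format is invented), the "brain" below only ever sees signals, and whether they came from a burning hand or from a simulator is invisible at this level:

    class Brain:
        def receive(self, signal):
            # pain is just another message to interpret
            if signal["type"] == "nociception":
                print("feeling pain, intensity %.1f" % signal["intensity"])

    Brain().receive({"type": "nociception", "intensity": 0.9})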


This reminds me of one of my all-time favorite games, "A Mind Forever Voyaging".

http://en.wikipedia.org/wiki/A_Mind_Forever_Voyaging

It's an interactive fiction game where you play the role of a computer that has only just realised that it's a computer. From your point of view, you've been living a human life with real experiences and a family etc.

The game manual included a great little short story: http://gallery.guetech.org/amfv/amfv.html


For a minute there, I really thought he had actually made a reasonable AI machine. Very interesting, but disappointing if you were expecting something real. I am still waiting for JARVIS-level AI.


Readability link (fixes monospace and window-width columns): http://www.readability.com/articles/b29kmcjg


The rhyme thing tipped me off about the ending :-)


I wish someone would do this to John Searle. He really deserves it.


:)


If you found this interesting, Hofstadter's "Gödel, Escher, Bach" (and his second, IMHO more readable book, "I Am a Strange Loop") explores these ideas in great detail.


Fitting. Just a few minutes before reading that I had one of those "Contemplating your hands" epiphanies. I sat down in my computer chair, reached over to my mouse and came to a dead stop. A thought had brought itself to the foreground.

"I can't feel myself move."

Now when I say this I don't mean a numbness, or loss of the senses. But I couldn't discern what exactly I was doing that made my arm move, or any other part of my body for that matter. That was silly, of course; I move them all the time. So I tried moving them slowly, and felt a slight sensation.

Of course, I thought, the slight sensation isn't really the feeling of moving my arm; it's the feeling of matter like air brushing against it. After all, I am basically sitting in a tank of atmosphere. Nerves report state, but aren't really projecting the feeling of movement.

That thought chain quickly led to a minor existential freakout. (During which I puzzled over the question of how the hell I move at all.)

I eventually generated four hypotheses:

1) The feeling of movement simply isn't reported by nerves. Introspection can't discern your cognitive processes, so why should it be able to discern your physical ones?

2) The feeling of movement is so faint that it's overshadowed by the mere touch of air or one's own body hair. I know that in the deepest state of somnolence just before sleep, I very often realize I need to get up to do something and have to struggle against the inhibitions on movement that somnolence induces. I can feel that struggle; it feels the same as trying to fight sleep paralysis. One could argue that this is the feeling of movement.

3) You could argue that the feelings reported by nerves about the state of your environment are the feeling of movement. After all, feelings are just signals sent by nerves and interpreted by the brain. These feelings are generated by movement, and thus are indeed the feeling of movement.

4) My understanding of cognition is too incomplete to even hypothesize something remotely plausible.

Now, considering that so many articles on sleep studies mention them, I'm sure that the mechanics of how the brain controls the body are well understood and that if I'm truly curious I can google it. (Which is something I might just do.)

But the real reason I shared that anecdote, besides its being semi-relevant to the topic at hand, is that I took my ability to move for granted, in the same way that I take the idea that we could all be a simulation for granted. I've considered that a non-zero possibility for quite some time now.

I'll admit that I read some of the comments here before reading the story. (A big no-no for science fiction, a genre that thrives on twists.) After glancing at Tichy's comment, I was afraid I might have spoiled it for myself. However, the journey is more important than the destination, so the concept of such a twist automatically made me go read the story. I was thoroughly disappointed with the ending.

The concept of a memory loop isn't really new. (I've seen it mostly explored in the context of time travel, but still.) But trapping a human in a text interface and presenting it as the thinking machine? Morbidly delicious. (In all the right ways.) And useful too. I could pull it out any time someone exhibits signs of having decided that a computer program can't be conscious simply by virtue of not being implemented on a human brain.

Having a human brain with no senses hooked up presented as a computer program would really drive home the message.

EDIT: Regarding the story, my immediate thought after finishing was to question why, if the program panicked because it lost all its senses, he didn't simply swap out the memories of J. Random Person with those of someone who already accepts that they might be a simulation. I'm sure that if they really believed that, it would be possible to calm them down by explaining that they are a simulation of themselves. And for bonus points, if someone were to consent to have their memories used for this (it isn't stated how he actually got the memories, mind you), they would already have the possibility of being the simulation strongly in their head, and would eventually accept that they are non-human.

Though, if you consent to something like this, you essentially ensure that you can never be sure whether you're you or a replay of your memories. Then again, as it stands, you can't really determine this anyway, which makes for one of those classic thought experiments that still has mileage.

Trains of thought down this road are probably inherently unresolvable, but still fun to try.


> The feeling of movement simply isn't reported by nerves

Bzzt...

http://en.wikipedia.org/wiki/Proprioception


First and foremost, you sir are awesome.

So number two was closest. (And what I figured, though I'd have to have said so ahead of time for it to count.)

One of the things I love about the answers to questions like this is the amount of interesting stuff you learn. At first (before I'd read the article) I thought you were being a bit harsh; it WAS a hypothesis, after all. But after reading the article, the hypothesis was so obviously far off the mark that I had no trouble seeing why that sentence got the buzzer.

This in particular caught my eye:

"The proprioceptive sense is often unnoticed because humans will adapt to a continuously present stimulus; this is called habituation, desensitization, or adaptation. The effect is that proprioceptive sensory impressions disappear, just as a scent can disappear over time. One practical advantage of this is that unnoticed actions or sensation continue in the background while an individual's attention can move to another concern."

Which would probably qualify as "overshadowed by virtually the slightest application of any other sensory input."

Back to the Turing Test, however. One of the things I like to do when I'm bored is get on Omegle. Now, my goals are different from those of 95% of the other users on there. I wade through the sea of wankers until I find someone who actually wants to talk. I then proceed to:

A) Attempt to convince them that I am a machine.

B) Convince them that they're a machine.

(Obligatory XKCD: http://xkcd.com/329/)

So far the tally is something like this: I've convinced two people of my non-humanity, and B hasn't happened yet. Which brings me back to this story. The moment I finished it, after the thought about getting the memories of someone more compatible with the idea of a brain simulation, my face formed a mischievous smile.

I thought "This will be great for the next time I get on Omegle."


I guessed the ending as soon as he entered the tank, but this story still creeped me out :(


Computers are somewhere between humans and bacteria on the consciousness scale. Biological and mechanical are just two different ways to shuttle electrons around.

I will proudly stand up for the rights of computers as citizens of this country when they exhibit significant signs of the ability to choose their own course and have opinions.

The computers will be our children; they will colonize the galaxy, and if we are lucky we can subscribe to the experience streams.


Not that it really invalidates your point in any way, but for the sake of accuracy I note that the electrical currents in our neurons are not made of electrons, but rather atomic ions, such as sodium and potassium, and that communication between neurons happens (mostly) via neurotransmitters, which do not really carry current at all (or do so only incidentally and over short distances).

I guess you could say "two different ways to shuttle information around", but even that isn't quite right in the biological case. The way information is stored and retrieved in our brains is quite different from the way it is stored in today's computer memory, as demonstrated by the recent article posted on HN talking about how accessing a memory in one's brain involves recreating it from scratch, modifying it in the process. But that's not to say that we can't create a computer that more accurately reproduces the mechanisms of our brains, or that doing so is necessary to create a conscious computer.
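That reconsolidation contrast fits in a few lines of code (a hypothetical toy; the 0.3 drift chance is invented): unlike reading RAM, "recalling" here rewrites the stored entry, so each access can change it:

    import random

    memory = {"first_bike": "red, bought in spring"}

    def recall(key):
        value = memory[key]
        if random.random() < 0.3:  # assumed chance of drift per recall
            value += " (or was it summer?)"
        memory[key] = value        # recall re-stores, altering the trace
        return value

    print(recall("first_bike"))
    print(recall("first_bike"))    # may differ from the first recall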


We could also say we don't communicate with current but with signals. Electric charge flow is rather different from electron flow.

There is indeed still debate about whether our nervous system is fundamentally electric... it's certainly part of it, and can be used to influence it, but there are other chemicals and structures at work as well, as you say.

The point of the Turing test is really to say that if you can't tell whether you are talking to a man or a machine, and you believe a man is sentient, then you must assume the machine is as well... we have no other mechanism to decide this. We can even debate how we can prove ourselves to be sentient.


It's duck typing for sentience.
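For anyone who hasn't met the term: duck typing means checking behavior rather than type, which is exactly the parent's reading of the Turing test. A toy illustration (hypothetical Python; the classes are invented for the example):

    class Human:
        def converse(self):
            return "I think, therefore I am."

    class Machine:
        def converse(self):
            return "I think, therefore I am."

    def judge(subject):
        # never inspects type(subject); behavior is the only criterion
        return subject.converse() == "I think, therefore I am."

    print(judge(Human()), judge(Machine()))  # True True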


Our individuality comes from the fact that we have a body. I don't think there will be AI citizens, but rather one AI that will quickly become knowledgeable about everything. It'll probably be able to manipulate any human being. How would you deal with this?


I don't think this body theory holds. Even with today's machine learning methods you can get two different results if you start two instances of the same algorithm. They will learn different things.

Unless, of course, you postulate that there will be only one AI because all instances will be networked with each other.


Are you saying that any true AI that we created would be identical to any other? Or that the first AI we created would subvert human society before we had a chance to create a second one?

To take a different tack, I don't see how having a physical body has anything at all to do with being an individual. Certainly in our case our physical bodies are part of what makes us individuals, but what's to say that the same AI instantiated twice, with different random seeds each time, wouldn't produce two distinct individuals?


I'm not sure that will ever be possible.

Consciousness produces logic as a tool to use. Logic does not produce consciousness.

Computers, by definition, are pure logic and rules. You can use logic to mimic consciousness, but nothing more.


> Logic does not produce consciousness.

Considering we have no falsifiable theory about what does produce consciousness, I don't see how you could possibly claim this.

You can't prove that you are conscious, nor can anybody else. I can perceive my own consciousness but I can't rigorously explain it. We have no way of determining if this alleged "consciousness" is a spectrum or binary. We can't test which life-forms are conscious and which are not. We can observe behaviors in animals that seem to imply consciousness, but is a dog conscious, or an insect, or an amoeba? We don't know.

And furthermore, if you believe in evolution, you have to believe that there was no consciousness and then consciousness at some point was created where there was none before. If logic doesn't produce consciousness, what does produce it? And whatever produced it in cellular tissue millions of years ago, who's to say we couldn't likewise produce it in a die of silicon, which like our brains is highly electrical?


> And furthermore, if you believe in evolution, you have to believe that there was no consciousness

Actually, you don't: http://en.wikipedia.org/wiki/Panpsychism


Just because you have simple rules doesn't mean you can't get complex behaviors. Multi-threaded applications using locks and mutexes have bugs that never happen the same way twice, which is why they're so notoriously hard to debug. Large systems with lots of parts (like huge websites with multiple backends) will exhibit failure behavior we can't predict even though at the very basic level, all computers are predictable. It's only manageable at that scale because we work hard to design it to be predictable.

There are a lot of complex and emergent systems that demonstrate this property of following simple rules but exhibiting complex behaviors. Conway's Game of Life, Rule 110 in elementary cellular automata, how ants work together, and how fireflies blink in synchrony without global communication are all examples; Rule 110 is sketched below.
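A minimal Rule 110 sketch (hypothetical Python; the width and step count are arbitrary choices): each cell's next state depends only on itself and its two neighbors, yet the global pattern is rich enough to be Turing-complete:

    RULE = 110                # the whole update table, packed into one byte
    WIDTH, STEPS = 64, 32
    cells = [0] * WIDTH
    cells[-1] = 1             # a single seed cell on the right edge
    for _ in range(STEPS):
        print("".join("#" if c else "." for c in cells))
        cells = [(RULE >> (4 * cells[(i - 1) % WIDTH]
                           + 2 * cells[i]
                           + cells[(i + 1) % WIDTH])) & 1
                 for i in range(WIDTH)]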

And while none of these demonstrates that computers can produce consciousness, they hint that computers may be able to do so, despite following rules to a T.


Our brains' neural connections are equivalent to a computer's software.

If our brains can produce consciousness, there is no physical reason we know of why a computer can't be conscious. How do you know that everyone around you is conscious? You just assume they are, because you are. In reality you have no way of telling.

The interesting thing about this post is that the robot believes it is human, because it believes it is conscious. If it acts in a way that we recognise as conscious and tells us it is, then we have just as much evidence for believing the robot is conscious as we do for any human.


It remains to be seen whether consciousness is even useful if your goal is to maximize intelligence. I could go on in more depth, but only if someone cares to hear my thoughts on the topic. But I will leave this:

What do expertise, flow (http://en.wikipedia.org/wiki/Flow_(psychology)), and the advice to go for a walk or sleep on a problem have (or downplay) in common?

http://en.wikipedia.org/wiki/Benjamin_Libet#Implications_of_...


What you are talking about is called a "philosophical zombie". http://wiki.lesswrong.com/wiki/Philosophical_zombie (I strongly recommend you follow some links.)


> use logic to mimic consciousness

And AFAIK, we don't even know if it's computable.


If that mimicry is indistinguishable from a human, then by definition it is as sentient as a human... that's the point.


How do you know any of this is true?


YOU are pure logic and rules. Come on over to my house; I will slice your brain up into disks a few molecules thick and create a 3D molecular model of your brain. What I will find is not magic in meat; what I will find is vast arrays of interconnected systems, which is no different than if I had sliced up your computer and made a 3D model of that.

I find your superstition disturbing.


> Consciousness produces logic as a tool to use. Logic does not produce consciousness.

This statement is not proved (and maybe not provable at all, not to mention probably untrue).

Also, have you considered the possibility of a THIRD factor besides logic/consciousness (like information complexity, feedback loops, etc.) producing BOTH consciousness and logic?

> Computers, by definition, are pure logic and rules. You can use logic to mimic consciousness, but nothing more.

Who proved that, when and how? This is just a circular (vicious) argument, especially as denoted by the "by definition" part.


Well, for one, I could argue that the reason for our human consciousness is our experiences, which a computer does not have.

Second, "just two different ways to shuttle electrons around" would hold true if information exchange was the only thing that mattered. Which could be true, but what if the actual chemical etc substances used are also important? Like how you can model audio (say, a WAV file), but you cannot model actual sound.



