
Some pretty sloppy definitions and a faulty assertion right out of the gate. I’d argue it’s plainly obvious that humans are adapted to living in a world populated by conscious objects; we already treat inanimate objects as if they had personality and intent. If we came across one that actually did have a mind, we’d immediately know how to interact with it as long as we could communicate. As for the definition of consciousness as that which goes away under anesthesia or during sleep: just look into the varying states and gradations of awareness that occur all the time during those processes. We’re not a single light bulb that simply turns on and off. They also ignore the all-important self-awareness aspect for some reason. This type of badly framed argument isn’t worth your time to read.



You call it badly written, but assert that "we’d immediately know how to interact with it as long as we could communicate"?

But not being able to give a definition of consciousness doesn't matter. We all sort of know what it is, and it's not good for a computer to have it.


Is a cat conscious? I would imagine that if you asked 100 people you'd get a lot of different answers. I'm not sure that "we all sort of know what it is" really cuts it here, even if only to make sure we're all talking about the same thing instead of talking past each other.


It doesn't matter whether it really cuts it or not. Do you want to run some unknown, but potentially large, risk just because you can't precisely define what it is? Good luck arguing against climate change.


Why is it bad for a computer to have consciousness but good for a human?





