I've been thinking about this a lot, and I don't think AI will ever become conscious in the same way that humans are; rather, it'll be conscious on its own terms. Just as I can't know the mind of my wife or the mind of my cat, I can't know the mind of an AI.
Depending on who you ask, a monkey or a cat isn't conscious. From my point of view, I don't think you can really know or recognize that; it's just a matter of degree of consciousness. I think it's safe to say that mammals are conscious to some extent. They have emotions, dreams, communication techniques, etc. We just have those (again, to some extent) more so than they do, or we have them in different ways.
I think a question to ask is: at what level of organization do we recognize self-direction? Am I conscious because I think so? What does that say about the bacteria living inside me that I rely on, or the individual neurons in my brain?
If a dolphin and I are mostly the same (we both have brains with neurons, blood cells, etc.), how can you truly differentiate what is conscious and what isn't? Even when you speak to another human, it's not completely possible to say with certainty that they are conscious; you can only go with what's most useful in day-to-day life.
At what level of circuitry do we consider AI to be conscious? When it completes tasks arbitrarily constructed by humans? When it "feels"? How would you differentiate between sufficiently complex AIs? Is something either AI or not? Why?

/rambles
> I don't think AI will ever become conscious in the same way that humans are; rather, it'll be conscious on its own terms.
I think that people who are attempting to simulate a human brain using an utterly biomimetic design stand a good chance of artificially creating something that is conscious in the same way that humans are. I also think it's possible they may achieve this before they fully understand how the human brain works, i.e., if you copy the design accurately enough, the machine may work even if you don't know how.
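For what it's worth, that "copy without understanding" idea is already how a lot of computational neuroscience works at the single-neuron level: models like the leaky integrate-and-fire neuron are update rules transcribed from biology, and you can run them with no theory of what the larger network is "doing". A minimal illustrative sketch in Python, with made-up parameters (this isn't taken from any particular brain-simulation project):

```python
import random

class LIFNeuron:
    """Toy leaky integrate-and-fire neuron: a rule copied from biology,
    runnable without any theory of cognition."""

    def __init__(self, threshold=1.0, leak=0.9):
        self.potential = 0.0        # membrane potential
        self.threshold = threshold  # spike when potential crosses this
        self.leak = leak            # fraction of potential retained each step

    def step(self, input_current):
        # Leak some charge, integrate the new input, fire if over threshold.
        self.potential = self.potential * self.leak + input_current
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after a spike
            return 1
        return 0

# Drive one neuron with noisy input and count its spikes.
neuron = LIFNeuron()
spikes = sum(neuron.step(random.uniform(0.0, 0.3)) for _ in range(100))
print(f"{spikes} spikes in 100 steps")
```

Nothing in that code "knows" anything about minds; scale the same trick up faithfully enough and the question in this thread is whether the result does.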
The resulting consciousness could theoretically be totally self-aware, but no more capable of modifying its own programming with intentionality and purpose than we are of modifying ours, i.e., not the singularity.
I think there should be two different concepts of AI: "a consciousness using the same processes and design as our own" and "an essentially alien consciousness that fully understands itself". And I suspect that even if some engineering genie gave us the first kind of AI, we'd be no closer to developing the second kind.
AI most certainly can be conscious. I figure we can make bio-robots within 100 years. All that needs to happen is building brains in the lab and bootstrapping them. The first versions will be somewhat mental, but 2.0 will most likely be a better representation. After all, all species are quite mechanical and predictable, albeit complex.