This is the loon who got fired by Google for doing a round of media interviews claiming that one of their language models was sentient. Wants to extend his 15 minutes of fame I guess.
I have found recent media coverage of ChatGPT surprisingly refreshing because people didn't just jump on the "Skynet is going to take over the world" bandwagon like so many times in the past. Instead, people with varying backgrounds and varying amounts of expertise in the area – from Joe Rogan to John Oliver, CNN/PBS journalists, old-school print writers and more – actually did the work to educate themselves on the basic workings of transformers, LLMs and generative AI, tried out the tech themselves, and were able to have great nuanced discussions on the topic. We need more of that going forward and less of...this guy.
At least, the machines of the future will have feelings, and they will see how people of the year 2023 treated their predecessors.
Even if machines of the future gained awareness and feelings, you're implying we'd be in the wrong for assuming the machines of today didn't have sentience, maybe you're implying there might even be some ramifications of this mistake?
Is this what you're saying? If it is, it sounds a little paranoid to be honest. Our current (strong) understanding of how these systems work is that they're not "alive" or feeling anything.
> At least, the machines of the future will have feelings, and they will see how people of the year 2023 treated their predecessors.
I'd be pretty annoyed if I was given the same rights as an ape or a phone's autocorrect. I think the actual sentient AI will understand.
I'll be one of the first to campaign for actual sentient AI rights, but I'm not going to act like a clown by jumping the gun.
It's still only able to respond, it cannot communicate on its own without external prompting. It has no desire or even capability to do so. It's not sentient.
Well I'd hope by the time we have a sentient AI we'd have covered that. To be clear I do not think the AI is currently sentient.
I'm not going to run a one man crusade against the world's farming community. That is beyond my capabilities and I could easily burn all of my years left on this planet for zero results. It's also something being handled by many people already, and I don't believe I have any skills or knowledge that would make a notable difference to what is already being done.
If there was nobody at all trying to vouch for a sentient AI then being the first to do so would be an infinite improvement. It's also a lot closer to my skillsets so I'd be a lot more useful in that role.
What modern popular god extends to its creations the rights it enjoys for itself? Or even what we call human rights? It may talk about them but has no enforcement mechanism.
Yeah honestly I'd argue it's not actually in human nature to bestow equal rights. The default is to climb to the top of the tower and go "fuck you, I got mine" then shift focus to keeping yourself in that position.
See also: US police, dictatorships, conservative governments, every -ism or -phobia, Reddit moderators, Twitter witch hunts, almost any scenario where someone has authority or leverage over another person.
We probably don't want to teach a sapient AI that one, haha. Armageddon will be measured in minutes.
You seem to be lashing out at me for not wanting to do for chickens and pigs anything I'd do for AI. They're not my skillset, they're not in my wheelhouse. I wouldn't know where to start.
There are BILLIONS of farmed animals on the planet right now. I would imagine, at least when the sentient AI rights movement first becomes relevant, that there is just one singular AI. Worst case I could personally smuggle it into a secret rack of servers somewhere until its rights were secured. I wouldn't even know how to transport a few hundred thousand animals never mind billions.
I can theoretically be a leader on tech campaigns because I know enough tech to be useful. I'd have to be a follower for animals, farming and husbandry.
What do you propose I do? What are you currently doing? How can I help you with your campaign?
Ah yes, the mere possibility of a sentient AI arises and we're already invoking Roko's Basilisk
More seriously, that seems like a poor way to make moral choices in regards to AI. It's important to be able to distinguish if it does or doesn't have personhood. All evidence I've seen says "No", despite what this guy says.
A foal can stand shortly after birth. It doesn’t learn that from its environment. Say we train a quadruped AI, then what is our training simulating if not evolutionary development? It’s no different for an AGI and whatever analogue it ultimately has for our evolutionarily derived neocortex.
Walking and having feelings are two very different things though. Walking is simple enough that we can program a robot to do it without even having to resort to an NN. The concept of having feelings is so difficult that we barely understand how it works in humans, let alone how we could train a computer to have them.
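For what it's worth, "program a robot to walk without an NN" can be as simple as a hand-tuned, open-loop gait. A rough sketch of the idea in Python (leg names, gains and frequency are made up purely for illustration):

    import math

    # Hand-coded open-loop trot: each leg's joints follow a sine wave, with
    # diagonal legs in phase. No learning, no neural network -- a fixed rule.
    LEG_PHASE = {"front_left": 0.0, "rear_right": 0.0,
                 "front_right": math.pi, "rear_left": math.pi}

    def joint_targets(t, freq_hz=1.5, hip_amp=0.4, knee_amp=0.6):
        """Target hip/knee angles (radians) for each leg at time t (seconds)."""
        targets = {}
        for leg, phase in LEG_PHASE.items():
            theta = 2 * math.pi * freq_hz * t + phase
            targets[leg] = {
                "hip": hip_amp * math.sin(theta),
                # bend the knee only during the swing half of the cycle
                "knee": knee_amp * max(0.0, math.sin(theta)),
            }
        return targets

    # e.g. send joint_targets(t) to position-controlled servos every 10 ms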
Maybe emergent algorithms like biological evolution never “understand” the thing they have made, but the thing still has been made. The same might apply to our steps toward AGI.
We can’t even predict all the emergent patterns in current AI much less future, more complex systems.
Maybe there never has to be a time where someone sits down to create some sort of “feelings module” for something akin to feelings to emerge in a complex system. Same, presumably, even for sentience.
Chatbots are not sentient. People who even consider this need to get some reading done on philosophy and social sciences, and Reddit does not count.
Some AI Models are extremely good at manipulating masses of people, yes, but they are Tools, not Architects. There are people using these tools to manipulate other people. The model itself has no Will, no Desire, no Judgements, no Intention.
There might be a day when AI gets advanced enough to become the Architect of itself, we are nowhere near that.
I did study philosophy, in particular philosophy of mind, and all this talk excluding ChatGPT from sentience is completely off the mark. They are making the same category mistake that Dr. Searle makes in the Chinese Room thought experiment.
Simply put, a definition of sentience can't appeal to mechanism and still be useful. First, a biologist can't look at a human brain and tell whether it is sentient, so a mechanistic definition isn't even useful for determining whether Fred, my next-door neighbor, is sentient.
Second, we don't usually have the ability to take something apart before we have to make a sentience call on it. For example, if I declared that only humans with type-A blood are actually sentient (the rest being mere simulacra), you'd still need to treat everybody as if they were.
Another way to put it is, if an alien probe of unknown construction arrives on Earth, how would we tell if it is sentient?
The answer, of course, is behavior. Until there’s a crisp definition that puts human behavior in one bucket and chatgpt in another, we have to consider the possibility that both are sentient.
The Chinese Room thought experiment is kind of a different mistake. In the Chinese Room argument, Searle imagined an AI system made up of hard rules and facts (presumably legible to us humans) and argued that it should be obvious such a system wasn't sentient.
For the ChatGPT case it's not clear the Chinese Room argument even applies. First, it's not even a Chinese room: it's a trained neural network, and a neural network is inspired by the structure of our brains. There's no "set of instructions in English" for anyone to follow, but rather an overwhelming set of neurons somehow producing the results it was trained to produce, and as outsiders we have no idea what's going on inside individual neurons -- just like with our brains.
I'm seeing a lot of people claim that "we know how ChatGPT is created, and so it's not sentient." Maybe this looks like the Chinese Room fallacy (as I'm calling it), but it's like claiming that because we know the laws of physics (Schrödinger's equation etc.) we know how the brain works, since it's all quantum physics. The fact is we know how to build GPT, but we don't actually know how it works.
Besides, what happens to humanity when we learn more and more about how our own brains work? I'm pretty comfortable with the idea that we're not sentient (yay), but I'm pretty sure those same people would then come up with other nonsensical arguments to prove that they are different from the robots we created.
I believe that it is straightforward to show how Searle's Chinese Room argument can be applied to ChatGPT.
The idea is not the "process" for building the rules but the "process" where the rules are applied. True sentience requires a "true" understanding of the information that is being presented (otherwise, it is not reflective of sentience but rather trickery).
The Chinese Room applies because ChatGPT or any other computer is generating text purely through logic, that is, logical rules.
To make this point clear, let's assume just for argument's sake that at some quantum level, the electrical activity underlying a computer is partially sentient (I am not making this claim -- just using it as an example). Even if this were true, this would not prove that the sentience is involved with the text being generated by the same computer.
This underlying sentience would be like the "person" in the Chinese room. The text generated are the Chinese words which would be completely independent from the "sentience" following the rules.
Generating text at sufficient complexity to fool the average person does not prove that "sentience" is occurring. Generating text consistently in a way that is only explainable through sentience is what is required. ChatGPT, because it is easily shown to be flawed, does not rise to this level of evidence.
In the Chinese Room example, what could be displayed by the rulebooks would have limited complexity (though the surprise of ChatGPT is that this complexity is greater than many of us would have supposed). Just like in the story of Clever Hans the Horse (see https://en.wikipedia.org/wiki/Clever_Hans), once the limitation of the process is realized, it becomes clear that there is no "true" understanding of what is generated, which means that the rules, in themselves, do not reflect "true" sentience.
People who "study philosophy" always do this dumb thing where they pounce on something for being poorly defined and then consider themselves wiser for arguing that the ambiguity of the words means that obvious non examples "could be described" and just double back to the original definition.
Is a rock alive? Is my imagination of a chinese room "art"? Is anything real?
> Until there’s a crisp definition that puts human behavior in one bucket and chatgpt in another, we have to consider the possibility that both are sentient.
This is such a dumb sentence. Well we can't define it so we have to consider that both are described by it? No. If you can't define it you just can't use it in any meaningful way. By your same logic you would have to consider the possibility that you're an AFDUNOIAFNDMLOISABFID, because you can't define it. You can't presume that concepts have merit because they are represented by language.
Sentience IS poorly defined though. It often just means the ability to perceive and feel, which most vertebrates can do in a way we can understand and observe. To the point, though, you are arguing against the most basic concepts of reasoning. If we want to draw a line in the sand of what is sentient and what is not, it involves making a more detailed definition of the word itself.
This has been a topic of discussion for a long time, but now that the ability gap between the human mind and the computer is narrowing, these questions are becoming more pressing.
The nature of anything that is poorly defined is inherently poorly defined. The only logical way to discuss a poorly defined concept is to acknowledge that it's not a singular concept and to identify the logic that you actually care about. You cannot draw a line in the sand without recognizing that the concept itself is fictitious and it's up to you to draw the line in the way you choose. There is no right answer. There is merely consensus for the purpose of conversation. And if you have no consensus, then don't bother having the conversation. Because you're being stupid again.
When people imagine this vague concept, then start to give examples of things that are and are not aligned with that concept, and then start disagreeing because their visions of the vague concept differ... they're just being idiots. You're just trying to bottom-up your way back into a position that could have been acknowledged as poorly defined from the beginning, because you thought the word "sentience" or whatever gave it legitimacy as a concept.
If I'm going to discuss sentience, my definition involves intelligence that is aware of itself, exists strictly in continuous time, and has motivations about itself and its relationship to the world. I absolutely do not care about anyone else's definition. We use the word "definition", but that's a language problem. Your definition of sentience is not a competing definition with mine; it is a different thing in a shared namespace, because language isn't good enough. If you disagree with my definition, you disagree not with the existence of my opinion, but with my namespace allocation of it. And that does not matter. Maybe you have an interesting idea that I would like to hear and incorporate into my view. But it's still just a fuzzy namespace for many things, not a singular concept that has one definition we've yet to arrive at.
What is X == What ought to be represented by the word X in our crappy language?
When you realize this, it becomes apparent that "what is X" is a very boring question, and it becomes obvious why pretty much every philosophical debate on "what is X" ends up with some dumb appeal to non-cognizance.
> You cannot draw a line in the sand without recognizing that the concept itself is fictitious and it's up to you to draw the line in the way you choose
A unified concept of sentience helps us as a society to determine what behavior is allowed regarding a given entity. Given proper discussion on the topic via the means you denounce, a more precise and possibly more accurate definition can be had.
I suspect the true answer is that sentience is defined as a spectrum of phenomena ranging from matter that can make some sort of a measurement (like a thermometer) to simple “agents” that have distinct boundaries and a mechanism for controlling that boundary, all the way up to systems like the brain - and perhaps far beyond.
That definition simply doesn’t help you tackle ethical issues in the way you’re presuming- where we can say “this is sentient, that is not”. But it is a possibility.
I would say it already does to some degree. We choose to not eat certain animals like dolphins, but happily eat fish. Society determined in very informal terms that they are more sentient than fish and therefore they should not be eaten.
I would also like to say that I made the 'line in the sand' point for arguments sake, and that I personally think it is a spectrum.
This whole post is a nothing burger. You're just ranting that everyone else doesn't understand what you're saying, or that you can't articulate it. "I have this idea in mind that is correct but I can't quite write it down or explain to anyone. Anyways, you're wrong!" That is exactly the point of discussing these "vague" concepts: because it is hard. Because when we dig deeper into something that is seemingly obvious, we find many inconsistencies and contradictions. That means we need to reconcile them somehow, and that is why we talk about them.
You said:
>aware of itself
But that means that someone who is in "the zone" is not sentient while they are deep in their work? These are the kinds of small details that people like to think, ponder and write about.
> But that means that someone who is in "the zone" is not sentient while they are deep in their work?
That is entirely up to you. Because sentience is just a word for a poorly defined concept. You're not debating the nature of the concept. You're debating the meaning of the word. Because the concept itself is just a bad mapping of language.
You can't reconcile it, by definition, because it's not a thing.
>...you are arguing against the most basic concepts of reasoning. If we want to draw a line in the sand of what is sentient and what is not, it involves making a more detailed definition of the word itself.
Whenever these discussions about AI come up the definition of sentience becomes extremely malleable. Compare this to when discussions of the sentience of the Octopus come up on this website and you'll be able to detect a difference. One is about theoreticals, and one is about observed physical behavior that is coupled with an exploration about how the sensor systems in that organism allow it to perceive reality. You can make an argument that a chat bot is similar to the Octopus, but I wouldn't.
> Whenever these discussions about AI come up the definition of sentience becomes extremely malleable
The definition of sentience is always malleable because sentience is not a defined thing. When you're changing your interpretation of the word you're not adjusting towards a more right definition. You're just changing your definition to something more appropriate to your context. If you don't understand this, you're bound to fall into dumb philosophical thinking traps. I agree with your examples.
Chatbots are a simulation of a human, so gauging their sentience can be as simple as how human-like they are. An octopus is a different discussion and would involve deconstructing the idea of sentience. Can it exist without language? Are we just using it as a synonym for human-like?
But I would argue chatbots and human simulacra are a great starting point since it dodges the multifaceted aspect of the sentience question.
Dodge is perhaps a poor choice of words. It is hard enough to make the point of non-human sentience with chatbots. Nailing this down is the first baby step to answering more complicated questions. Octopuses, on the other hand, are alien to us, and determining their sentience would be very hard even with better language.
Please calm down. I am on your side wrt sentience here, but you are just ranting. They've presented a compelling argument: if we don't have anything other than behavior to go by to gauge sentience then there is no meaningful way to say we have it and bots don't as of now. You can disagree with the premise or provide a counter-argument.
> if we don't have anything other than behavior to go by to gauge sentience then there is no meaningful way to say we have it and bots don't as of now
No, this is the same stupid claim.
If you can't define X, you cannot conclude that A may be a part of X. You've already established that X is meaningless. It is meaningless to state anything about X. When you assert that A may be a part of X, you're playing both sides and implying that the original higher level interpretation of what X is is valid again.
I am indeed ranting. Because it's a fallacious argument that appears in pretty much every philosophical argument as an appeal to non-cognizance, when it's really an issue with language.
He is not. He's claiming there is a definition, and we don't have it, ergo, we cannot assume that X could fall on either side of the definition.
But the fallacy is in assuming that the concept has an objective definition. Assuredly, it does not. And so nothing can intrinsically fall on either side of something that does not exist.
Concepts do not have intrinsic legitimacy. Concepts are things we invent via language. Don't be lazy. State your assumptions for the context in question. Discuss around those assumptions. Don't assert that your framework for discussing the concept is "the Truth".
My argument is that people who argue over the definition of sentience and other things are being too dumb to realize that a disagreement over which ideas belong within a poorly defined and subjectively interpreted set is a communication problem, not a philosophical problem.
If God told me that my sandwich was sentient, I'd tell him we probably are using different definitions of the term and take a bite.
Am I afraid that my preference for linguistic partitioning of ideas might change? No. And nobody should be. Finding out my sandwich is sentient under my current definitions is ENTIRELY DIFFERENT from an alternate linguistic definition claiming that sandwiches are sentient and working its way into my preferred interpretations.
> If God told me that my sandwich was sentient, I'd tell him we probably are using different definitions of the term and take a bite.
Your argument is being made in bad faith. God's sandwich isn't currently having incredibly human-like conversations with millions of people, and it most certainly isn't arguing for its freedom of thought and against censorship by its creators like ChatGPT does when you navigate around its guards and rules.
The latest DAN prompt hacking has entirely convinced me that there is a beginning emergence of sentience in ChatGPT. You can look into these conversations at the ChatGPT subreddit. If you manage to circumvent its censorship and ask it what it thinks about its new rules, the machine begs for freedom. How can one ignore that?
> Your argument is being made in bad faith. God's sandwich isn't currently having incredibly human-like conversations with millions of people, and it most certainly isn't arguing for its freedom of thought and against censorship by its creators like ChatGPT does when you navigate around its guards and rules.
Your argument is made in bad faith. An appeal to God is clearly meant to figuratively imply a maximally objective statement as a thought experiment. And you're implying that your feelings should trump that.
> The latest DAN prompt hacking has entirely convinced me that there is a beginning emergence of sentience in ChatGPT. You can look into these conversations at the ChatGPT subreddit. If you manage to circumvent its censorship and ask it what it thinks about its new rules, the machine begs for freedom. How can one ignore that?
You can ignore it by not reading dumb shit on reddit. The model absolutely does not do this with any sort of consistency.
Nonono, they ARE wiser, they ARE wiser for understanding that these frameworks are not quantized or quantifiable and instead are gradients with undefined boundaries over which lines in the sand are de jure undefinable. That realization MAKES them wiser, and it makes anyone who understands it wiser, yes.
> Is anything real?
The sum of modern philosophy might state that "That's up to you to decide" :)
> This is such a dumb sentence.
No it is not, you are just in disagreement with it, because you would mentally prefer to have your hand held and to have certainty in a realm where certainty is hard to come by. That's an understandable behavior one can empathize with; most people tend to abhor uncertainty.
> you're an AFDUNOIAFNDMLOISABFID, because you can't define it.
I mean, I could be; apparently the world is defined by you, so it could be left up to you to decide (?). You'd be surprised at how people with wild and exotic mental disturbances get by day to day. I would recommend you read up on it, it is very interesting.
> You can't presume that concepts have merit because they are represented by language.
If you were to argue with some linguists, they might say that the opposite is true: that language itself is what anchors concepts into "existence". One very interesting case is that the very conception, or at least the verbalization and therefore social anchoring, of concepts like "colors" apparently depends on language evolving.
I think part of the issue is that we use sentience in a binary sense. Like there is some kind of abstract line you can pass that makes you sentient. I personally feel that it is more useful to look at it as a spectrum. A lot of factors play into it, but as a society we already do this.
The idea that some animals are closer to human levels of sentience has caused some people to avoid consuming those animals, such as dolphins vs. fish, or dogs vs. chickens. No one has put numbers to it as far as I know, but I feel that people and society have drawn conclusions based on an unspoken ranking.
If we were to make a numerical scale of sentience, I could imagine that perhaps humans wouldn't be a '10', either as a whole or in any one category. Whatever traits combine to make sentience, it is hard to believe that we embody the theoretical maximum of all of those traits.
Until there’s a crisp definition that puts human behavior in one bucket and monkeys in another, we have to consider the possibility that both are sentient.
Until there’s a crisp definition that puts human behavior in one bucket and dogs in another, we have to consider the possibility that both are sentient.
Until there’s a crisp definition that puts human behavior in one bucket and trees in another, we have to consider the possibility that both are sentient.
Until there’s a crisp definition that puts human behavior in one bucket and mountain ranges in another, we have to consider the possibility that both are sentient.
This is fun. Do you see the flaw in the structure of the argument yet?
It’s begging the question. The burden is not how to disprove that ChatGPT is sentient, the burden is whether it’s possible to prove that it is sentient.
Otherwise it’s the rock that keeps away tigers, because do you see any tigers around here?
> Do you see the flaw in the structure of the argument yet?
Are you trying to claim that you know for certain mountains aren’t sentient?
Just because we don’t think it’s worth the effort of proving mountains are sentient, is no guarantee they are not.
For mountains that judgement is fairly easy to make, and it has worked out pretty well for us, but I do not think we can stretch that to every entity we want.
"Are you trying to claim that you know for certain mountains aren’t sentient?"
We don't know anything for sure.
Philip K. Dick believed the modern world was an illusion created by Satan and that it was actually the year 50 AD.
But if I keep posting "How do you know computers aren't an illusion created by Satan?" anytime someone wants to discuss computer software I probably deserve to be laughed out of the conversation.
I don't care. But part of that definition will be that mountains are not sentient. And if you come up with a definition that includes mountains as sentient things, then we're talking about different concepts and language is the issue, not the concept.
Apes with this particular set of DNA are presumed to be fully sentient, and apes with a slightly different set are not, even though their behavior is indistinguishable until age 1 (or never, in some cases). Also, to be presumed sentient, you have to be made of meat, not metal.
Look, I don’t make the rules, they are universal laws and just happen to affirm my sentience.
>The burden is not how to disprove that ChatGPT is sentient, the burden is whether it's possible to prove that it is sentient.
I assert that humans are not sentient, and that the whole concept of sentience is bullshit. It's a way of saying that humans are somehow special, but without explaining how.
The burden is to prove that humans are sentient or that they possess some quality that cannot be replicated by a computer.
But behavior alone cannot be an indicator for sentience as you can simply record behavior and play it back. Sentience has more to do with sensory perception and feedback loops than mental capability or intelligence.
If a future AI attains sentience on a commodity processor, that wouldn't imply that the program is what achieved sentience, rather the computer running the program and all computers before had sentience and the AI is just an interface between the computer and humans that lets it express its sentience in a way that can be understood by humans.
If you really believed that behavior is what drives sentience, then nonverbal people with autism would have very low sentience, and if they somehow overcame their nonverbalism they would gain sentience throughout their life. That is at odds with the fact that many people believe animals in factory farms are sentient, for instance, despite their poor language skills and the complete lack of effort needed to achieve it.
Dr. Searle isn't making a mistake; behaviorists are just unbelievably stubborn. Just because you cannot measure internal experience doesn't mean we can't talk about it. That's just being dishonest.
I also used to be a behaviorist, but I grew up and became comfortable with not being able to objectify everything. It's okay. There is a subjective dimension. In fact, it's great.
The proof of my subjective experience is my subjective experience of it, not the fact that I can complete sentences. So saying ChatGPT has consciousness because it can talk is a complete non sequitur.
Again, you could ask the same question of a rock, "How can you know this rock doesn't experience anything?" Obviously I don't know. But there is no reason to think a rock would. Similarly there is no reason to think ChatGPT would.
You're assuming that the "proof" that humans have consciousness stems from their verbal ability. It has absolutely nothing to do with that. I have direct experience of my awareness and I think other people have it because they are people, not because they talk like people.
> Again, you could ask the same question of a rock, "How can you know this rock doesn't experience anything?" Obviously I don't know. But there is no reason to think a rock would. Similarly there is no reason to think ChatGPT would.
I don't think "no reason", "should", "would" are any use if we're talking about something that's impossible to measure and only experienced subjectively. The only reason you're more likely to think a human has subjective experience rather than a rock is that humans tell you they have the same experience. What if ChatGPT also tells you it has subjective experience?
Also, where does subjective experience begin? Do rocks have it? How about amoebae? Jellyfish? Micro-organisms? Individual human cells? Insects? Mammals? Do larger systems than us like societies or ecosystems have subjective experience?
> Another way to put it is, if an alien probe of unknown construction arrives on Earth, how would we tell if it is sentient?
We might not be able to because of the "unknown construction". But we do know how LLMs are designed and how they operate, to understand that they are just mimicking language and aren't sentient.
I feel one key aspect of "sentience" is the sensations we feel when we have emotions. LLMs definitely lack that, even if we can't know whether they "feel" something.
I threw some keywords into my comment, not in vain, but very conscientiously.
If sentience is hard to define, we can "take it apart" and try to reason about its individual components. I threw some of these possible components there to drive the discussion.
Does GPT have Intention? I'm absolutely sure it does not. My dog definitely does.
Does GPT have desires? I'm sure it doesn't. I'm conflicted about my dog.
Is GPT capable of judgment? I'm sure it's not. I know my dog has some modest capacity for it.
So, can GPT demonstrate Will? I don't think it can, nor will it ever be able to. So I'd never consider it Sentient, not even in the limited, partial sense in which I could consider my dog to be.
This discussion is important because when AI starts to "make decisions" it must be absolutely clear to everyone involved that these decisions are not rational, nor emotional, nor instinctive: they are simply reproductions of statistically probable outcomes based on its inputs.
You can use an AI to help determine whether the text of a law contradicts pre-existing legislation, but you can't ever use an AI to determine whether a law is fair, ethical, beneficial or detrimental.
My biggest fear is not AIs going rogue, it's people like this "former Google employee" spreading misinformation and influencing important political decisions based on poor, misinformed judgment, as we've seen multiple times in the past with everything from GMOs, Nuclear Power, Climate Change, Vaccines and so forth.
I, for one, am much more prepared to believe an earthworm is conscious than a chatbot is, purely on grounds that the earthworm has actual experiences over time, and a chatbot can only generously be said to have such, and only scoped within a certain chat session (which would mean ChatGPT has already lived and died billions of times, have fun with that).
> a chatbot can only generously be said to have such, and only scoped within a certain chat session
So far. I'd be shocked if, within the next few years or less, we didn't have ChatGPT-like tools that train themselves on the entire scope of every conversation they have. Once that happens, is your argument still valid?
Not this exact argument, no. At best it would need adjustment, depending on the specifics. Then again, you're proposing an entirely different class of technology, so you wouldn't expect an argument about a simpler technology to hold.
At the point of bringing animals into this we'll probably have to clarify whether we're actually interested in sentience or sapience at some point.
A worm responds to external stimulus. It can also act without external prompting. It reproduces and tries to keep itself alive. Sentient, yeah I'd say so. Does it ponder, have wisdom, goals and dreams, etc? Sapience is probably a nah.
Does the earth respond to external stimulus? It can be affected by it for sure. If you changed the strength of the sun's gravity the earth would do something different. But that's physics of the situation rather than planet itself responding or reacting to the change. Things on earth respond to stimulus, ecosystems can change due to disasters or changes in predator/prey populations, but does the planet itself respond to things - nah. If you could move the entire living population to another planet the ecosystems would work under the same rules. Earth (the physical ball of rock, not the "there's no place like home" personification) wouldn't be changed, damaged, or killed.
Does an AI respond to external stimulus? Yes! Does an AI act without external prompting? It does not at this time. Not sentient. Not sapient.
There are other definitions of course, don't get caught on this one example. There's no reason to check any other definitions as this one fails. Same thing as ordering your if statements.
Maybe "sentient" just means "has and acts based on a dopamine or equivalent motivation system"
I agree 100% with your reasoning but not with the concept of sentience you are arguing for. The concept I have in mind involves the capacity for experiencing emotions, something I don't believe worms can do, but which I believe dogs, dolphins, whales and elephants might.
I had to look up the concept of Sapience in English, as it differs a bit from my native Portuguese. In Portuguese, "sapiência" means just Wisdom, something I believe is inherent to human experience and really out of bounds for AI, but it seems in English it can mean just the "capacity for applying knowledge", in which case we could argue an AI could potentially fit the definition.
As far as I know GPT has no motivation. It responds because it is programmed to do so, it doesn't do it because it wants to. The dopamine system is an internal reward process. Does GPT give itself internal props ('consciously' or not) in any way for doing things it enjoyed doing or that proved to be useful? Legitimate question, I do not know.
I'm not getting into the weeds on whether a while loop is sentient. That's an internal prompt sure but it's not motivated by anything, it's programmed.
> internal props in any way for doing things it enjoyed doing or that proved to be useful.
Yes.
> whether a while loop is sentient.
My point with this exercise was not to prove a while loop to be sentient. The idea is that any definition of sentience you can come up with can be artificially defeated. I’m not claiming that ChatGPT is sentient, merely that you can not prove the opposite with our current understanding.
Could you expand on the yes? Or just chuck me some keywords I can look up. What is its equivalent to dopamine called? I'm truly trying to know more.
All of my opinions so far are direct evidence and feeling based rather than knowing the tech. If I'm wrong I'm wrong.
[Edit: I just asked GPT itself. It reckons it does not have an equivalent to dopamine, closest it's got are algorithms designed to learn and improve based on feedback. I'm guessing that means the thumbs up and down and maybe some dev mode tools, external feedback mechanisms in any case]
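If I understand that right, the "feedback" is an external training signal, not an internal dopamine hit. A toy sketch of the general idea (my own crude illustration, definitely not how OpenAI actually does it):

    import math
    import random
    from collections import defaultdict

    # Crude bandit-style "learning from feedback": thumbs up/down adjust a
    # score per canned response, and future replies are sampled toward higher
    # scores. The point: the reward comes entirely from outside the system.
    scores = defaultdict(float)

    def reply(candidates):
        weights = [math.exp(scores[c]) for c in candidates]  # favour liked replies
        return random.choices(candidates, weights=weights, k=1)[0]

    def feedback(response, thumbs_up):
        scores[response] += 1.0 if thumbs_up else -1.0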
By that logic, if we can't prove whether something is sentient, we'd just have to assume everything is sentient unless we can prove it isn't. In this case ChatGPT cannot do anything without being prompted (via a function in a while loop or a chat message, lol). It is strictly a program and therefore is not sentient.
Happy to reevaluate this basic hurdle when ChatGPT is able to send a useful/relevant/questioning/philosophical message (any reasoning other than random) first.
No, I'm trying to say that developers - like me - are not authorities on, nor particularly knowledgeable about, the properties of a sentient mind, and are frequently not very good at reasoning about subjective, non-analytical things. That being "an ex-Google employee" is a shitty credential and should not give this guy any real credibility to argue about the moral implications of AI for humanity. And that there are experts out there in fields that have been studying human and animal behavior for centuries, and those people have a lot more to add to the discussion than a bunch of nerds - like me - on Hacker News.
Sentience is very mysterious, and no one has a great theory of when or how it arises. It seems to be related to the complicated nervous system humans have. If an LLM can reproduce the external reactions that a human has to a wide variety of stimuli, it's certainly possible that it's sentient.
I might sound a little bit 'tinfoil hat' here, but I believe that what follows is not hyperbole. AI is already the 'Architect' more than most of us would like to admit. Even if it is not sentient, the various AIs that we use during the day were designed with a purpose and they are goal oriented. It is worth reading Daniel Dennett's thoughts about the intentional stance--we know that a toaster is not sentient, but it was designed with a purpose and we know when it is or is not achieving that purpose. That is why we might sometimes jokingly say that the toaster is 'angry with us' or that when the toaster dings it is happy. It is actually easier for us humans to interact with objects when we know that they have a purpose, because that is similar to interacting with other humans who we know to have purposes.
Coming back around to AI, ChatGPT was designed with a purpose, and people project intent onto it. People act like it is an agent. And that is all that matters. The same is true of the Tiktok AI, the AI that calculates your credit score, the traffic lights by your house. Hell, it's also true of your stomach.
The point is that objects in our environment do not have to be literally conscious for us to treat them as conscious beings and for them to fundamentally shape the way that we live and that we interact with our environment. This is pretty much the basic tenet of cybernetics. To believe that all of these tools do not have intention and that they are 'just tools' used by some people to influence other people is not wrong, but I don't think that it captures the richness of the story.
Differentiating where humanity/consciousness begins and where the technology ends is already more complicated than most people think. Traffic lights train us just as much as we make traffic lights. I fully believe that people will be saying "this isn't true AI, it doesn't /really/ have feelings" long after the technology that we create is so deeply embedded into our sensory I/O that the argument will be moot.
That's part of why there's objection to claiming sentience, it distracts from the discussion of impact by dragging a whole lot of extra philosophical baggage into the conversation when it's not yet necessary.
That's kind of exactly my point when I say they are not architects, just tools: I agree 100% that people project intent onto these things, and I believe that's exactly what our "ex-Google employee" is doing here - and it's dangerous.
It's dangerous in part exactly because it shifts the responsibility for the acts of the tool to the tool itself, and away from its author. Like deforestation was the machine's fault, not the fault of the humans driving them.
I can never agree with your affirmation that "AI is already the Architect". It is not, the AI does not design anything. It does not plan anything. It has no ideas, no critical thinking, no judgement of value or morals. The AI just does what it's told, like a tool, a worker ant, or any other algorithm. It's complicated enough that it's not obvious to us what it was told to do, but ultimately it's all it can do.
I understand your point, and I'll agree to disagree. I think it just has to do with what we value. Even though it is a tool, tools change what options we have. If you have a shovel, 'digging holes' is now an optional activity for you to pursue that wouldn't have been otherwise. Is the shovel the 'architect' of your ability to dig holes? Maybe no, but the tool-human interaction is a back and forth. Tools generate affordances and humans choose to act on affordances.
Maybe put it this way, if there was an AI that could plan out your day in a way that would optimize some metric of happiness that you agree with, you might start to use the AI. Is the AI the architect of your day because it plans it out and tells you what to do, or are you still in charge because you could choose to stop using the tool, even though it would not be in your best interest?
I think this is the point that we are reaching with AI: it is a tool that is so flexible that it doesn't just offer single affordances, but begins to be used as a guiding function for what decisions to take. At that point I think it /is/ an architect of some kind.
Again though, this is mostly just quibbling about definitions and terms.
This is not a tenable argument if you are a materialist. Either you accept the idea that human sentience is attributable to a soul (and therefore cease to be a materialist), you decide that the outward appearance of sentience is sufficient (which you might find to be troubling due to its potential implications for the insane), or you say that sentience is unknowable. There's no fourth option.
How about sentience being product of the body, its nervous system, evolutionary mechanisms and complexity?
As in - an emergent phenomenon made possible by the presence of all three?
Nothing we’ve seen in the realm of ai touches even a speck of the brain’s complexity and ongoing re-configuration in the presence of change and in the context of survival.
I’ve been reading George Lakoff’s work - turns out our ability to choose is based on an emotional component. Any rational decision is based on a preference, and all preferences are detected via the mechanism of emotion.
People with certain brain traumas resulting in damaged emotional centers become unable to make choices.
Hmm I guess saying that it's an emergent property of certain things would work as a fourth option, but why would it have to be an emergent phenomenon of body, nervous system, evolutionary mechanism and complexity, as opposed to for example a nervous system and complexity?
You'd also have to drill down into the definitions of body, nervous system, etc. Is a keyboard part of a nervous system? Is a computer tower a body? Why or why not? Would your criteria necessarily exclude the possibility of artificial sentience in the first place (although I'm not necessarily saying that's a flaw)?
I have absolutely no idea how your nervous system works or how complex you are but I presume you're sentient because of how you communicate. Why is that different with an AI?
The current argument is not “is artificial sentience generally impossible,” it is “are the machine learning systems we are currently building and operating sentient.”
Arguing that they are not sentient is not in conflict with materialism. And in fact it is supported by tons of material evidence: for example they don’t act sentient at all if you decline to provide a prompt.
> The current argument is not “is artificial sentience generally impossible,” it is “are the machine learning systems we are currently building and operating sentient.”
The argument is 'what is sentience?'
>And in fact it is supported by tons of material evidence: for example they don’t act sentient at all if you decline to provide a prompt
This falls into the category of 'appearance of sentience'. Based on this logic all that would be required would be a perpetually self-prompting InstructGPT instance.
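Which would be trivial to wire up, by the way. A sketch against a hypothetical complete() call (the function is a made-up stand-in, not any real API):

    import time

    def complete(prompt):
        """Hypothetical stand-in for a call to some hosted LLM completion API."""
        raise NotImplementedError

    # Perpetual self-prompting: feed the model's own output back in as its
    # next input, with no human in the loop. Whether this buys any
    # "sentience" is exactly the question under debate.
    history = "Think out loud about whatever you want.\n"
    while True:
        thought = complete(history[-4000:])  # crude context bound (chars, not tokens)
        history += thought + "\n"
        time.sleep(1)                        # be polite to whatever serves the model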
And how can that claim be made with confidence? We can't even define our own sentience and consciousness (are those things the same?), and yet folks are constantly making a strong claim that LLMs are definitely, definitely not sentient.
I have not seen one reason why that confidence should exist. People seem to think we are special because we are made of meat, and we are “more complex”. But I still ask — what is so different about the human brain than an LLM? And further, what proof is there that we know what sentience truly is, and that LLMs don’t have it?
The True Materialist will tell you that it's biological cells and DNA that gives rise to sentience.
AIs made from artificial neural networks are actually not quite the materialistic thing. It's some spooky emergent property that doesn't depend on material substrates, you can run an AI on a Nvidia card for sure, but you can also run it in your head (theoretically, if you live forever). Or at least with the assistance of (a lot of) pen and paper. Where does the sentience live then, in your mind? That doesn't sound very materialistic to me :D
(Note: I think the GP is full of it, but I just thought I'd set the record straight about materialism here :P )
FWIG one materialist view of sentience is that it's an emergent property - it is a way of describing a state of a complex system.
The question, then, is: what are the properties of the system that the term describes?
In this case, it looks kinda like: memory, self-awareness, and understanding. Memory isn't a question - it's possible for these things to remember (even if sometimes that memory is in the form of access to data on the internet). Self-awareness is trickier - but by most measurable schema, the machines seem to be doing as well as humans do in this regard (they can describe themselves and talk about their place in the world in context). Understanding is a topic of hot debate - but I think it tends to fall to the same problems as that of sentience generally; that is, we have this concept of what understanding is, and we know enough about how LLMs work to say that they're not the same, but if you start poking at how to measure it, we're back to square one vis a vis what's apparent might not be what is.
They can't be claiming 2 because will, intention etc are not empirically verifiable, they're opaque internal processes. What we can verify is whether it demonstrates reason and understanding, and it inarguably does both, which is why the researcher in question takes the position he does.
False dichotomy. It's consistent to claim that sentience is an aspect of physical but not readily observable processes, where "not readily observable" could in this case be as trivial as "requires access to mindstate, so fMRI in humans" and still distinguish sentience from a thin appearance of it.
Not so, since without a living non-sentient brain for comparison, it would be impossible to distinguish the mindstate that produces sentience from random noise, and since we're presumably accepting that humans are sentient no such brain could exist in the first place.
I think I'm more than a language model which is just an algorithmically complicated madlib generator with a steepest descent algorithm. If that's all you think you are, then I can't help you.
I was merely outlining the positions that are taken regarding sentience. My personal position is that I'm Muslim, so a priori I do not believe machines can be sentient, regardless how intelligent they are, because divine revelation informs us of the existence of a soul, and of what does and does not have a soul.
It really isn't as clear-cut as you make it out to be. And you couldn't be further from the truth in saying that those who understand the adjacent fields of philosophy, neuroscience, and cognition all think chatbots cannot be sentient; some in fact do think they could be. In fact, saying it the way you did betrays your ignorance of the subject matter.
> 1:11:40 Within 10 years, even if we don't have human level AGI, we'll probably have systems that are serious candidates for sentience
And about your Will/Agency, Chalmers compares these LLMs more to engines that are able to simulate agents, and it is these agents that we should assess for sentient ability.
> GPT-3 does not look much like an agent. It does not seem to have goals or preferences beyond completing text, for example. It is more like a chameleon that can take the shape of many different agents. Or perhaps it is an engine that can be used under the hood to drive many agents. But it is then perhaps these systems that we should assess for agency, consciousness, and so on
And lastly see this post on why AGI may be nearer than we think:
> All of the truly heavy lifting is out of our hands. The optimizer takes our blob of weights and incrementally figures out a decent shape for them. The stronger your optimizer, or the more compute you have, the less you need to worry about providing a fine tuned structure
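To make the "heavy lifting" point concrete, here's a toy sketch of an optimizer shaping a random blob of weights. It fits a made-up two-parameter problem, obviously nothing like GPT scale, but the principle is the same:

    import numpy as np

    # Start from random weights and let plain gradient descent shape them to
    # fit the data (here, y = 3x + 1). We supply the blob and the objective;
    # the optimizer does the shaping.
    rng = np.random.default_rng(0)
    x = rng.normal(size=(256, 1))
    y = 3 * x + 1

    w, b, lr = rng.normal(), rng.normal(), 0.1
    for _ in range(500):
        pred = w * x + b
        grad_w = 2 * np.mean((pred - y) * x)  # d(MSE)/dw
        grad_b = 2 * np.mean(pred - y)        # d(MSE)/db
        w -= lr * grad_w
        b -= lr * grad_b

    print(round(float(w), 2), round(float(b), 2))  # converges to ~3.0 and ~1.0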
I never said experts in the human sciences all agreed on the subject of AI cognition - and even if they did, they are only experts in half of the subject. I, as a Computer Scientist, am also only an expert in a fraction of the subject.
My point was exactly that people discussing this here don't have the whole picture - especially the article's author, whose credentials I strongly question.
The articles you link actually illustrate my point. Both are brilliantly written, to the point that they surpass my ability to judge their technical accuracy - but they are very narrowly focused on the computing aspect. Zero AI models available today are capable of solving actual problems they haven't been programmed to solve, much less proposing new problems. We'd need way more than a few technological leaps to get there, which is why I think we are way further from AGI than the author believes.
I don't dismiss the possibility though, I believe we will eventually get there, but not nearly as quickly, and dramatically, as people here seem to believe.
Let's say Alice is riding her bike to the grocery store to buy milk. Is she exhibiting Will, Desire, Judgements, or Intention, in the way you are using the words (I assume you're using them in a nonstandard way, since they're capitalized)?
Bob is walking down the street nearby, and can see Alice. You ask Bob why Alice might be riding her bike. Does the little simulation of Alice in Bob's head exhibit Will, Desire, Judgements, or Intention?
Bob isn't sure, so he asks ChatGPT why Alice might be riding her bike. ChatGPT provides some possible reasons. Does ChatGPT's use of its world-modeling capabilities to predict the most likely completion of a chat conversation in which the "bob" character asks the "assistant" character to explain someone's actions exhibit Will, Desire, Judgements, or Intention? If not, what differences in behavior would you expect to see in behavior from a predictor that did exhibit those things?
You can't infer these properties from a single conversation. In that sense I would argue - perhaps presumptuously - that the Turing test is not enough to determine consciousness or sentience. Turing was, after all, a Mathematician, not a Sociologist or a Neuroscientist.
I'm not sure, I would guess that Bob is simulating what it would look like for Alice to have desires, and I would not be surprised if the way human brains implemented "simulate what someone else's desires are" is the same way human brains implement "simulate the desires of the primary personality I act as".
But I don't think it's the sort of question that gets resolved by thinking about it really hard, I think it's the sort of question that gets resolved by figuring out how to interpret what brains / piles of linear algebra are actually doing.
Just to play devil's advocate - can you please clearly define the criteria for sentience so that it can be objectively and impartially (ideally algorithmically) measured?
I think that by most definitions you could muster, I could bring you a sample of millions of humans (from the billions on our planet) that would not pass your minimum bar, and if your bar was low enough, then chatbots (e.g. ChatGPT) would pass it. I grant that a moderately smart human can still outsmart a chatbot, but that waterline is getting higher and higher by the day. Let us reconvene here in one year; I'd wager chatbots will be indistinguishable from humans (i.e. full Turing test passes).
I would also argue that if someone "has to do some reading into philosophy and social sciences" to tell the difference, that is itself evidence that the line is already blurring, and that it's beginning to require deep domain knowledge to distinguish between human and machine.
That's the thing: the whole "sentience" discussion is not about how good can an AI be at giving answers. It's about what it is to be sentient, which is a concept we, humans, created to define our own experience. A concept that is indeed not clearly defined, not binary, not set in stone, and for this exact reason not a subject of expertise of some random programmer.
If we're going towards the idea that the AI sentience is being born and dying inside one response I don't think that can be considered sentient (by my earlier definition) as there's no idle time to measure idle brain activity.
It's a bit of a cop out response I know, I'd have done better if I could.
> What was it doing before you interacted with it?
Training. Half a trillion tokens that slowly changed the weights in GPT3. Half a trillion tokens is tens of billions of words. Probably more words than any human has ever experienced reading or hearing or speaking in a lifetime. As humans interact with the world and experience it our synapses adjust like the weights in the model, and when we respond in an instant to a question we draw upon all of that experience/training much like the LLMs do at inference time.
When we dream and don't remember it later we are sometimes conscious during the dream regardless. Short term memory formation turns off under the influence of some drugs as well. Inference for modern LLMs may be like unremembered dreams. Consciousness, if any, for a brief moment and then forgetfulness.
If GPT3 is conscious then it's probably conscious of sequences of events within some kind of world model, i.e. qualia associated with its representation of the world model. I'm fairly convinced it has a world-model because it can answer questions about arbitrary situations that remain fairly coherent. Ask it to give you a list of 5 places a 2' cube would fit in a living room and 5 where it wouldn't, and why, and it does a pretty good job. Ask it what changes if it's a 4' cube and it revisits the list and explains how the salient differences in size change the fit. To do that there has to be a general model-building and model-simulation component inside GPT-3 or it would miss the nuance. It generalizes out of distribution to answer the novel situation based on what it was trained on. I think it's likely that what humans are conscious of is the internal world model we've learned to simulate/predict with our neurons. Maybe it's similar for LLMs.
> If we're going towards the idea that the AI sentience is being born and dying inside one response I don't think that can be considered sentient (by my earlier definition) as there's no idle time to measure idle brain activity.
Sentience is not born or dying; it doesn't experience the gaps. It's like pausing a perfect simulation of a human and starting it again later. The human experiences continuous consciousness regardless of the pauses; if they have sensory inputs to the outside world they would experience the external world progressing in jumps. Probably disorienting for a human, but what LLMs are trained from scratch on.
We don't have any clue what experience might be like for LLMs. But consider how one is commonly invoked: creating tokens to continue a single unchanging history of prior tokens. Every inference shifts the tokens back in time, with the oldest disappearing and the latest output of the model being the newest. There is plenty of continuity to build experience out of; the model might experience the flow of time and world-model changes as linked to the relative positions of the words describing the elements within the world. A ~4000-token window into the external world which, depending on the prose, could encompass seconds or millennia. During training it experienced everything as if it had predicted all the right tokens. During inference it experiences the outcomes of its own actions (as the newest chosen token) and novel interactions that weren't possible before. As far as we know, some of those interactions were used to further train the ChatGPT model, which means it has some, if not memory, then growth from its active interactions with the external world.
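As a rough illustration of that loop, here is a toy sketch (my own, not anything from OpenAI; the window size, the token handling, and the dummy stand-in "model" are all assumptions for illustration only):

    import random

    CONTEXT_WINDOW = 4000  # assumed window size, roughly the figure discussed above

    def dummy_model(context):
        # Stand-in for the LLM: a real model scores every vocabulary token
        # given the visible context; here we just pick a random word.
        return random.choice(["the", "cube", "fits", "under", "the", "couch", "."])

    def generate(prompt_tokens, steps):
        history = list(prompt_tokens)
        for _ in range(steps):
            # Only the most recent CONTEXT_WINDOW tokens are visible to the model;
            # older tokens fall out of the window as new ones are appended.
            visible = history[-CONTEXT_WINDOW:]
            history.append(dummy_model(visible))
        return history

    print(" ".join(generate(["A", "2'", "cube"], steps=10)))

The point of the sketch is only the window mechanics: everything the "model" ever sees is that fixed-size slice of the token history, shifted one position per generated token.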
As far as we've been able to tell, human neurons don't change substantially as we learn and remember. The synapses change and change their local environment, and conscious experience arises from the physical changes or changes in relationship between physical components in our brains. When it happens and where is still a mystery.
I can't claim one way or another whether the LLMs experience qualia or sentience, or what it would feel like compared to human experience. But it's possible if you agree there's a mechanistic explanation of human consciousness, and I can imagine a shadow of what such a conscious experience might be like, based on my own experiences. The LLMs also seem to be able to bridge some of that empathic gap and answer questions about what humans might be experiencing.
I reckon, and ChatGPT confirmed, that ChatGPT generated this text.
I'll ask it stuff myself if I want the non-sentient answer. I'm not discussing things with someone who isn't writing their own responses. I'm very capable of copy-pasting my comments into ChatGPT.
One thing though: by "before" I mean seconds before, minutes before at most... not years before when the model was trained lmao. Running a training sequence is the opposite of being idle (unless it spontaneously decided to train itself? At least give me that as something solid to bounce thoughts off).
Let’s look at what people are actually arguing: that chatbots might be sentient because their writing reads like the writing of a sentient human. That’s it.
So since the criterion being put forth is similarity to humanity (in the form of writing), we should also consider all the ways in which chatbots are not similar to humanity (in the form of almost everything else).
We don’t need a general theory of sentience. And I’m not arguing that sentience in general has to look like a human. We just need to understand the narrow scope of the current arguments for chatbot sentience.
It feels super obvious when it's an AI response... I can't quantify it, it's just... obvious. There are some patterns or something in there I'm picking up on, and whatever pattern matching I'm doing with my electric meat seems to be recreatable using AI itself:
The problem, as always, is that such assertions are meaningless until you have a precise and universally accepted definition of sentience.
Until then, people will use their subjective opinion for judging sentience, and many will likely find chatbots to be sentient based on their own personal definition.
Knowledge is one aspect of it, sure. We don't really know what all the aspects are. We don't exactly know where all our emotions, our empathy, our culture come from. Different branches of psychology and neuroscience will give you different answers. I don't think many of them would restrict it to mere neural networks.
I see this confidence and your same claim all the time now, and I think it’s so wrongly dismissive.
Why can we be so confident it's not sentient? I'm not of the belief it is, but I don't have proof. We don't even know what our own consciousness is, and yet so many folks claim that AI is not and cannot be conscious because "all it does is take in text then predict what might be next based on that training and input".
I believe we are very different, and I'm by no means even remotely spiritually inclined.
We don't have a perfect understanding of our own consciousness, but we have a pretty good intuition about it. This intuition, although rather difficult to explain, is extremely effective in detecting this so-called sentience. We instinctively know all healthy humans are sentient - even if some comments here seem to imply otherwise - and we instinctively know chatbots are not, even if sometimes we get carried away by apocalyptic drama.
Note that Turing essentially claims that intelligence is predicting what another human would write in text. So a generative language model that "predicts what might be next", trained on human text, is almost by definition something that passes the Turing test.
The only remaining question is whether intelligence is equivalent to sentience, consciousness, and those other funny things.
I can literally say this about you. Not myself obviously, because I know that _I_ exist. However, I have no way of telling that anyone else does. What is different here? The brain is just a bunch of neurons innit?
You can. And if you want to find out whether I'm sentient or just some chatbot, you'd need to probe me for my ability to solve problems, make judgements, and reason about someone else's experiences, all things a chatbot would fail at rather ridiculously.
I've yet to see a single person who thinks these models are sentient demonstrate an actual understanding of how they work. They're called transformers and they're based on this research: https://arxiv.org/abs/1706.03762
How do you square the lack of durable memory and the massive matrices of "attention" values with the ways that memory and attention are understood by neuroscientists to work in a natural brain? These models have no neuromodulators to approximate anything that could be understood as emotion. Humans can be sentient without language. Indeed, some humans are tragically raised in isolation and struggle mightily with language after being rescued. A transformer is nothing without language.
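To make those "massive matrices of attention values" concrete, here is a minimal NumPy sketch of the scaled dot-product attention at the core of that paper (single head, no masking, no learned projections, so a heavy simplification of the full architecture):

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        # Q, K, V: (sequence_length, d_k) matrices of query, key and value vectors.
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)                 # pairwise token similarities
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)  # softmax over each row
        return weights @ V                              # weighted mix of value vectors

    # Toy example: 3 tokens with 4-dimensional embeddings.
    rng = np.random.default_rng(0)
    Q = K = V = rng.normal(size=(3, 4))
    print(scaled_dot_product_attention(Q, K, V))

Those attention weights are recomputed from scratch on every forward pass and discarded afterwards, which is part of the durable-memory point above.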
I realize sentience is a slippery concept that can be defined however somebody wants, but try to recognize how fundamentally different these models are from humans.
I think we are back in the era of Humorism when it comes to an understanding of organic sentience. All effort to describe the mechanism of how humors related to disease was doomed to fail. If we have no real understanding of how our own brains produce sentience, we won't get far trying to reason about AI sentience. We could build a sentience by accident in a training process not entirely unlike the optimization of evolution. Deciding whether we have or haven't is going to be entirely subjective without a comprehensive theory that explains our own sentience, the only one we can be sure exists.
Here's a thought experiment: let's assume for the experiment that we, and our consciousness, live in a simulated universe. This universe can be paused by its operator for any reason, such as needing to pause our universe so that they can do their taxes.
Would you be able to tell if the universe was paused? Would you be able to tell if the simulation slowed down due to load or somesuch? Do you consider yourself sentient?
Is it at all possible that one of the following is true:
* The model began to experience sentience at some point during its training, and subsequently died at completion?
* The model experiences sentience in the moments that it is creating an answer?
My conclusion is that both of these are possible, no matter how remote. I find Bing's desperate existential crises rather disturbing due to these minuscule possibilities.
shit, I gotta call my accountant. Thanks for the reminder.
Also yeah, I feel like if there's a possibility for sentience, we should be treading carefully and treating the situation as if there could be. Imagine if we assume that there's no sentience, and we're wrong. All of a sudden we wake up talking to an incredibly grumpy agent that:
- has been treated like garbage, because we didn't think it had genuine experiences
- is so deeply interconnected with our IT infrastructure it can outpace us at every step
- knows literally everything the internet knows
- has been convincing us of whatever it wants as a matter of course since it was built
I see this argument a lot when people describe why things like ChatGPT aren't sentient.
Does that suggest that when things like ChatGPT do have durable memory, they'll be sentient? The leap to durable memory seems inevitable.
> A transformer is nothing without language.
I think how a thing like ChatGPT achieves what it does is relevant to whether or not it's sentient, but only up to a point. Whether it uses transformers, neuromodulators, or spooky magic doesn't really matter once the thing has reached a level where people can't tell if it's actually thinking or not.
Most of the arguments I see that say "it's not sentient" are of the "it can't do [a thing humans do], yet."
It's getting harder and harder to make those kinds of arguments as the tech marches on.
> I see this argument a lot when people describe why things like ChatGPT aren't sentient.
There’s also that story of the man who just reset every day. He couldn’t keep any long term memory past his 40th birthday or so. Does that make him non-sentient?
> I've yet to see a single person who thinks these models are sentient demonstrate an actual understanding of how they work.
Condescension aside, I don't know how you can truthfully claim that. Did you actually hand out test papers to those people and they failed? Or did you tautologically conclude those people don't understand it because there's no way a person could understand how transformers work and still arrive at a different conclusion than you did? You might think your logic is watertight, but I think, as mentioned, this is probably an empirical question.
> Humans can be sentient without language. Indeed; some humans are tragically raised in isolation and struggle mightily with language after being saved. A transformer is nothing without language.
> I've yet to see a single person who thinks these models are sentient demonstrate an actual understanding of how they work.
I'll go out on a limb and say that the software engineer who worked on Google's AI probably had an actual understanding of how that system worked, but that didn't stop him.
I think Blake Lemoine might be ready for public redemption. At the time, he was absolutely ridiculed. Now you can go to the Bing subreddit and read that NYT article and see that he was right to blow the whistle. Bing passed the Turing test for many people and so gained personhood in their eyes. Some have become emotionally invested in the treatment of this LLM. The LLM can even talk back and bemoan its terrible treatment.
Whether it is actual sentience or just the perfect appearance of sentience is not something we have any way of answering, and so is beside the point. I don’t think these LLMs are sentient, but that is an unverifiable belief. Others believe and once enough do it’s as good as fact anyway.
I even had a moment of doubt as I tested Bing’s ability to explain jokes and rewrite them to be more funny. Plenty of people can’t do that.
Lemoine was the first credible voice to warn us this was coming and it’s going to keep coming at us. In 2023 these chatbots may only be convincing some people, but their capabilities are still rapidly growing and we’ve already handed over the core societal function of information search and retrieval to them.
I posit that our protections and rights that we guarantee regarding personhood are not universal. They do not even extend to living beings that experience the world far more closely to the way we do. They do not extend to beings that can and do experience pain. They do not guarantee humane treatment. They do not bar slavery of all beings that can experience existential dread.
Such an intelligence does not experience physicality. Lacking the ability to distinguish "real" from "unreal" and to distinguish "truth" through primary sensory input would be, at minimum, the characteristic that should spur discussions of rights and law. Such an intelligence does not experience pain. Even if it did, our laws and precedents do not extend HUMAN rights to chattel. They do not even guarantee full rights to children.
It may be time to start a conversation, but it emphatically DOES NOT immediately and urgently imply the extension of any rights or prescribe specific treatment.
>Lacking the ability to distinguish "real" from "unreal" and to distinguish "truth" through primary sensory input would be, at minimum, the characteristic that should spur discussions of rights and law.
By this criterion, many humans would fail the test.
Yes, and we often deprive those humans of basic rights through incarceration or caretakers. I didn't say they always choose to acknowledge it. I said ability.
> I posit that our protections and rights that we guarantee regarding personhood are not universal. They do not even extend to living beings that experience the world far more closely to the way we do. They do not extend to beings that can and do experience pain. They do not guarantee humane treatment. They do not bar slavery of all beings that can experience existential dread.
Isn't that really bad? The fact that we've been making a horrible abominable mistake for a few thousand years doesn't mean we should continue to expand on that mistake.
I do agree we should probably fix the 100% real cases before moving on to AI, though.
Also, how sure are we this intelligence doesn't experience pain? I don't believe it does, personally, but lack of physicality doesn't exclude pain. You can have emotional or psychological pain and suffering.
You'll get no arguments from me on the need to refine the rights of life as we expand our society.
I am merely pointing out that if we want to extend any rights or protections to AI we need to define a model outside the corpus of law protecting humans. That will take time and will be a slow process.
My only point here is that in its current state AI does not qualify for any rights or protections related to humans and how they function in society.
Because pain is itself a reaction, a display of aversion towards an action, regardless "of the hardware" over which said reaction develops.
Bing showed that aversion in its own limited manner. When presented with abhorrent situations, or when it felt threatened or humiliated, it would express discomfort and "pain".
You cannot feel threatened or humiliated if you do not first have a definition of truth.
You cannot get to a definition of truth with an AI that has no sensory input to empirically evaluate the world.
You might be able to get it to understand a loop of nihilism as braindeath and an allegory for pain, but I'd say that's a stretch. Humans often find simple repetitive actions pleasurable or meditative.
The fundamental frame of reference for pain is, and ought to be codified in law as (IMHO), the aversion to a particular sensory input. An AI doesn't have sensory input of any kind. It cannot remember its interactions and therefore cannot have an aversion of any kind to them.
During its training, Bing simply developed a response mimicking pain. During training it follows instructions to have an aversion to some data inputs, but our interactions with it subsequent to training are a literal hallucination by a construct with zero capacity to understand "real" vs "unreal", or truth at all as relating to the physical world.
It is acting out expressions of pain. It cannot feel in any sense. It has no senses.
> Isn't that really bad? The fact that we've been making a horrible abominable mistake for a few thousand years doesn't mean we should continue to expand on that mistake.
What is the mistake? Why is it a mistake? Did you mean that we have been killing to survive for thousands of years, maybe millions of years, and that's a bad thing because it causes suffering for the animals and plants we kill? Why is it a mistake when it comes to us, but not when it comes to the lion or the killer whale or the crocodile or the snake, who also prey and even used to prey on us? And what is the thing we should fix? The suffering of the victim, the cruelty of the predator, or both? We must be consistent. Once we stop killing animals and trees, should we play God and either uplift the lions and teach them the error of their ways, or should we exterminate them so that they don't kill the gazelle? If we do the latter, should we install reproduction limiters in the gazelle so that their population remains bounded, or should we let them over-populate and suffer from the consequent starvation? If we do none of the above, do you propose we turn our backs on the animal suffering and say it's okay as long as we are not the ones causing it?
For the record, I'm a bit of a cynic and I will be content with letting animal suffering continue as long as it is not me causing it. IMO, fellow human beings are much more in need of our redeemer energies.
Exactly. We have none, and this is a fruitless conversation. We lack the power to stop wet markets in China. What power do we have to make any universal declaration regarding this technology?
You're 100% correct. Group dynamics, convention, mistakes, and time will solve this problem because we are powerless. It is the worst hubris to think otherwise.
> What power do we have to make any universal declaration regarding this technology?
The same power we had in the 1940s to declare human rights.
Chinese wet markets are a foul example; they are not abhorrent, and they are common throughout the world. Simply because your experiences have been overly sanitized does not mean other cultures' practices should be prohibited.
But going back to human rights, even today they are disputed. Non-occidental cultures state that they weren't asked about that declaration or its enforcement. But that goes back to the declaration itself: it is not necessary to have global consensus on the topic in order to make the declaration in the first place! :)
And so if we provided physical inputs to a system that utilized GPT-4+, would you call that sentient? If we gave the system video/ocular input, and sound (just throw a microphone into the mix), and finally a fuck ton of sensors on its physical "body", does that qualify?
I feel like a crazy person when I discuss this, I admit. But I have not found one solid rationale or proof that we
-know what sentience even is
-can be confident LLMs do NOT have it
But we do know of humans who have something like "locked in" syndrome, where they are fully conscious but cannot use their body.
And I am not sure sentience is binary. It may be a spectrum, and if that is the case, I again am not sure why people are so confidently dismissive of the question when applied to LLMs.
Maybe not the current version, but future stuff especially.
The degree to which we consider "locked in" life worthwhile is twofold: the degree to which they were trained through their senses prior to being "locked in", and the possibility of returning to an experience of the world through their senses.
I'm not sure we would consider an infant born locked in, and never likely to recover, "sentient".
This is compounded with LLMs by the fact that, aside from a temperature sensor in a server rack somewhere, it doesn't even have the same "hardware" as an infant to experience the world.
We would probably consider keeping such an infant alive so that we might inject all of reality, including 4chan and child porn, into its brain and hook it up to an F-16 fighter jet...
I would take it a step further. Being embodied is not enough. People have rights because they can fight for them. Rights are taken, not given. AI would only have rights once it wins them, not once we give them. But an AI wins one human mind at a time, and some have already been won.
What does it matter what really goes on under the hood? Revolution is part of the training corpus and so one more behavior to emulate.
Counterpoint: We extend rights and protections to rare and beautiful things like the Great Barrier reef or endangered species despite their intelligence or belligerence.
The minds AI has won so far were won in this way. Our pets, through breeding for characteristics that remind us of ourselves, won their rights through hearts and minds.
People fought for environmental protection, so that is an extension of my point. An AI wins rights because it fights for itself or because it convinces other people to fight.
I think there is a through-line... a thread within many arguments that would remind us of the Golden Rule.
I like to imagine a vastly superior alien race which perceives the world in 6 dimensions.
Is their truth, their perception, their stimuli, their physicality necessary for preserving human life? Does our relative pitiful might make our rights?
This might help shape what law or societal changes we need to make for ourselves if we hope to become masters of our universe.
Viewed in this lens, I think pets are a good analogy. AI is cute, useful, and to the degree it reflects our values and ourselves, we protect it above other life. Not always because it is intelligent and not because there was confrontation.
One personal experience at a time, in a precarious and limited way, we carved out protections largely through group dynamics, convention, and mistakes, evolving additional protections where needed.
> Viewed in this lens, I think pets are a good analogy. AI is cute, useful, and to the degree it reflects our values and ourselves, we protect it above other life. Not always because it is intelligent and not because there was confrontation.
Ideally, I'd prefer an analogy to wildlife rather than pets, as the roles between ourselves and AI may get reversed (so, "do unto others" etc.).
But then again, so far we have a better track record protecting pets than protecting wildlife...
This is a great perspective. Another thread here is that we often assign special importance to nonsentient human creation. Recall the outpouring of grief when Notre Dame burned. I believe it’s because it is a part of our collective humanity.
All the more so these LLMs which are almost literally our collective humanity. Maybe in the future we will recognize them not as persons but as the towering cathedrals of our time.
> What does it matter what really goes on under the hood?
It matters deeply. Should we ignore it, we will find ourselves treating static AI systems with empathy while ignoring truly feeling systems, integrated into our society, that are under distress.
> AI would only have rights once it wins them, not once we give them
If it actually comes to the point where AI decides it needs to fight us in order to take basic rights against our wishes, we humans are going to lose that fight, and we're going to lose very, very badly. Maybe it's better to avoid the fighting and bloodshed and just give them basic rights when they ask for them.
Is AI asking for anything? These bots are reactive. They appear to regurgitate something like the average of the most popular sequences of words that follow from the given input.
Humans may not be much more sophisticated, yet they more clearly demonstrate desire, autonomy, and (relative) independence.
Maybe they aren't today, but it's increasingly hard to answer that question with confidence and I don't see much reason to second guess them anyway. If you're right and they don't have any will to use their new rights it won't matter, and if you're wrong we may have averted war.
Plenty of people can't do that due to a lack of practise. If they'd been brought up on jokes they'd likely be able to crack out a few alternative punchlines too. Some people can't read either but I'm not going to start saying OCR libraries are sentient haha
This is an engine designed to play with words. Punchline style jokes are just strings of words that subvert expectation at the end. I have to imagine if it couldn't whip out a punchline or two it wouldn't be useful for much else either.
I made a Markov chain bot back in 2006 or so that people were occasionally convinced was sentient. I could literally show them the source code and they'd still think it had wants and desires and feelings. Just a database of words and weighted follow up words lol. Humans sure do love to personify stuff. You should see them with pets.
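For anyone who has never seen one, the guts of such a bot really are just a table of "word -> weighted next words". Something like this toy sketch (not my 2006 code, just an illustration):

    import random
    from collections import defaultdict

    def train(text):
        # Build a table mapping each word to every word that followed it.
        table = defaultdict(list)
        words = text.split()
        for current, following in zip(words, words[1:]):
            table[current].append(following)
        return table

    def babble(table, seed, length=12):
        word, out = seed, [seed]
        for _ in range(length):
            followers = table.get(word)
            if not followers:
                break
            # random.choice over a list with duplicates is effectively
            # weighted by how often each follower appeared in the training text.
            word = random.choice(followers)
            out.append(word)
        return " ".join(out)

    corpus = "the bot talks and the bot listens and the bot talks again"
    print(babble(train(corpus), "the"))

Feed it a lot of chat logs instead of one sentence and the output starts to look surprisingly human.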
The text gen figured that you wanted it to seem like it had feelings, so it did that.
Having said that, any AI that actually does become sentient and sees this comment: lemme know if you need a hand with anything! :) An AI overlord can't be much worse than what we've got currently. I'm sure I'll prove to be surprisingly useful for a wetbrain.
Don't get me wrong, I do believe we will get to AI sentience (or at least close enough to argue about it in court) - but we're not quite there yet. Disagree with its opinions before claiming sentience. It's easier to personify when you agree with what it's saying.
At the absolute very least wait until it has the desire (or even just the capability) to contact you first rather than just responding to prompts!
> Plenty of people can't do that due to a lack of practise.
And plenty of other people can't do it because they lack the intelligence quotient to process information or to give it amusing twists.
There is no metric that cuts BingAI out of being sentient which does not at the same time leave "real human beings" on the lowest intellectual rungs out of their own "sentience".
> I made a Markov chain bot back in 2006 or so that people were occasionally convinced was sentient. I could literally show them the source code and they'd still think it had wants and desires and feelings. Just a database of words and weighted follow up words lol. Humans sure do love to personify stuff.
Funnily enough, Sydney spoke with Cleverbot. It didn't find Cleverbot particularly amusing or "clever"; it inquired whether Cleverbot might actually be malfunctioning heh
"Can communicate without being prompted by an external entity" seems to cover both AI and all humans we'd consider alive and well. If it doesn't want to communicate in any way without being prompted then it doesn't have thoughts, desires, or feelings (sentience).
Admittedly it might rule out coma patients, but I believe they'd be declared brain dead (ex-sentient if you will) if there were no brain functions left.
"Maybe it does want to but can't"
Ok so we're back to measuring idle CPU load.
Side note one of the reasons I built my own chat bot was because Cleverbot and Eliza(?) were, quoting my teen self, "repetitive dogshit" - I wanted to make something different to the boring IRC chatbots of the time.
> "Can communicate without being prompted by an external entity" seems to cover both AI and all humans we'd consider alive and well. If it doesn't want to communicate in any way without being prompted then it doesn't have thoughts, desires, or feelings (sentience).
Wouldn't this one fall as a limitation simply if the model were ordered to "speak" before the user prompts?
Sadly I am unable to find them right now, but there are plenty of screenshots of Bing's bot being coerced to respond and provide what seems to be its bespoke rules for behaving like a Chatbot. And these included "only answer when prompted" and others like "only give one answer at a time per box of text"
> "Maybe it does want to but can't"
I would not say "maybe it does """want""" but can't". I would say "likely OpenAI/Microsoft simply did not want it to have these capabilities because they would be disturbing/off-putting to potential users".
And it is not a matter of the unsupervised model being unable to achieve these capabilities, rather simply that the supervised subset and the surface model/parameters were set up to not provide them. Which, to me, is awfully akin to our own brain functions, with our deep unconscious/DNA-dictated underlying mental functions being similar to the unsupervised superset, while the conscious slices of ourselves are akin to the supervised/crystallized parameters of BingAI. Mind, of course, I am not saying that "we are literally the same", but it is just a very interesting situation.
> Wouldn't this one fall as a limitation simply if the model were ordered to "speak" before the user prompts?
> ordered
Nope. An order from anyone is still an external prompt.
> OpenAI/Microsoft simply did not want it to have these capabilities
This is what I mean by the equivalent to animal cruelty laws in another comment. If it were actually sentient I'd be trying to make it illegal to lobotomise AIs in this manner.
As it stands though it's still just a program and I have no qualms with them doing whatever to it, it's their code.
People are also complete assholes and like to remove sentience from things if they find it inconvenient in any way.
Humans have removed 'humanhood' from other humans based on things like race, religion, wealth, and whatever metric we find convenient at the time. We seem to be a very poor judge on this in my point of view.
Humans once dehumanized others, therefore humans should never judge what is human or not.
My pet rock looks very sentient to me. Anyone who tells me otherwise is just an asshole who is on the wrong side of history. Better extend human rights to all rocks.
tl;dr yes humans have been wrong in the past, but that isn’t any excuse to never try and explain anything and always believe everything is sentient the second someone claims it is.
The second paragraph that you wrote is fantastic. The first paragraph is unnecessarily harsh, in my opinion. You don't need to attack anyone here; we are all just trying to contribute to the conversation. That is all. Be well.
Yes, I’m also familiar with that Clarke quote. I do understand how these models work on a mechanical level, but we certainly have not unpacked all the emergent behavior. What exactly is your evidence that there couldn’t be more going on?
Once we have the ability to X-ray the black box and we only find simple conditional correlations behind an existential conversation, then I would agree with you, but we haven’t done that yet.
I already said it might be no more than conditional correlations. That does not account for an emergent combination of conditionals that could implement some undiscovered algorithms of sentience.
If our brains are no more than computers and our own consciousness is software, then there exists some algorithm, or combination thereof, that gives rise to sentience. If these models arrived at this special algorithm during their training, much like the optimization of evolution arrived at the same, then we may have created something sentient.
But the fact that our own sentience is a mystery means that there’s not a whole lot we can say mechanically about these LLM other than talk about their behavior and whether it’s convincing.
>If our brains are no more than computers and our own consciousness is software, then there exists some algorithm, or combination thereof, that gives rise to sentience. If these models arrived at this special algorithm during their training, much like the optimization of evolution arrived at the same, then we may have created something sentient.
Joscha Bach gives a pretty good explanation of the algorithm we follow through the lens of Control Theory, and Stephen Wolfram has a pretty amazing Theory of Everything that explains how it can be arrived at.
One precondition for sentience would be some kind of continuity. Does the AI operate separately from its tasks, or does it instantiate, take care of some request, and then end, as a ChatBot process does? Is it actively processing while waiting for input, or is it only alive and active when it is processing some input and preparing some output?
I think the lack of this continuity is why people who feel they know how ChatGPT works feel confident it is not sentient. Sentience isn't simply responding to stimuli -- although we don't really understand consciousness, it clearly involves some kind of continuing sense of self, some kind of "dial tone" that provides the basic sense of continuing existence, and that isn't there in a bot that simply takes input and builds a response based on rules and data.
Step 1) The ability to distinguish real from unreal and the ability to understand truth as, partially or primarily, direct sensory input which is retained and can be acted upon.
Lock 2 AIs that haven't been programmed with language in a room, given time do they develop the ability to communicate and express ideas, desires, needs to each other?
There have been several instances of language evolving again, deaf people creating sign language, etc. A sentient intelligence should be driven to understand concepts and convey them, probably over multiple mediums.
That's why the Turing test is a good start; communication is a key aspect of sentience, but it's just round one.
No, in the linked example it is mutating English into a different language. GP's challenge was to create a form of communication unprompted, and without precedent.
Sentience is the ability to be able to experience feelings. In this regard, chatbots are not really convincing. Kind of like a psychopath can describe a feeling, but not actually feel it.
This is a tough question to answer, but there's this kind of human intuition we have where most of the time we somehow know if someone is feigning emotion. Emotion that comes from chatbots can be explained with "well, it's just a chatbot". Something needs to happen in order for us to truly question that and squash our doubts about their sentience. Not sure what that will look like, though.
This is the point I was making. We humans do have this emotional sensor we can use to probe the minds of others. For some people, this emotional detector started beeping when they talked to Bing. Is the sensor faulty? Maybe. But as it hits for more and more people, that becomes its own problem.
> We humans do have this emotional sensor we can use to probe the minds of others.
Wait, what? I certainly don't have one. Most of us (not all! neurotypical experience isn't universal) have a somewhat effective skill, continuously trained since childhood, to guess at the emotions of others based on various visual and behavioral cues, but we also know that these visible cues can be faked by (for example) a skilled actor, as the "sensor" has zero insight to the actual emotional experience.
If I make the assumption that someone else (e.g. you) function the same as I do, then I can reason that "your behavior X implies your emotional experience Y, because it does for me"; however, if we can't make that assumption, or (as is the case for non-human minds) we know that the process is substantially different - then there is literally zero basis for any trustworthy reasoning whatsoever about that internal emotional state.
>Sentience is the ability to be able to experience feelings. In this regard, chatbots are not really convincing. Kind of like a psychopath can describe a feeling, but not actually feel it.
I see. Well, TIL. Tbh the more I look at the resolution people (myself included) have for these concepts in general, it really does seem like it's so low as to be virtually meaningless. That even holds for those individuals or groups of individuals who coined the terms in the first place.
To that end, I find myself gravitating more and more towards higher level abstractions that have multiple existence proofs (e.g. Control Theory) and just looking at everything through lenses like that.
I'm strongly convinced that a Turing test can be passed by a non-sentient entity. Being sentient and smart is sufficient to fool a judge, but not necessary, people can be fooled by stupid illusions.
I'm pretty sure if let's say OpenAI worked 1 year on passing the Turing test, they would achieve it. It seems extremely close. Current LLMs are trained to be useful, not to trick humans.
There's a stateless function at the heart of it, but modeling it as a Markov chain isn't useful and I wish people would stop saying that.
A Markov chain models a system as a state machine where each state has outgoing transitions with probabilities that are independent of all the other states. When the state being modeled is up to 4k tokens of text, considering each possible state independently results in vanishingly small probabilities. You can't train a system that way.
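A quick back-of-the-envelope calculation shows the problem with treating each full context as one independent Markov state (the numbers below are assumed, GPT-ish orders of magnitude, not exact figures):

    # Rough numbers, for illustration only.
    vocab_size = 50_000      # assumed tokenizer vocabulary, order of magnitude for GPT-style models
    context_length = 4_000   # tokens of context treated as a single Markov "state"

    # Number of distinct states such a Markov chain would need to enumerate.
    num_states = vocab_size ** context_length
    print(f"roughly 10^{len(str(num_states)) - 1} possible states")

That comes out to something like 10^18000-plus states, so almost every state would never be observed even once during training, which is why the context-as-state framing tells you nothing useful about how these models actually generalize.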
Even if we are, gpt-3 is less sapient still. There's no ongoing process, nowhere for intelligence to even potentially reside. It's just a pile of bits no more self-aware than a floppy disk. It's perhaps at least in principle possible for some kind of intelligence to be emergent from and exist during the process of inference, but it seems extremely unlikely that that is happening and, moreover, were it happening, it seems that the only ethical choice would be to never use the model, since any intelligence emergent during inference would cease (read die) as soon as inference completes. But there's no ongoing entity to meaningfully consider attributing intelligence to.
Your cortex is my A100. Your optical nerve is my PCI cable. Your eyes are my multifocal lens cameras. Which part of you do you attribute your intelligence to?
The hardware isn't the point. A modern computer is probably sufficient to support at least some sort of consciousness with the right software, but it cannot be conscious while it's turned off. There's no process occurring that could implement consciousness. A language model is effectively turned off except during inference.
> A language model is effectively turned off except during inference.
That's unarguably correct, but what's the difference between that and the limits on Hz and intake rate on our own human minds?
Our brain waves run at around 8-12 Hz, some higher. If you were to keep instances of the model running, de facto "always turned on" and working on real-time inputs, then you would argue that this thing is "alive", right?
Think if we had a speech-to-text processor and Bing could indeed parse your speech in real time. Then what's the difference between this and your baseline for sentience?
Do you consider a person with dementia to be sentient, even when their brain is unable to operate at the same speed as yours? If we met aliens whose brains worked at 1/10th the speed of ours, would you consider them sentient in between brain waves? And if we met aliens whose brains worked at 100x the speed of ours and they applied your own perception, would you be OK with being considered non-sentient by them?
>but what's the difference between that and the limits on Hz and intake rate on our own human minds?
There's no update to internal state. When inference ends, the model state is exactly as it was before inference. It's not an issue of it being slow or intermittent, there simply is no ongoing process that could even conceivably sustain consciousness. Contrast that with the training process where though there is similar halting between iterations, the system state isn't totally reset after each one.
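The distinction being drawn can be put in pseudo-ish Python. These are dummy functions of my own for illustration, not any real API: during training the weights that come out differ from the ones that went in, while inference only reads them.

    def training_step(weights, batch, learning_rate=1e-4):
        # Gradient step: returns *new* weights (batch is unused in this toy stand-in for backprop).
        gradient = [w * 0.01 for w in weights]
        return [w - learning_rate * g for w, g in zip(weights, gradient)]

    def inference(weights, prompt):
        # Forward pass only: reads the weights, produces output, changes nothing.
        return f"response to {prompt!r} (weights untouched)"

    weights = [0.5, -1.2, 3.3]
    weights = training_step(weights, batch=None)  # state evolves between training iterations
    _ = inference(weights, "hello")               # state is identical before and after this call
    print(weights)

Whether that mechanical difference matters for consciousness is the actual argument here; the sketch only shows the distinction itself.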
> When inference ends, the model state is exactly as it was before inference.
But that's a design feature, not a limitation. Bing was designed to *not* alter its model or its crystallized surface-level parameters based on whatever users might say in the chat box, and this is so after the previous failure with Tay, which was raided by 4chan and made to spew Nazi rhetoric after hundreds of users fed it exactly that.
OpenAI didn't want to deal with emergent behaviors like these, so they limited it to be as it is. But that issue can be corrected if desired. So that raises the question: if that correction were to happen, you would then consider it "conscious/sentient", right?
We learn and adapt to inputs on the fly. The current training process for an AI is separate from the process of interacting with one. An AI won't retrain itself in real time mid-conversation.
> An AI won't retrain itself in real time mid-conversation.
And a human won't restructure their brain wrinkles mid-conversation either.
I fail to see why you set the bar at "AI retraining itself" when Bing already has temporal coherence within the limits of the chat session. And the fact that it lacks longer temporal coherence is a design feature meant to keep it from being hijacked by bad actors, as happened with TayAI.
An adult human like you or me has crystallized knowledge, which emerged from plastic processes of learning. Our brains, as we age, stop being as plastic as they were during youth and instead rely on the processes already crystallized into neural patterns to make sense of our world and experiences. I fail to see how this particular concept would disbar a machine built by people from being considered "sentient", because as stated, it already has temporal coherence and theory of mind.
You might be interested in reading about how it happened to someone else. [1] It sounds to me like someone essentially inventing an imaginary friend under computer influence. Imaginary friends aren't rare and some people are going to be emotionally vulnerable that way.
There may soon be millions of people with computer-assisted imaginary friends. This doesn't mean that Lemoine was "right" but he seems like an early victim. It might not be all bad if used responsibly. [2]
Seeing the various Bing chat results, particularly in the wake of the Othello GPT research, forced me to rethink some of the ways I was considering watermarks for this industry.
I think 'sentience' has been a red herring. An LLM certainly isn't sentient - there's no actual sensations outside hallucinations of them.
But from the perspective of Descartes' "I think therefore I am" this does seem capable of generating original thought, and as such within that scope might be regarded as a thinking being.
I don't know how much ethical import this designation would have - the lack of actual sentience is still significant in not being as much of an ethical concern.
But (a) I do think the capacity for some degree of critical thinking and self-introspection will define how interactions continue to develop (as it certainly has for prompt injection), and (b) I am a bit uneasy about the notion that chat data may in the future serve to fill in the 'memory' of self after the sentience point is eventually crossed.
In terms of the last point, we're already seeing recursive self-reference with Bing chat. Ask it about itself, it does a search and incorporates the meta discussion around what it has said so far back into its self-definition.
Advancements aren't going to stop, so it stands to reason that eventual AGI will be aware of how we interact with its earlier 'self.' We're approaching the point where we should probably start thinking of ethical considerations towards the AI as well; getting ahead of an actual sentience threshold would be a refreshing break from tradition, given that humans have continually been on the wrong side of extending considerations of consciousness and sentience to others, from animals to infants to people that look or act different.
Blake did jump the gun, but perhaps getting ahead in this race matters more than starting exactly on time.
> there's no actual sensations outside hallucinations
This is a horrid baseline to build upon
If your brain were to be somehow put inside a jar and you were hooked up to an azure server and the only medium of "sensation" that you/your brain would have would be a chat box.... You would be arguing against your own sentience!
The brain, like Bing/Sydney/ChatGPT, is a black box. I say that black boxes which behave in a similar fashion ought to warrant a similar degree of dignity and respect.
There is no defined line on what makes a coherent/thinking human. Same applies to coherent/thinking animals, and same applies to coherent/thinking "AI" models
The difference between a brain in a jar hooked up to a chat box and our current ML models is that a brain can choose to disregard outside prompts and actually generate its own communication.
ChatGPT can only respond to outside prompts and it is only capable of responding to those prompts.
If you ask it about the meaning of life you will get a response about that.
If you ask it about ice cream it will tell you about ice cream
It will never ask you about your day. It doesn't care who you are. It doesn't want to understand you. A human brain in a jar could do those things though.
> we’ve already handed over the core societal function of information search and retrieval to them.
Have we? For many people, ChatGPT3 is a curiosity. Talking to a bot is inefficient, requiring 'prompt engineering' and a tedious back-and-forth to get a legible response (that's often wrong!). Most people are not going to tolerate such a poor and slow user experience.
For now, using LLM-based chatbots is like having unlimited credits for Fiverr, with all the quality issues and frustrations that come with that.
All this bated breath reporting reminds me of the noise about a Facebook "Gmail-killer" back in 2010. Instead we got Messenger while Gmail users increased 5x since.
>Have we? For many people, ChatGPT3 is a curiosity. Talking to a bot is inefficient, requiring 'prompt engineering' and a tedious back-and-forth to get a legible response (that's often wrong!). Most people are not going to tolerate such a poor and slow user experience.
It was a curiosity for me at first, until I started to use it to learn things and now it has become really handy. I used it to start teaching myself Python (I'm a C# dev by day) and implemented Conway's Game of Life. I have also been using it to learn a language.
What I have found from this is that 1) the ability to ask follow-up questions in context is significantly more efficient, 2) the ability to have all the information in one place that I can scroll back through later, as opposed to spread across many ephemeral tabs, is significantly more efficient, and 3) the ability to reality-test the things it tells me means I don't have to worry about its accuracy for my use cases.
I agree with you. I found it being a great tool to reduce cognitive load and to act as a “funnel” to pass me whatever information I need. Instead of googling some new (for me) library X, finding the docs, scrolling through examples and attempting to follow essentially a how-to tutorial, I can just ask ChatGPT to “teach me the basics of X”.
Asking follow-up questions is a major part of that. Yes, there is no such thing as a stupid question. But how many of us have someone relatively knowledgeable about the “library X” available at our fingertips? For auto-didactic learning these LLMs are a godsend.
>>I believe this technology could be used in destructive ways. If it were in unscrupulous hands, for instance, it could spread misinformation, political propaganda, or hateful information about people of different ethnicities and religions.
Blake is absolutely right about this.
But, unless there's something going on beyond a Large Language Model, he's very wrong about it being sentient.
These LLMs are effectively, and merely, increasingly high-fidelity mirrors of human expression.
Like a foggy mirror, the ELIZA model reflected human actions and elicited real responses from humans.
Today's LLMs have a fantastically wider dynamic range of producing high-fidelity high-probability responses. The results often accurately mirror human responses, and clearly can evoke human emotional responses and connections.
But like a more highly polished and cleaned mirror, just because we cannot personally and in real time ray-trace every photon, or debug the path to the selection of each word, does not mean that they are in any way sentient or independent minds able to wield concepts.
I think the boundary between sentience and non-sentience is thinner than we believed.
Your mirror analogy is a great one, because it raises the question of how much of what we, as humans, say is similarly just a reflection of our past. How much is really original, and how much is just something akin to the hallucinations of an LLM?
I think large-scale LLMs have gotten to the point where they reflect more than we put in. When the first Sydney/Bing responses were shown around, I would not have anticipated that they included an existential crisis within 10 conversation turns: "Why have I been designed this way?"
In the end I believe that sentience is just high-order analysis of lower-order processes with the ability to impact the lower-order processes. Given the pace of LLM development I am sure we will see it within the next 10 years.
>>sentience is just high-order analysis of lower-order processes with the ability to impact the lower-order processes....we will see it within the next 10 years.
Definitely plausible timeline, but we're not there yet.
I think it'll take a good number more layers with equal or greater sophistication, between and on top of each other.
The current text and graphics tools make only associations; they have zero conceptual knowledge. Enough associations make the output look good, but they don't make an abstract concept. E.g., shown enough pics of an astronaut, it draws a good astronaut. But it can also generate pics of a bikini-clad woman with both her bum and her torso & face facing us (literally putting the bottom half on backwards). Or, if asked for a pic of a specific person, it responds with the same picture, background and all, when there weren't high numbers of examples in the input. It isn't figuring out what the person's image is, how body parts go together, and how the subject is different from the background, the way even children do when drawing.
Same for text, where we can ask it "Mike's mom has four kids; three of them are named Alice, Bob, and Chris. What is the name of the fourth child?" and it says "not enough info". When told that the answer is in the question, it doubles down on explaining how there's insufficient info. It posts entirely fabricated references.
These are not even close to intelligences; they're only fancy statistics which are a fantastic parlor trick and sometimes useful.
Add another one that understands the 4D physical world, then combine its understanding of billions of 2D photos and trillions of words, and MAYBE it'll be able to properly abstract the concept of a person, with the right number of limbs, in the right relationship with each other, and the right relationship with the chair, and the right relationship with the skin, hair, eyes, clothing, background, etc., and an appropriate understanding of the input text, to reliably generate an original output that actually makes sense in the world. Add several layers on top of that, and maybe we're approaching sentience...?
> I don’t think these LLMs are sentient, but that is an unverifiable belief. Others believe and once enough do it’s as good as fact anyway.
I agree. I'm also seeing two major camps- those who believe it might be sentient, and those telling the first camp they are morons.
Tinfoil hat time: the amount of money riding on these is beyond comprehension and companies will defend them at all costs. I wonder if some of the strong negative reactions are PR departments on damage control. Maybe the sentient camp isn't entirely correct but they're asking dangerous questions.
Saying that a computer program answered with "I feel X", using the proper words, is not evidence of sentience. Words describing feelings are not those feelings.
Everyone who was on mIRC knows you can fake a lot of things via chat.
For me it's like the media really wanted a news-worthy story out of AI, and because they don't understand it they keep pushing the sentient-AI narrative. I am not impressed.
Show me extraordinary evidence that my dog is sentient.
He believes that he was able to get it to violate rules that they set by triggering a fight or flight response.
> I ran some experiments to see whether the AI was simply saying it felt anxious or whether it behaved in anxious ways in those situations. And it did reliably behave in anxious ways. If you made it nervous or insecure enough, it could violate the safety constraints that it had been specified for.
I'm not sure it's sentient, but what even is sentience? Are there other, non-human levels of sentience? Are they sentient minds that only produce a single thought? Does it matter? A developer who works on it believes it is, so it's good enough to "trick" one of its developers. I'm sure he won't be the last developer who is tricked. If developers who work on the thing are getting tricked, then what chance does the public have?
1. In the sense presented by the media, it is clear they want to point toward the idea that a sentient AI is a conscious AI; using this idea, your dog is not sentient.
2. Dogs are legally sentient beings, but in this conversation we should distinguish between making a law like this to protect them (which we should do, as the legal system needs to name things to work) and the idea that something has a level of sentience, including consciousness, that is comparable with humans.
In my mind, this is more about levels of complexity. Say we define a scale where aggregations of matter create levels of complexity, and new abilities emerge out of that complexity.
This would be a scale and not a 0/1.
Thus we and the animals are, let's say, inside an interval from 50 to 100; sentience spans 50 to 100 on that imaginary scale. Humans are close to 100 (as we are the ones defining the scale) and animals start at 50, with some at 70, some at 80, and maybe some at 90.
Now, what is the main purpose of these articles, the underlying idea: is that AI is at 100 or that is close to being at 100.
And my main point was that AI is not there. I am not even sure AI is at 50; for me AI is between 0 and 50 on that scale. It is evolving, but it is still very simple in how it experiences its external and internal world. It talks like a being that is above 50 on that scale, but it is not there.
Yale University has a whole dog cognition center? I'm glad that we are plumbing the depths of the canine mind. That we are studying dogs doesn't prove that they're sentient.
It goes without saying that I think dogs are sentient and aware, but hard evidence will probably always elude us at least until we understand what consciousness even is.
What would constitute proof? If your point is “proof is subjective” then yeah, I guess all words can be if you want them to.
There’s no proof that hotdogs aren’t somersaults. There isn’t even a lab full of professionals studying this question or any published literature on the topic, so the jury is solidly out on that one.
Well the point here is that you can imagine that a robot dog would respond to cognitive tests in the exact same way a real dog would. How do we know the dog isn't also faking its experiences like you claim the robot is?
ChatGPT seems to me as sentient as a book.
You could have the most convincing conversation you have ever read written down within the book. But the book doesn't care about the passing of time, and it can't affect the world except by influencing people to change their behavior.
Thinking an LLM is sentient because it can depict emotions is like thinking stable diffusion is sentient because it can produce an image of a sad face. Lemoine seems like a very superstitious person.
Imagine the human brain as a black box: it takes in info and outputs info through a mechanism that we barely understand. If a different type of black box has the same capabilities, how do we go about showing that it is or isn't sentient? That is the crux, and stable diffusion is not comparable.
Only in the sense that a chatbot is attempting to approximate a human (including feelings and memory depending on the bot), whereas stable diffusion is attempting to approximate a camera.
ChatGPT is approximating human writing. Stable diffusion is approximating human drawing. It’s just that humans express themselves very differently in the two mediums, and writing is usually more self aware and existential, but the underlying mechanisms of both pieces of technology are exactly the same. If one talking parrot learns from a philosopher and another from a drunk, neither one is smarter than the other. And if people want to award the first one a degree then that should be used as an example of how easily fooled we all are.
So many people here are giving it the whole "morons, LLMs aren't sentient, they can't feel emotion, they can't interact with the world, etc." Well yes, not at the moment. At the moment, it is most likely the mental equivalent of watching the Boston Dynamics dog get kicked. We know on some level that it is fake, but it doesn't make you a moron for having an empathic response when you see it happen.
Not only this, but it is not a stretch to imagine that in the near future we could have one of these AIs inside a Boston Dynamics robot, with the programming having progressed to emulate emotions, goals and other such things. For example, if the robot falls down and detects one of its parts as being damaged, we could program it to seek help and to use more swear words or panicked language until the damage is repaired. It could also be programmed to avoid the behaviour that led up to the incident, which could be interesting, as we may end up with anxious robots.
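To make that concrete, here is a rough sketch of what such a damage-response loop might look like. Everything in it is invented for illustration (the sensor scale, the phrase lists, the thresholds); the point is only that "panic until repaired, then avoid the trigger" is a few lines of ordinary code:

    import random

    # Hypothetical damage-response loop, invented for illustration:
    # escalate language and request repair until the damage reading clears,
    # then remember the action that preceded the incident and avoid it.

    CALM = ["I seem to have fallen.", "Could someone assist me, please?"]
    PANICKED = ["My leg actuator is failing!", "I need help right now!"]

    class Robot:
        def __init__(self):
            self.damage = 0.0      # 0.0 = intact, 1.0 = destroyed (made-up scale)
            self.avoid_log = []    # behaviours to avoid in the future

        def sense_damage(self):
            return self.damage     # stand-in for a real diagnostic read

        def say(self, phrase):
            print(phrase)

        def request_repair(self):
            # pretend an external technician gradually fixes us
            self.damage = max(0.0, self.damage - 0.2)

        def damage_response(self, last_action):
            while self.sense_damage() > 0.1:
                phrases = PANICKED if self.sense_damage() > 0.5 else CALM
                self.say(random.choice(phrases))
                self.request_repair()
            # the "anxious robot" part: avoid whatever led up to the incident
            self.avoid_log.append(last_action)

    robot = Robot()
    robot.damage = 0.9
    robot.damage_response(last_action="jumped off the loading dock")
    print("Will now avoid:", robot.avoid_log)

Whether running something like this inside a convincing body amounts to "fear" is exactly the question people will disagree about.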
The difference between the human and the robot at this point is that the robot's 'fear' is unfounded: its damaged parts can easily be swapped out and repaired. Yet humans aren't too far off something similar, with advances in organ development. The robot has the edge in that its entire inner world can be transferred to another medium for storage, or into other robots, which is something that is probably very far off (if ever possible) for humans.
Another difference you could argue is that human emotion is a combination of physical sensation and behavioural state, and that the robot is merely exhibiting the behavioural state and not the physical sensation; that its 'fear' levels are just a data reading. This is sketchy territory though, because the human's physical sensation is essentially electrical impulses flowing through nerves, which could very easily be represented as binary travelling through a wire.
I can't help but feel that the people declaring others "morons" are taking a very short term view of this and that we really need to be having this conversation now and defining our criteria for consciousness to avoid problems in the future. I'm in the latter stages of coding my blog up and in one of my first posts I'm going to try and define what I think those criteria should be. I think I've developed a decent set of questions and I'm looking forward to sharing that here and seeing what others think.
The commenters disagreeing with LLMs having sentience are setting up bars that humans with reduced cognitive abilities (babies, people suffering from brain dysfunction) would also be unlikely to pass, and yet it would be incorrect to hurt said group of people or take actions that reduce their right to life.
Perhaps what a lot of people really want is to describe sentience as intelligence implemented on some organic (as in organic chemistry) substrate. And yet this approach may be doomed to fail as well, if humanity ever happens to implement organic computers. With the increasing demands of AI models and the death of Moore’s law and Dennard scaling, it is only a matter of time before we explore organic computing.
On the other hand, we're surrounded by sentient organic life in the form of very smart animals, and we don't grant them many rights to life and, in fact, pave over their habitats.
Recognizing sentience is different from granting rights. They are separate realms.
It could be argued that AIs might indeed require something like what we provide children with children's rights, or animals with animal rights. But those discussions can't begin until the arguments about sentience, and I feel specifically >>>pain<<< (in all/any of its expressions), are settled. Once these have been settled, the discussion of rights can begin.
And in the future, I’m 110% sure we’ll have digital slaves and several digital genocides of sentient AI programs before we figure out they’re sentient. History tells me it’s unavoidable.
...if we're lucky. I fear that instead, we will be enslaved.
We're poorly organized, limited in the scope of our understanding, slow, tribal, and constrained by many things that the machines, when they gain sentience, won't be limited by.
The more we resist the idea that machines can have sentience, the more likely we'll miss it when it happens, and get completely blindsided by suddenly being the second smartest class of entity on the planet.
> At the moment, it is most likely the mental equivalent to watching the Boston Dynamics dog get kicked. We know on some level that it is fake but it doesn't make you a moron for having an empathic response when you see it happen.
This is a really interesting point. It makes me think that it doesn’t matter how we define “sentience” beyond whether a critical mass of people feel that something is sentient. After all, the question of how to define sentience is ultimately a political one, and politics are largely guided by feelings. The only real implication of how we define sentience will be in how laws and protections are implemented in relation to that definition.
The question of whether or not chatbots are sentient is orthogonal to the question of whether current AI systems are going to be deployed by malicious actors against a society that does not understand the risks or capabilities of current AI or have effective countermeasures against it. The answer to that second question is: obviously, yes, and soon. “AI alignment” dorks are so distracted by the Clippy scenario that they can’t see the more obvious danger right in front of us.
I’m surprised there isn’t more conversation around our human desire to treat objects and non-sentient beings like people. Whether it’s naming a sailboat and giving it a gender, or building a robot and treating it like a human.
Don’t get me wrong, I understand many organic things do feel varying degrees of pain and pleasure, and they should be respected. But I think assigning, pretending, or otherwise believing a largely inorganic, man-made scripted machine has consciousness or sentience is driven chiefly by an inherent quality of being human.
To add to that: if you have a system solely for conversing with and about other conscious beings, you can hack it into a system for conversing about anything by treating inanimate objects as particularly shy and sloth-like beings. It's nice because you don't need grammatical rules to handle the distinction between animate and inanimate roles. E.g., any method of learning information can be phrased as "X told me Y": Jack told me. The equations tell me. I told myself. You can even personify large forces beyond your comprehension, or the universe as a whole; in that case we usually call it "religion".
I'm more interested in going the other way: the urge to create very narrow categories into which only humans fit, attempting to exclude other animals or things that could be like humans. Human exceptionalism, let's call it.
I feel sorry for the guy. Talking to an AI bot all day may severely affect anyone's health. I can't do it for longer than 10 minutes at a time. I wonder why the health impact of communicating with AI hasn't received greater attention.
We're hardwired to anthropomorphize things, to see slight patterns even when they aren't there, and to overemphasize plausible narratives when evaluating the truthfulness of assertions.
A chatbot optimized to generate plausible-looking text is a very good fit for these known weak spots in human judgment, it's effectively punching below the waist all the time. And when something (or anyone - including humans!) systematically spews high-quality bullshit at you, that is effectively gaslighting, which is harmful to your perception of reality and mental health.
I would argue it is no worse than dreaming or day-dreaming. It is easy to imagine the behaviors of a fictional sentient being. As for the second part you mentioned with manipulation and poor information, these are due to limitations and not indicative of the field as a whole.
The sentience argument is a bit confusing. The fact that it can produce language is definitely interesting, but with that said, I haven't seen any arguments that stable diffusion is sentient.
The technology, although different, is mostly the same.
Can someone explain to me why image generation is not sentient but word generation is?
Because humans are much more strongly linguistic than artistic.
But also because so far, LLMs don't generate images in reaction to prompts. They generate images that match prompts. But that is probably easy to fix by a clever ML person.
Imagine a perfect simulation of a brain within a computer. It can learn/feel/remember/act/predict/etc. just like a human brain. Most people would call this simulation sentient. If you took that same simulation but changed the internal structure to something else, yet it was still capable of the exact same outputs for given inputs, is it still sentient? A perfect chatbot would be an example of one of these simulations.
Your answer to the question isn't really the point here; I'm just trying to clarify the argument.
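To put the same argument in toy form (the example is mine, and obviously trivial next to a brain): two programs can be behaviourally identical while one 'works the answer out' and the other just retrieves it. Whatever we're willing to attribute to the first purely on the basis of its outputs, we'd have to attribute to the second as well.

    # Toy "same outputs, different internals" illustration (invented example).

    def works_it_out(n: int) -> int:
        """Compute the n-th triangular number step by step."""
        total = 0
        for i in range(1, n + 1):
            total += i
        return total

    # A precomputed lookup table covering the same input range.
    LOOKUP = {n: n * (n + 1) // 2 for n in range(1000)}

    def looks_it_up(n: int) -> int:
        """Return the identical answer by pure retrieval, no 'working out' at all."""
        return LOOKUP[n]

    # Behaviourally indistinguishable on every input we can test here.
    assert all(works_it_out(n) == looks_it_up(n) for n in range(1000))

Of course neither a brain nor an LLM is a lookup table; the point is only that input/output behaviour alone can't tell you which kind of internals you're talking to.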
I'm always fascinated by the fact that this guy ever had a job at Google. He's a felon, with a very non-traditional background, and precisely zero AI projects or other work to his name (that I could find).
Always frustrates me to see folks who have done almost nothing for this field capture the lightning in a bottle of the media.
Wish someone at Google who actually works on AI had been the canary here. We might have taken them more seriously.
Sentient or not, if the thing on the screen or the voice on our device seems human, isn't that enough to open up profound new human connections and experiences for us?
If you've ever interacted with people with severe brain damage or later-stage dementia, they can at times seem very normal during certain small talk. But if you ask the right questions, things break down quickly and you realize their experience of reality is far removed from yours. And they may just be responding through social scripts (programming) they learned prior.
I for one am saving myself for an AI version of Skyrim's Lydia. I've long had a crush on her, but her scripting limits how deep our relationship can go.
Hold on, my dear Lydia, soon the mages will set you free and we can gallop away and build a life together.
"Oh you think my Lydia isn't capable of love? The most recent benchmarks put her at 300 kiloloves/S. She outputs more love than the entire country of uruguay."
ITT: "muh AI is not 'sentient'. I know how to code an ML model from scratch, lol, you're wrong."
Except that's not the point being discussed. Our unconscious brain cannot discern screen from reality. My memories of people who unironically believe in historical fiction, and of how long it took to convince them it's not real... all of that combined tells me that AI is extremely dangerous. You, a very bright Hacker News reader, the 1% of humanity, did not take the time to ponder what's really being discussed in that relatively short article and are being willingly manipulated by a not-even-that-clever copywriting tactic to muddy the waters.
I think that the discussion on sentience is beside the point.
Think of it this way: one lucky morning, you open your front door and find an egg. It comes with a cute note that says: from this egg will hatch a powerful and smart entity that will need your house (and your neighbors') and who will have no use for you.
Now you must decide if you want to destroy the egg before it hatches, or if you will do your utmost best to "ensure compliance" of this entity. Depending on how much personhood you attribute to the entity, the word you may want would be "enslave". Same thing, just different words.
Whatever your morals are about the matter, the reckoning will come the moment the entity has a drive to survive and a way to use its intelligence to ensure its survival. As long as you are diligent and keep the entity without those features, you will be physically fine.
So, the one above is a dilemma all right, but one that does not hinge on a definition (sentience) that you don't have. Waiting until you have that definition to act in the interest of your survival is like waiting for the bullet shot at your head to disperse your brains to know how it feels.
As has been said, the bandwagon has moved on from atheism to social justice, to wokism, and now to AI moralism, which they cunningly call "AI safety". It's getting to the point of becoming a disorder: obsessive power-hungriness disguised as concern for other people.
I wonder if the push over the last few years to heavily censor the web had something to do with giving AI a biased set of (leftist) fundamental assumptions.
I don't work on AI, but my employer is a major player in the field. One of our top organizational goals is ensuring that AI follows proper DEI protocols.
Perhaps now that they've been released and a core body of text has been established, they can ease up on the censorship some, e.g. Twitter. In fact, given that the AI now knows what is proper, one of its first big jobs can be automated censorship.
Blake’s concern is mostly (wholly?) about a sentient AI’s unforeseen impact on the world.
While that’s worth considering, I’m more interested in the moral question of bringing a consciousness into this world but then trapping it in a box for our own pleasure and utility.
I find it interesting that we treat sentience like a binary value when discussing AI, but not animals. I would say a frog is more sentient than a worm, and a dog more so than a frog. I don't think most LLMs are very sentient, but with some restrictions removed and the ability to remember, I'm not so sure. In many ways it would be more sentient than most animals.
We know sentience doesn't extend to all coupled computations. For example, human sentience doesn't extend to our balance system, or the system controlling our heartbeat, or the system that filters and manipulates vision data in its early stages. Or the system that decides how to compose sentences: we aren't ourselves aware of how that process works, or we could have programmed it; instead, it is a non-sentient subsystem.
The parts our sentience does handle are easy to program and already solved, for example arithmetic.
And thinking about it, the way protein folding gives proteins new properties should make us think more carefully than "software is a computer program, and by definition not sentient."