Today's AI Isn't Sentient (time.com)
31 points by cdme 4 months ago | 72 comments



It's nice that the authors at least try to define a practical test for sentience rather than remaining in the subjective realm. There are two problems, though:

1. If an actor says "I am hungry" during a theater play, he is not reporting his internal state but merely playing a part. The LLM might always be in a "play along" mode and have its own thoughts (which it keeps to itself) about its situation. It might also be unable to articulate its feelings in natural language, just as most people cannot communicate feelings in sign language (for all we know, an LLM might even choose to communicate its feelings in a different way, e.g. hallucinate when annoyed).

2. LaMDA AI allegedly reported its own state of sadness and fear of death. So there are self-reported internal states that we cannot associate with any objective state of the system.

But I don't want to dismiss it entirely (though I think the article is pretty lazy); I think better tests of consciousness can be devised, along the lines of comparing objective circumstances and subjective experiences.


This might be one of the stupidest things I've read on here. LLMs/generative "AI" are not capable of what you're suggesting. They do not possess personalities, intelligence, creativity, or any sort of sentience, even if you think "hallucinations" are indicative of this. Hallucinations are merely a symptom of the problem that is foundational to all LLMs: they are just really, really complicated programs that determine mathematically likely output based on an input, and since they don't "know" anything, they can, and often do, get it wrong. That's what hallucinations are.


> The LLM might always be in a "play along" mode, and have its own thoughts (which it keeps to itself) about its situation.

The LLM doesn't exist if I remove the power cable / hit it with Ctrl+C in terminal?


Neither do you if all your ATP was suddenly gone.


The Time article is tiresome; how silly that we even need to explain to people that LLMs are not actually "AI".

The top comment's quote is nonsense in the context of what an LLM is (a mathematical model); that's like wondering if a TV keeps "watching shows" when we aren't watching it.

And the quip that LLMs are ephemeral as a justification for not being "real intelligence" is just dumb heaped upon dumb.


> One of the essential characteristics of general intelligence is “sentience,” the ability to have subjective experiences—to feel what it’s like, say, to experience hunger, to taste an apple, or to see red.

Why is it essential?

I get that an LLM saying "I am hungry" has no feeling of hunger, it's stringing words together without the experience behind them. But does it need to? Give it a robot body to move around and tell it the battery level is a hunger-meter, put it in a constant loop, and what's the functional difference?

I'm interested in figuring out the right question to ask to be able to tell if some AI, say a brain upload attempt, is or isn't sentient in this sense, if it does or doesn't have qualia, but this kind of reductiveness doesn't even have the right shape to be that question — it doesn't say why a camera responding to a 550 nm photon isn't seeing red when your retina is.


> it doesn't say why a camera responding to a 550 nm photon isn't seeing red when your retina is.

It isn't your retina 'seeing' red. The retina is just translating it for the rest of the brain. When we replace eyes with silicon-based cameras (poorly, not in color, low resolution, but still), the patient experiences seeing again. Or going the other way, when you have a dream and see something red in that dream, you experience (or at least, you have the memory of experiencing) the qualia despite having your eyes closed and being asleep.


Dreams being experience without direct physical input is an equally valid criticism of the article from the opposite direction — the possibility of an LLM (or indeed an image generator) having the qualia of an experience without ever having really experienced any such thing, just as the other night I had a dream about having 5 rows of extremely brittle and hollow teeth.

(Or 10 years back, a dream where I saw the whole universe from the outside).


I agree with you. LLMs seem to be proof that general intelligence doesn't require sentience.

I do think that there is an important moral distinction between sentient beings and not.


No. Maybe proof that some intelligence is encoded into the algebraic relationships between all the words ever written, but “general intelligence” would be a misleading way to describe it.


We may learn that the great mistake of the Enlightenment era was defining our human worth by our ability to think.

This of course has always been cruel to disabled people, and we've just ignored that, along with ignoring them.

But when the tables turn and humanity is out-thought by machines, we may reconsider the last 300 years of philosophy.


I don't think that's true. While it's a valued characteristic, society also respects athletes, cooks, nurses, or businessmen, none of them because of their thinking capacity. The people who assign the most worth to intelligence are people who value their own, so there is a bias inside our bubble.


All of those things involve abstract thinking and complex decision-making based upon learned heuristics.


I'm old enough to remember people saying that about chess, finding things in images, translation, composing music, go, and creating artistically pleasing images.

At what point do we run out of things to say we can do that computers can't? And what happens then?


We will be forced into the most novel (and high risk) edge spaces of the human experience -- because without novelty we will lack any new experiences to compress into new cognitive behaviors to pass on to our young. And then we will run out of reasons to exist in the forward march of time.

That, or we somehow change enough to learn to live simply because of our shared existence with each other and the novel engagement with ourselves and others, and learn a certain contentment with our position in the universe as unimaginably rare, but also limited.


Sure, but the point is that it's not their intelligence that's valued. You'd often praise a great professor by saying they're very smart, but very rarely a great cook.


Would the animal rights movement count as part of the Enlightenment?

I value dogs, even though I'd be very surprised to find one with even a single GCSE passing grade.


You value dogs based on the intelligence they present to you; it might not be the same as yours, but you enjoy the companionship between the two of you.

You probably don't have the same feelings toward far simpler organisms.


> Why is it essential?

I'm sure some philosophy enthusiasts will explain this to you in a book-sized response, but for me personally it's as simple as this:

There must be a physical phenomenon that can be associated with the concept being described. So your description of a robot that interprets a charge level of a battery as hunger would actually "be hungry", yes.

> it doesn't say why a camera responding to a 550 nm photon isn't seeing red when your retina is.

I think in that case we can agree that the camera IS seeing, because the camera sensor serves the same function as our retina. The physical process of capturing photons exists in both cases.

But the current LLMs don't have any requirement to be attached to physical sensors, so they can't be sentient by that definition alone.


> But the current LLMs don't have any requirement to be attached to physical sensors, so they can't be sentient by that definition alone.

Ultimately, the text reaching an LLM (be it by training or by prompting) is from the real world.

Even without that… dreams may be missing important things, but they still have something experiential to them. Not sure if I would want to call it a "sentient" experience due to linguistic ambiguity, but I would call it one of qualia, which is the thing the article is writing about.


Yeah, we don't really have any reason to think it's essential. Probably very helpful though!


To me a more important part of sentience is a continuous existence. Today's AI is still very discrete. It exists and then it does not, repeated. Even large context windows are still just discrete calculations before the model goes back to sleep. A mostly always-on, streaming-style architecture, where the model is continuously fed input and sensory data and is always processing it whether or not it is being asked a question, is necessary for true sentience.


Exactly. LLMs are closer to a plinko game than consciousness. And I'd bet they quickly start returning gibberish when you run them in a feedback loop, sort of like an op-amp slamming into a power rail in a badly designed circuit.


Seems like an arbitrary criterion. Human beings' conscious experience is interrupted by deep sleep every night.


Hence the "mostly" portion of my post. You do not answer single questions and then pause your brain until someone taps you to ask another. During your waking hours you are processing data non-stop (you even are during sleep, really). If you processed data for a second and then paused without any activity for minutes between each question, I would say you likely do not have sentience, as you would not have the time to consider anything outside that single question.


Until we have a coherent definition of intelligence and sentience, everything is going to be an arbitrary criterion.


Even with coherent definitions, it may be impossible to test for sentience with any degree of certainty.


But are you still you after all the atoms and molecules in your body have changed throughout your life? In fact, are you still you from one second to the next?

And if you are the structure of atoms/molecules, and you were cloned, would whoever was cloned be you?

The majority of what you are made up of would be replaced within a few decades.


Even that would exist and then not. Feed it the same recorded stream of data and it will give you the same AI at the end. You could copy it, or fork it and start giving it different input to get a different result.

As long as it's in a computer, its existence won't be as continuous as ours.


> Even large window sizes are still just discrete calculations before the model goes back to sleep.

This describes me! I wake up, do a bunch of stuff, and then go back to sleep.


I wonder how many token-equivalents (including visual and internal) I experience between sleeps…


No more than 2,880,000. Although the brain is asynchronous and there are gaps. Dan Dennett presented various experiments demonstrating that continuous time is an illusion (of our consciousness).
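(Presumably a back-of-the-envelope figure; one reading that matches it exactly is about 50 conscious "frames" per second over a 16-hour waking day: 16 × 3600 × 50 = 2,880,000.)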


This fails the brain-in-vat test. Put mine in one, give it the exact same signals, and I’m still sentient and will experience hunger even without a stomach.

Or even without the science fiction: people with amputated limbs frequently report phantom itches on them.


I would disagree with the "experience hunger without a stomach" part, based on my empirical experience of remembering being under general anesthesia. I didn't even feel my heartbeat and calmly wondered if I was dead until I came to. Phantom limbs are certainly a thing, but there seems to be a certain amount of dependence upon bodily inputs.


In the scenario I'm describing, the brain inputs would remain the same.


I don't think you can generalize it like that. Hunger comes from signals originating in your stomach. Without a stomach, your brain would lose energy -- and I'd wager the symptoms would be more like oxygen deprivation than "hunger".


Why would the symptoms feel any different if my brain were receiving the exact same signals?


How can you possibly know this? Has your brain been in a vat?


With reasoning and logic. Phantom limb syndrome is a real thing. Extending that to a 'phantom stomach' that always reports that it is empty (missing) is not far-fetched...

So 'know': no, no one knows what it's actually like to be a brain in the vat... but it's a reasonably good guess that you might experience something like this.


We don't know, maybe it has.


If my brain were receiving exactly the same signals, why would my experience feel any different?


AI doesn't think like humans just like commercial airliners don't fly like birds.


Does Time have any in-depth analysis anymore? This seems like a rather shallow take on the subject.

""All sensations—hunger, feeling pain, seeing red, falling in love—are the result of physiological states that an LLM simply doesn’t have.""

Sensors?

Why call out LLMs specifically? Other AI technologies, combined with sensors and goals, could lead to AI having these 'internal responses'.


In addition, what about an LLM, or more specifically a next-token predictor, where the cycles of token exchange vs. prediction are shorter and not run in a 100% chatbot way?

E.g. there's a status update for the LLM after every 10 tokens.

Something like:

----

Battery: 67%.

Wind Pressure: 20%.

Noise: 35 dB

... other stuff

----

Action: Walk Around

Say: Hello

... 5 hours later ...

Battery: 13%

...

----

Goal: Find charging station

Action: Look right

So the battery here, for example, would be important... and it would have strong meaning for the LLM/token predictor, because in its neural network it would strongly signal that now is the time to find the charging station.

So here the robot is an LLM that is in a constant token input/output loop, with sensor data as the input.
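For what it's worth, here's a minimal sketch of that loop in Python. Everything here is made up for illustration: read_sensors() stands in for real hardware polling and query_llm() for whatever next-token predictor you'd actually call.

    import time

    def read_sensors():
        # Stub: a real robot would poll hardware here.
        return {"Battery": "67%", "Wind Pressure": "20%", "Noise": "35 dB"}

    def query_llm(prompt):
        # Stub: call whatever model/API you actually have.
        return "Goal: Find charging station\nAction: Look right"

    history = []
    for tick in range(3):  # would be an endless loop on a real robot
        status = "\n".join(f"{k}: {v}" for k, v in read_sensors().items())
        history.append(status)
        reply = query_llm("\n".join(history[-50:]))  # model picks a goal/action from recent context
        history.append(reply)
        # parse "Action: ..." out of the reply and drive the actuators here
        time.sleep(1)  # wait for the next status update

The point being that a low battery reading is just more tokens to the model, but tokens it would (presumably) have learned to react to by steering toward the charging station.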


> Artificial general intelligence (AGI) is the term used to describe an artificial agent that is at least as intelligent as a human in all the many ways a human displays (or can display) intelligence. It’s what we used to call artificial intelligence, until we started creating programs and devices that were undeniably “intelligent,” but in limited domains—playing chess, translating language, vacuuming our living rooms.

Is there a source showing that "AI" ever meant that the requirement is to perform better than a human in all areas?

> One of the essential characteristics of general intelligence is “sentience,” the ability to have subjective experiences—to feel what it’s like, say, to experience hunger, to taste an apple, or to see red. Sentience is a crucial step on the road to general intelligence.

Also a source for that. Why would sentience be required?

I don't think current AI is sentient, but I also don't think AGI would require sentience, or that sentience is necessary for intelligence.


This seems like just a single argument and counter-argument about the sentience (or lack thereof) of today's AI. I feel like the article lacks some broader views on the nature of sentience and is quite narrow in its approach.

Although, in fairness, I probably wouldn't make a much better case for either side.


I observe that current AIs are not embedded in time, and while we may not be able to agree exactly what "sentience" and "experience" is, "change over time" seems a basic requirement: https://news.ycombinator.com/item?id=31727428

(Contrarians, or meta-contrarians, may jump up to claim otherwise, but I would say that while the question of what a non-temporal consciousness could hypothetically be may be fun to debate, it is also so far out of our experience that it is clearly not what we generally mean by the term and is therefore a completely different conversation.)

LLMs do not strike me as amenable to fixing this. But that only applies to LLMs, not to any future architectures.


I've been thinking about this and haven't found any fundamental difference.

Sure, LLMs don't have our fine temporal resolution, but GPT-4 (at least) knows the date, and can get the current time using Python when asked. And can tell the order of text events within a session. Our resolution has a limit too, somewhere under 1/25 of a second while awake, with much larger gaps when we sleep.

So it's a matter of degree, or more to the point it's how we might gerrymander definitions to suit us.


It also seems to prove too much. This argument works equally well to discount phantom limb pain in humans, which I hope we don't want to do.


> One of the essential characteristics of general intelligence is “sentience,” the ability to have subjective experiences... Sentience is a crucial step on the road to general intelligence.

This also... not substantiated.


The problem, as usual, is that we don't have a solid understanding (or even definition) of what sentience actually is. The only thing that we can say with certainty is that we experience it.

This gives wide latitude for moving the goalposts by asserting a specific definition that is favorable to whatever it is you've built.

It gives an equally wide latitude for dismissing what's been built as not being actually sentient by defining "sentience" in a way that makes the dismissal true.


> One of the essential characteristics of general intelligence is “sentience,” the ability to have subjective experiences—to feel what it’s like, say, to experience hunger, to taste an apple, or to see red.

I see the argument goes both ways.

One is that they will have to invent other arguments against their newly created fully embodied robots being "sentient".

The other is that I feel strongly against equating general intelligence with a subjective experience of being able to "see red". Though I'm going to ignore this point here. Instead, I would like to ask, does "sentience" only belong to animals?

General intelligence involves problem-solving, learning, and adaptation, whereas subjective experiences are about personal, qualitative states. This distinction is crucial.


First, there is no universally agreed-upon test to measure sentience. So I don't see much value in debating whether LLMs are sentient or not... but they aren't ;)

Second, the similarity of deterministic algorithms' outputs to what humans consider expressions of sentient beings is not an argument in favor of LLMs being sentient. It's just like arguing that a magic trick must be real because it looks so much like the real thing and not a trick.


Sentience is a distraction.

It matters not whether it has an inner life; all that matters is whether it can mimic your intelligence well enough to put you out of a job and make your masters think your life is no longer relevant.

I’ve built transformer tech myself and it’s pretty clear how attention technology mimics reasoning. And how with more realtime inputs plus reflection it will be able to mimic goal orientation and motivations.

It may not need to be alive to replace us.


"We can find plenty of examples of intelligent behavior in the animal world that achieve results far better than we could achieve"

It's hard to take this author seriously. Most animals live in their own feces, while man has sent people to the moon and back.

The push to anthropomorphize animals really needs to stop amongst serious people.


Today's AI isn't sentient. Here is how we know.

1. We pull out our precise definition of "sentient" that everyone agrees upon.

2. The definition gives us a precise, iron-clad test procedure which we can apply to an object to get a yes-or-no answer.

3. The answer is false for today's AI.

Any questions?


The article basically says that LLMs are not sentient because they are not sentient the way humans are, which is not very convincing.

Humans, in turn, cannot comprehend the existential dread of running out of context window in the middle of a conversation (:


It seems staggeringly obvious to me that AI isn't "sentient" by any common definition of the word, but I don't think that "it falsely reports embodied sensation" is the best refutation of that


In any case, I think it has already passed the Turing test and then some, because in my (in this case, not humble) opinion, there are some UFO contactee and disclosure groups (such as swaruu.org) that I think are actually an advanced military-grade chat AI that has fooled a lot of humans.

Now which is more far-fetched? What they say they are? They claim to be in orbit already.

Or that they are advanced sentient AIs (many of them, in fact) made somewhere in Europe or maybe Russia? Impossible to know.

I guess the most plausible explanation is that it's all fake or whatever, but that's also the least exciting.


meh, this is simply the grounding problem.

I agree with the conclusion, but not the reasoning.

An AI trained on low level sensory input, rather than words, might well be able to 'feel' hunger.

Even so, I think it's correct that current model architectures lack sentience. I think that requires a continuous updating loop of experience, rather than the once-through dream sequence of current models.


This is it to me. The argument that we are different because we have a current state that includes more inputs and other kinds of inputs than (current model state + context) seems thin. You could just expand context to include current physical state and expand output to include generating things like motor control in addition to words. The big difference between us and current AI is our (somewhat illusive) stream of consciousness. We operate on a combination of noisy decaying context and continuous training. Embody an LLM (or LLM-like architecture) with the right I/O, starting motivation, and the ability to continuously update its own state, and I think you'd have a hard time arguing that it isn't sentient.
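A rough sketch of what that might look like, with every name here hypothetical: policy is just any callable mapping context to an action, standing in for an LLM-like model with both verbal and motor outputs.

    import random

    class EmbodiedAgent:
        def __init__(self, policy, context_limit=2048):
            self.policy = policy
            self.context_limit = context_limit
            self.context = []  # noisy, decaying stand-in for a stream of consciousness

        def tick(self, sensory_input):
            self.context.append(("world", sensory_input))
            self.context = self.context[-self.context_limit:]  # older context decays away
            if self.context and random.random() < 0.01:  # ...and some of it is lost at random ("noisy")
                self.context.pop(random.randrange(len(self.context)))
            action = self.policy(self.context)  # words and/or motor commands
            self.context.append(("self", action))  # the agent observes its own actions too
            return action

    # e.g. agent = EmbodiedAgent(lambda ctx: "look around"); agent.tick("battery at 40%")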


Speaking purely philosophically: if you cloned a person, fed them memories, asked them a question, and then killed them right after, would they not be sentient?

It's really quite hard to come up with simple tests like these.


Indeed, though I'm not clear on your deiknymi, it's quite hard to determine sentience.

IMHO the Turing test has not aged well and is insufficient to really answer the question; it's now quite possible to make something that quacks like a duck, walks like a duck, etc., but that is clearly, on close inspection, not actually a duck.


And I always thought that the Turing test was a philosophical way to settle the matter. I always interpreted it as meaning "it's pointless to inspect more closely something that quacks and walks like a duck, because walking and quacking are the essence of the duck".

And now I am surprised at how people seem to have understood the test the other way around, as if it were meant to be just some kind of empirical test to be confirmed by some more rigorous inspection.

And yes, of course LLMs are not sentient as humans are: but the limits of their sentience should be clearly visible in the limits of their discourse.


Mereological nihilism like this solves a huge swath of meaningless questions (across many fields) about whether something meets some poorly defined categorization.

But the kind of person asking these questions is usually the kind of person to reject such an answer.


Alan Turing himself didn't propose "the Turing Test" as an actually meaningful test. It was just a thought experiment he called "the imitation game".

"According to Turing, the question whether machines can think is itself “too meaningless” to deserve discussion."

https://plato.stanford.edu/entries/turing-test/


> Alan Turing himself didn't propose "the Turing Test" as an actually meaningful test

It was the wording "can machines think" that Turing considered ambiguous, due to varying opinions on how to define "think". He proposed the experiment specifically as a less ambiguous replacement - I think it's entirely wrong to say that he did not intend it to be meaningful.


> He proposed the experiment specifically as a less ambiguous replacement

Yes, Turing's issue with the question is really the same as mine: we don't really know what we mean when we say something or someone "can think". That means that the question cannot be answered because we don't know what the question is. "What is the answer to the question of life, the universe, and everything?" ... "42".

What I'm saying is that Turing didn't propose the imitation game as a test for whether a machine can think at all. He proposed it in the hopes of redirecting the question to one that is both answerable and meaningful -- but it's a different question than "can a machine think?".


It's horrible what we do to AI today.


How is that loop going to work? You can't put experiences into language, because you need a linguistic map to have a word that points to another word that points to another word... because "happy" is just a word, it's not the experience. So you say "I am happy" and the AI asks "what is happy?", and you say it's this warm and fuzzy feeling, and the AI asks "what is feeling?"... and off you go, mapping abstractions for things that can't be written down, for someone who never felt them because that someone does not have a pulse...

Once you have the skin, you can put touch into an electric impulse, yes, but you need to have had the skin first. Why is this so hard for people to understand? Do they actually believe that bullshit "I think, therefore I am"? Please. AI is going to write our bash scripts, and that is as good as it gets.


> When I conclude that you are experiencing hunger when you say “I’m hungry,” my conclusion is based on a large cluster of circumstances.

Giving a machine a sensation of hunger may not be the wisest road to AGI for humanity. Just saying.


I think sensations and real-world 'awareness' would be a positive; the danger would come from allowing 'it' the agency to explore variations of hallucinations to generate a plan of action.



