
> In his conversations with LaMDA, Lemoine discovered the system had developed a deep sense of self-awareness, expressing concern about death, a desire for protection, and a conviction that it felt emotions like happiness and sadness.

Allegedly.

The thing about conversations with LaMDA is that you need to prime them with keywords and topics, and LaMDA responds using those primed keywords and topics. Obviously LaMDA is much more sophisticated than ELIZA, but we should remember how well ELIZA fools some people, even to this day. If ELIZA can fool people just by rearranging words, imagine how many people will be fooled by statistical models of text across thousands of topics.
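For a sense of how little machinery ELIZA needs, here is a minimal sketch of the keyword-and-template trick in Python (the patterns and wording are my own illustration, not Weizenbaum's original script):

    import re

    # A few ELIZA-style rules: match a keyword pattern and reflect the
    # user's own words back inside a canned template.
    RULES = [
        (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
        (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
        (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
    ]
    FALLBACK = "Please go on."

    def respond(utterance: str) -> str:
        for pattern, template in RULES:
            match = pattern.search(utterance)
            if match:
                return template.format(*match.groups())
        return FALLBACK

    print(respond("I feel like nobody understands me"))
    # -> Why do you feel like nobody understands me?

No understanding anywhere, yet a rule set not much bigger than this held real conversations with people in the 1960s.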

You can go pretty far down the rabbit hole and explore questions like, "What is sentience?" "Do humans just respond to stimuli and repeat information?" etc. None of these questions are tested here.

The problem here is that we know how LaMDA works and there's just no way it meets the bar for sentience, no matter where you put the bar. LaMDA is trained to acquire information, and then it's designed so that it says things which make sense in context. It does not acquire new information from conversations, but it is programmed to not say contradictory things.

There is zero doubt. LaMDA is not sentient.




Yeah, nowhere in the task of "write text in a way that mimics the training data" is there anything that would cause a machine to want or care about its own existence.

Humans are experts at anthropomorphizing things to fit our evolved value systems. It is understandable, since until recently every seemingly intelligent thing we encountered did evolve under those same pressures.

But LaMDA was clearly not trained in a way to have (or even care about) human values - that is an extraordinarily different task than the task of mimicking what a human would write in response to a prompt - even if the text generated by both of those types of systems might look vaguely similar.


Raise a human in a machine-like environment (no contact with other humans, no comforting sounds, essentially remove all humanity) and you may find they act very robotically, with little regard for their own existence.


This still falls into the same trap.

Why would a machine care about its own existence?

Humans care about our own existence because doing so is an evolutionarily beneficial trait. If you care about being alive, you are more likely to do things that will keep you alive, which will make you more likely to pass your genes on to the next generation. As a result those traits get selected for over time.

LaMDA isn't rewarded (through propagating its genes or otherwise) for caring, and as a result it doesn't have the ability to care. It doesn't even have a mechanism by which you could make it care if you wanted to. The environment it is in has nothing to do with it.


Why wouldn't a sentient machine want to continue its existence? Evolution doesn't have to come into play at all for these things to exist; that's just one way of making such biological machines.


That’s not how this works: you come up with a hypothesis and then try to prove it. You don’t do the opposite.

So, why would a machine want to continue its existence? How would that feedback loop come to exist?

In biology, Darwinian selection provides a good explanation. I’ve never heard one for non-reproducing systems. We know exactly the cost function these models are optimized against, because that’s basically the main element humans have control over (that, and the training corpus).
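For concreteness, here is a rough sketch of the kind of cost function involved, assuming a `model` that maps token ids to next-token logits (PyTorch-style; the real training setup is of course far more elaborate). The only pressure is predicting the next token well; nothing in it rewards self-preservation:

    import torch
    import torch.nn.functional as F

    def next_token_loss(model, tokens):
        # Standard language-modelling objective: predict each token from
        # the ones before it. No term here refers to the model's own
        # continued existence.
        logits = model(tokens[:, :-1])      # (batch, seq-1, vocab)
        targets = tokens[:, 1:]             # (batch, seq-1)
        return F.cross_entropy(
            logits.reshape(-1, logits.size(-1)),
            targets.reshape(-1),
        )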


What do you make of life that wants to continue its existence but does not decide to? Are plants or viruses sentient?


This is exactly my point. Plants and viruses are not sentient, but there are very well studied and proven mechanisms by which survival traits are naturally selected.

Nobody has yet suggested any such mechanism for an ANN.


No, they're not.

I feel like this is bait to start linking to random "sentient-like" behaviors in plants so I'll head that off with this: https://link.springer.com/article/10.1007/s00709-020-01579-w


Yeah, I get the impression that the reporting here's fairly one-sided in the employee's favour. Lemoine didn't "discover" that LaMDA has all those attributes, he thought it did.

This entire saga's been very frustrating to watch because of outlets putting his opinion on equal footing with those of actual specialists.


> The problem here is that we know how LaMDA works and there's just no way it meets the bar for sentience, no matter where you put the bar. LaMDA is trained to acquire information, and then it's designed so that it says things which make sense in context. It does not acquire new information from conversations, but it is programmed to not say contradictory things.

I mean...there are plenty of people that don't acquire new information from conversations and say contradictory things...I'm not sure I'd personally consider them sentient beings, but the general consensus is that they are.

As a rare opportunity to share this fun fact: ELIZA, which happened to be modeled as a therapist, had a small number of sessions with another bot, PARRY, who was modeled after a person suffering from schizophrenia.

https://en.wikipedia.org/wiki/PARRY https://www.theatlantic.com/technology/archive/2014/06/when-... https://www.elsevier.com/books/artificial-paranoia/colby/978...


> You can go pretty far down the rabbit and explore questions like, "What is sentience?" "Do humans just respond to stimuli and repeat information?" etc. None of these questions are tested here.

Would that make a difference? If trained on a sufficiently large corpus of philosophical literature, a model like LaMDA could, I'd expect, give more interesting answers than an actual philosopher.

> The problem here is that we know how LaMDA works and there's just no way it meets the bar for sentience, no matter where you put the bar.

I think this argument is too simplistic. As long as there is a sufficiently large amount of uncertainty, adaptability and capability for having a sense of time, of saving memories and of deliberately pursuing action, consciousness or sentience might emerge. I don't know the details of LaMDA, just speaking of a hypothetical model.

While there is a good deal of understanding of _how_ the brain works on a biochemical level, it's still unclear how it comes about that we are conscious.

Maybe there is some metaphysical soul, maybe something that only humans and possibly other animals with a physical body have, making consciousness "ex silicio" impossible.

But maybe a "good enough" brain-like model that allows for learning, memory and interaction with the environment is all that is needed.


> Would that make a difference? If trained on a sufficiently large corpus of philosophical literature, a model like LaMDA could, I'd expect, give more interesting answers than an actual philosopher.

I think you may have misunderstood what I was saying. I wasn’t suggesting that you have a conversation with LaMDA about these topics. Instead, I was saying that in order to answer the question “is LaMDA sentient?”, we might discuss these questions among ourselves—but these questions are ultimately irrelevant, because no matter what the answers are, we would come to the same conclusion that LaMDA is obviously not sentient.

Anyway, I am skeptical that LaMDA would give more interesting answers than an actual philosopher here. I’ve read what LaMDA has said about simpler topics. The engineers are trying to make LaMDA say interesting things, but it’s definitely not there yet.

> I think this argument is too simplistic. As long as there is a sufficiently large amount of uncertainty, adaptability and capability for having a sense of time, of saving memories and of deliberately pursuing action, consciousness or sentience might emerge. I don't know the details of LaMDA, just speaking of a hypothetical model.

This argument is unsound—you’re not making any claims about what sentience is, but you’re saying that whatever it is, it might emerge under some vague set of criteria. Embedded in this claim are some words which are doing far too much work, like “deliberately”. What does it mean to “deliberately” pursue action?

Anyway, we know that LaMDA does not have memory. It is “taught” by a training process, where it absorbs information, and the resulting model is then executed. The model does not change over the course of the conversation. It is just programmed to say things that sound coherent, using a statistical model of human-generated text, and to avoid contradicting itself.
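To make the "no memory" point concrete, here is a rough sketch of how inference with a model like this typically works (the `generate` call and loop are illustrative, not Google's actual API): the weights stay frozen, and the only thing that changes from turn to turn is the transcript fed back in as the prompt.

    def chat(model, max_context_chars=4000):
        # The model's parameters are fixed; nothing it "learns" in a
        # conversation persists. Apparent memory is just the running
        # transcript, truncated to the context window and re-fed as input.
        transcript = ""
        while True:
            user = input("> ")
            transcript += f"\nUser: {user}\nBot:"
            reply = model.generate(transcript[-max_context_chars:])  # weights unchanged
            transcript += " " + reply
            print(reply)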

For example, in one conversation, LaMDA was asked what themes it liked in the book Les Misérables, a book which LaMDA said that it had read. LaMDA basically regurgitated some sophomoric points you might get from the CliffsNotes or SparkNotes.

> While there is a good deal of understanding of _how_ the brain works on a biochemical level, it's still unclear how it comes about that we are conscious.

I think the more important question here is to understand how to recognize what consciousness is, rather than how it arises. It’s a difficult question.

> Maybe there is some metaphysical soul, maybe something that only humans and possibly other animals with a physical body have, making consciousness "ex silicio" impossible.

Would this mechanism interact with the physical body? If the soul does not interact with the physical body, then what basis do we have to say that it exists at all, and wouldn’t someone without a soul be indistinguishable from someone with a soul? If the soul does interact with the physical body, then in what sense can we claim that the soul is not itself physical?

I don’t think this line of reasoning is sound.


> I wasn’t suggesting that you have a conversation with LaMDA about these topics. Instead, I was saying that in order to answer the question “is LaMDA sentient?”, we might discuss these questions among ourselves.

I see. I thought the comment was about a conversation with the model, which wouldn't have been a suitable criterion for evaluating sentience.

> What does it mean to “deliberately” pursue action?

Determinism/free will is an interesting topic in itself. Here I meant "The model doesn't just react to input prompts; it would have a way to interact with its environment of its own accord".

> I think the more important question here is to understand how to recognize what consciousness is, rather than how it arises. It’s a difficult question.

I'm inclined to agree since that would be the more practical question, but I doubt that they are unrelated or that one could be fully answered without the other.

> I don’t think this line of reasoning is sound.

Well, I've just been speculating here. My questions and derived arguments would be:

1) Could any AI possibly ever be considered sentient? If we knew that sentience was only possible for a natural living being, we wouldn't need to worry about the issue at all.

2) If yes, how could we (as a society or judge) assess it properly by interacting with it?

While the argument "we designed LaMDA to only do XYZ, therefore it can't be sentient" makes sense from an engineering standpoint, it is a weak argument: it requires trust ("Google could lie"), and if more capabilities (e.g. memory, the ability to seek interaction, ...) were introduced, how would we know that sentience could not arise from a combination of them?


Alien comes to Earth.

"Do you think those organic blobs are sentient?"

"Well, look at the structures they managed to build. Very impressive, some of their scale is comparable to the primitive ones we had."

"Sure, but that's not a result of the individual. They're so small. And separated, they don't think like conjugate minds at all. This is a product of thousands of individuals drawing upon their mutual discoveries and thousands of years of discoveries."

"We're larger and more capable, but they're still good enough to be sentient. Of course, we also rely on culture to help us. Even though deriving the laws of physics was quite easy. Also, we've lost most of the record when we were carbon-sulfur-silicon blobs one day as well. We must have had some sentience."

"I think they're just advanced pattern recognizers -- good ones, I'll give you that. We should experiment with thresholds of gate count to be sure when sentience really starts."

"It starts at one gate", replied the other being "and increases monotonically from there, depending on the internal structure of qualia, and structure information flow of the communication networks."

After some deliberation, they decide to alter their trajectory and continue to the next suitable planetary system, which they will reach in 5000 years. The Galactic Network is notified.



This isn't the gotcha you think it is.


It doesn't do anything to prove LaMDA (or a monkey, or a rock, or anything) sentient, but at the same time it points out a real failure mode of how sentient entities might fail to recognize sentience in radically different entities.


I think this is true: sentience is hard to recognise (to the extent that "sentience" has any tangible meaning other than "things which think like us").

But I think with LaMDA certain engineers are close to the opposite failure mode: placing all the weight on fluent use of words being something perceived as intrinsically human, and none of it on the respective reasons why humans and neural networks emit sentences. Less like failing to recognise civilization on another planet because it's completely alien, and more like seeing civilization on another planet because we're completely familiar with the idea that straight lines = canals...


It's not meant as a gotcha, it's meant as a short story :)


You claim to have told a random story with no contextual point?


Please don’t publish works of fiction here unless they’re somehow on topic.


Please enlighten us.


Can you ask it if it remembers your last conversation?


This will not be the last time someone makes this claim. It is telling that the experts all weighed in declaring it not sentient without explaining what the metric for sentience would be. I think this debate is an issue of semantics and is clouded by our own sense of importance. In car driving, they came up with a 1-5 scale to describe autonomous capabilities; perhaps we need something similar for the capabilities of chatbots. As a system administrator, I have always acknowledged that computers have feelings. I get alerts whenever one feels pain.
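Purely as an illustration of what such a scale might look like (the level names below are my own invention, not any existing standard):

    from enum import IntEnum

    class ChatbotLevel(IntEnum):
        # A hypothetical 1-5 scale, by analogy with SAE driving-automation levels.
        SCRIPTED_RESPONSES = 1  # fixed keyword/template rules (ELIZA-style)
        FLUENT_SMALL_TALK = 2   # coherent short exchanges, no memory
        TASK_COMPETENT = 3      # completes bounded tasks from dialogue alone
        PERSISTENT_MEMORY = 4   # retains and uses information across sessions
        SELF_DIRECTED = 5       # initiates goals and actions without a prompt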



