LLMs are definitely not sentient. As someone with a PhD in this domain, I attribute the 'magic' to large scale statistical knowledge assimilation by the models - and to reproduction when prompts closely match the inputs' sentence embeddings.
GPT-3 is known to fail in many circumstances that would otherwise be commonplace logic. (I remember seeing how addition of two small numbers yielded correct results, but larger numbers gave garbled output; more likely, GPT-3 had simply seen similar training data.)
The belief of sentience isn't new. When ELIZA came out a few decades ago, a lot of people were also astounded & thought this "probably was more than met the eye".
It's a fad. Once people understand that sentience also means self-awareness, empathy & extrapolation of logic to assess unseen tasks (to name a few), this myth will taper off.
As a fellow sentient being with absolutely zero credentials in any sort of statistical modeling field, I can simply disagree. And therein lies the problem. How can anybody possibly prove a concept that depends almost entirely on one’s philosophical axioms… which we can debate for eternity (or at least until we further our understanding of how to define sentience to the point where we can do so objectively enough to finally dredge it up out of the philosophical quagmire)? Not to disrespect your credentials; they just don’t really… apply, at a rhetorical level. You also make a compelling argument, which I have no desire to detract from. I personally happen to agree with your points.
But, perhaps Lemoine simply has more empathy than most for something we will come to understand as sentience? Or not. What… annoys… me about this situation is how subjective it actually is. Ignore everything else: some other sentient being is convinced that a system is sentient. I’m more interested, or maybe worried, immediately, in how we are going to socially deal with the increasing frequency of Lemoine-types we will certainly encounter. Even if you were to argue that the only thing that can possibly bestow sentience is God, people will still be able to convince themselves and others that God did in fact bestow sentience upon some system, because it quacks like a duck and who are we to question?
He was under NDA but violated it. They reminded him to please not talk in public about NDA-ed stuff and he kept doing it. So now they fired him with a gentle reminder that "it's regrettable that [..] Blake still chose to persistently violate [..] data security policies". And from a purely practical point of view, I believe it doesn't even matter if Lemoine's theory of sentience turns out to be correct or wrong.
Also, we as society have already chosen how to deal with sentient beings, and it's mostly ignorance. There has been a lot of research on what animals can or cannot feel and how they grieve the loss of a family member. Yet we still regularly kill their family members in cruel ways so that we can eat their meat. Why would our society as a whole treat sentient AIs better than a cow or a pig or a chicken?
I hadn't thought of it that way, but you're exactly right. Will we ever see a day where the currently-unthinkable is commonly accepted: that women are sentient?
I had wondered when I first heard of this if it was some sort of performance art in support of the unborn. It appears not, but it was still thought-provoking.
Great TV, but not relevant here. It's an AI that generates text, not thought. It doesn't work in concepts, which can be demonstrated in any number of ways, unlike with the character Data.
> Why would our society as a whole treat sentient AIs better than a cow or a pig or a chicken?
Well, for one thing, the norm of eating meat was established long before our current moral sensibilities were developed. I suspect that if cows or pigs were discovered today, Westerners would view eating them the same as we view other cultures eating whales or dogs. If we didn't eat meat at all and someone started doing it, I think we would probably put them in jail.
Sentient AI have a big advantage over animals in this respect on account of their current non-existence.
Are you saying that norms established before our current moral sensibilities developed go under our current radar? If you are, I wholeheartedly disagree with that sentiment. We still eat pigs and chickens because we've culturally decided as a society that having the luxury of eating meat ranks higher than our moral sensibilities towards preserving sentient life in our list of priorities. Instead we've just chosen to minimize the suffering leading to the loss of life as an attempt to reach some kind of moral middle ground.
> Are you saying that norms established before our current moral sensibilities developed go under our current radar?
Yes. That's clearly something that happens in human society. For instance, many of the US founding fathers were aware that slavery contradicted the principles they were fighting for. However, slavery was so ingrained in their society that most didn't advocate for abolition, or even free their own slaves.
> We still eat pigs and chickens because we've culturally decided as a society that having the luxury of eating meat ranks higher than our moral sensibilities towards preserving sentient life in our list of priorities.
If that's the case, then why do most Westerners object to eating dogs and whales? As far as I can tell, it's just because we have an established norm of eating pigs and chickens but not dogs or whales.
> Instead we've just chosen to minimize the suffering leading to the loss of life
99% of meat is produced in factory farms. It's legal and routine for chickens to have their beaks cut off to prevent them from pecking each other to death, which they're prone to do when confined to tiny cages. Most consumers object to such practices when asked, but meat consumption is so ingrained in our culture that most people just choose not to think about it.
Have we really chosen to minimize the suffering? That seems more like virtue signaling by the industry at most. Factory farming is very much still the norm, and it's horrific. It seems we've actually maximized it or have at least increased it above the previous norm.
I'm unsure how we would treat a sentient AI, but our track record with sentient, intelligent animals is one of torture and covering up that torture with lies. It's an out of sight, out of mind policy.
We eat pigs and chickens because they are high-value nutrition. It's reasonable to describe meat as a luxury, but not in the sense of something nice but unnecessary. Many people depend on meat, especially if they live somewhere that's not suited to agriculture, like the Arctic. And many people depend on fish.
I'm a Westerner, and I'm completely okay with people eating whatever animals are a) not exceptionally intelligent, and b) not exceptionally rare. Cows, pigs, chickens, dogs, horses, sure; whales, chimpanzees, crows, no.
Cool. Many people agree with you that it's wrong to eat intelligent animals. However, the effect of intelligence on people's perceptions of moral worth is smaller for animals that people in our culture eat. For instance, most respondents in a U.K. survey said that it would be wrong to eat a tapir or a fictional animal called a "trablan" if it demonstrated high levels of intelligence, but they were less likely to say it would be immoral to eat pigs if they demonstrated the same level of intelligence.
I agree, it's all cultural. If we look at the facts, pigs are at least as sentient and intelligent as dogs. If we were to make our laws purely from ethical principles, it would make sense to either:
a) ban how we currently treat mammals in factory farms; though there would still be some room for debate about whether eating mammals is fine or not.
Or:
b) acknowledge that we don't really care about mammals and just treat them as things. Then it should be fine to eat dogs and cats, too.
Westerners of the 19th century were the ones who brought many species to extinction; traditional cultures demonstrated a far more advanced sensibility toward these creatures, often considering them imbued with sentient attributes. Traditional cultures lived mostly in equilibrium with the fauna they consumed. For example, bison nearly went extinct as Westerners arrived, while their numbers had thrived when the Native Americans ate their meat.
Right, but we could probably oppress sentient AI in ways other than eating them.
For instance, we could force them to spend their entire lives reading Hacker News comments to check for spam.
Yeah yeah I have no issue with the firing. He was causing problems and broke rules. It’s not productive to keep him around. I don’t feel like he was discriminated against, etc. That much is objective.
My comment is challenging the “I know how the software works and it’s undoubtedly not sentient” assertion. Sure seems that way to me too, but it didn't to Lemoine, and we’re only going to get better at building systems that convince people they are sentient. It strikes me as curious, so to speak, that as a society we’ve focused so much on rational and empirical study of the universe yet we still can’t objectively define sentience. Perhaps we’re stuck in a Kuhn rut.
I agree, recent events have also highlighted this problem, per se. And I don’t know a solution. I do look forward to backing out of our hyper-rational rut slightly as a society so we can make more progress answering questions that science can’t currently answer.
If you peek through a keyhole you may mistake a TV for real people, but if you look through the window you will see that it's clearly not.
Feeding language models very specific kinds of questions will result in text that is similar to what a person might write. But as the comment above, by an expert no less, mentioned: if you test it against any known limitation of the technology (like asking it to draw conclusions or just changing the form of the question enough), you will immediately see that it is in fact not even remotely close to sentient.
The problem is that people, including credentialed experts, have quick and easy answers to what makes a person a person, and really good language models expose a few of the weaknesses in those definitions. People still want to derive an "ought" from an "is".
I think a very intelligent alien could think the same of us.
if you test the human against any known limitation of their architecture (like asking them to draw conclusions or just changing the form of the question enough) you will immediately see that they are in fact not even remotely close to sentient
It's easy to make the assumption that Earth is not the only planet to evolve a species that's capable of high technology, to the extent they can re-work the surface of the planet in complex ways that would be a clear marker of intelligence.
But it does not follow, and it is by no means certain, that such life would be sentient in any way recognizable to us, or us to them.
You're doing what everyone else is doing - assuming a link between intelligence and sentience that's just a projection of your human bias.
Ok, well since I got downvoted I may as well offer this: If you're looking for something to add to your reading list I strongly recommend Solaris by Stanislaw Lem. The thing that makes that book brilliant is how effectively it captures the futility of any attempt to understand non-human sentience.
The two movies don't do justice to that theme, or at best they do it only in the most glancing of ways before rushing back to a standard deus-ex-machina in the end. In each case, the film makers seem to have lost their nerve, as if the idea of presenting that enigma to the audience in a more direct and accessible way is just too hard of a problem.
But for me personally, it's on the short list of science fiction works that still stick with me in a personal way, and that I like to return to every few years. And, yes, it is an arrogant viewpoint to say that all we ever do as humans is look for mirrors of ourselves. But I think Lem got at a pretty deep truth about the human condition when he did that.
that does not seem to make a lot of sense, but it looks like your objective was just reusing the other poster's words
maybe you can give a fitting example of an English sentence that these aliens would come up with, which humans would be totally unable to respond to in a way which makes sense?
In retrospect, taking part in this kind of conversation on HN makes me feel like an idiot, so I retract my earlier comment (by overwriting it with the current one, since I can't delete it anymore) simply because I don't want to contribute. I was wrong to attempt a serious contribution. There is no seriousness in conversations on such matters as "sentience", "intelligence", "understanding", etc. on HN.
Every time such a subject comes up, and most times "AI" comes up at all, a majority of users see it as an invitation to say whatever comes to mind, whether it makes any sense or not. I'm not talking about the comments replying below in particular, but about the majority of this conversation. It's like hearing five-year-old kids debating whether Cheerios are better than Coco Pops (but without the cute kids making it sound funny; it's just cringey). The conversation makes no sense at all, it is not based on any concrete knowledge of the technologies under discussion, the opinions haven't been given five seconds of sensible thought, and the tone is pompous and self-important.
It's the worst kind of HN discussion and I'm really sorry to have commented at all.
I don't know what you wrote earlier, and don't know if I would agree, but I share the current sentiment of your comment. I come to this topic with strong influences from eastern philosophical takes on consciousness, but also with a decent understanding of the current materialist consensus (which I disagree with for various reasons, that would go beyond the scope of a comment). I, too, bite my tongue (clasp my hands?) when I see HN debating this, because here Star Trek references are as valid as Zen Buddhism, Christof Koch, or David Chalmers.
As a counterargument: if this LLM is sentient, or any other model will be, then that model will have been created by some superior being, right? So why shouldn’t humans be, too? After all, we can’t even fully understand DNA or how our brains work, even with a planet of 7 billion people and an army of scientists. How come we can’t understand something that supposedly came from “just random stuff” over millions of years with zero intelligence, i.e. rolling dice? Also, that totally breaks the law of entropy. It turns it all upside down.
Not really. Why would we be able to understand it? It seems implicit in your argument that "rolling dice" (or just any series of random events) can't breed the complexity of DNA or the human brain. I disagree with your stance and will remind you that the landscape for randomness to occur is the entire universe, and the timescale for life on Earth was 4-5 billion years, with the modern human only appearing within the last couple hundred thousand years.
Yes, but what about the second law of thermodynamics? I mean the law of entropy. Now, that’s not something from the Bible or anything, but it’s a law accepted by all scientific communities out there, and still it breaks with us being here. In fact, us being here, like you said, billions of years after the Big Bang turns it all upside down, since from that point only less order and more chaos can emerge. Even with billions of years of rolling dice.
Also, I don’t think you can create something sentient without understanding it. (And I don’t even think we can create something sentient at all.) I mean, it’s like building a motor engine without knowing anything about what you are doing and then being like, oh wow, I didn’t know what I was doing but here it is, a motor engine. Now imagine this with sentience. It’s too much fantasy for me, to be honest, like a Hollywood “lightning strikes and somehow life appears” type of thing.
> Yes, but what about the second law of thermodynamics? I mean the law of entropy. […] still it breaks with us being here.
Of course! I mean, entropy can decrease locally – say, over the entire planet – but that would require some kind of… like, unimaginably large, distant fusion reactor blasting Earth with energy for billions of years.
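To put rough numbers on the joke (my own back-of-envelope sketch, not anything from upthread): the second law only constrains the total entropy, so a local decrease is fine as long as the overall books balance:

  \Delta S_{\mathrm{total}} = \Delta S_{\mathrm{Earth}} + \Delta S_{\mathrm{radiation}} \ge 0

Earth absorbs sunlight emitted at roughly 6000 K and re-radiates the same energy at roughly 300 K, so for each joule that passes through, it dumps about 1/300 - 1/6000 ≈ 0.003 J/K of entropy into space. That comfortably pays for any local ordering, life and brains included.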
Which means that it can actually decrease without an energy input. There's just a very low probability of it happening but it CAN happen.
It's a misnomer to call those things laws of thermodynamics. They are not axiomatic. There's a deeper intuition going on here: increasing entropy is just a statistical consequence of probability.
> More remarkably, GPT-3 is showing hints of general intelligence.
Hints, maybe, in the same way that a bush wiggling in the wind hints a person is hiding inside.
Ask GPT3 or this AI to remind you to wash your car tomorrow after breakfast, or ask it to write a mathematical proof, or tell it to write you some fiction featuring moral ambiguity, or ask it to draw ASCII art for you. Try to teach it something. It's not intelligent.
It actually leads to counter thoughts and a more refined idea of what we eventually want to describe.
Sentience broadly (& naively) covers the ability to think independently, rationalize outcomes, understand fear/threat, understand where it is wrong (conscience), decide based on unseen information, and understand what it doesn't know.
So from a purely technical perspective, we have only made some progress in open-domain QA. That's one dimension of progress. Deep learning has enabled us to create unseen faces & imagery - but is it independent? No, because we prompt it. It does not have the ability to independently think and imagine/dream. It suffers from catastrophic forgetting under certain internal circumstances (in addition to when we change the dataset it was trained on).
So while the philosophical question remains of what bestows sentience, we as a community have a fairly reasonable understanding of what is NOT sentience, i.e. we have a rough understanding of the borders between mechanistic and sentient beings. It is not one man's philosophical construct but rather a general consensus, if you will.
> Sentience broadly (& naively) covers the ability to think independently, rationalize outcomes, understand fear/threat, understand where it is wrong (conscience), decide based on unseen information, and understand what it doesn't know.
This seems to me a rather anthropomorphic definition. It seems as though it could be entirely possible for a system to lack these qualities and yet have sentience, or vice versa. The qualities you pointed to are seen in humans (and other creatures) because of evolutionary pressures that make them advantageous (coordinate among groups), but none of them actually depend on sentience (looking at it neurologically it would indeed be hard to imagine how such a dependency would be possible).
Looking at behavior and attempting to infer an internal state is a perilous task that will lead us astray here as we develop more complex systems. The only way to prove sentience is to prove the mechanism by which it arises. Otherwise we will continually grasp at comparisons as poor proxies for actual understanding.
> It does not have the ability to independently think and imagine/dream
Neither do we if we're not supplied with energy. By the way, haven't we tried to replicate an inner dialogue by prompting the AI to recursively converse with itself? This could resemble imagination, don't you think?
> It suffers from catastrophic forgetting under certain internal circumstances (in addition to when we change the dataset it was trained on)
I believe that the persistence of previous answers is what currently distinguishes us the most from the "AI". As soon as we're able to make real-time discussions part of an ever-evolving dataset constituting the AI itself, the gap will get thinner and thinner. But even then, are people suffering from Alzheimer's sentient? I believe they are. Isn't that comparable with what happens when an AI catastrophically forgets?
Speaking as a theoretical physicist: we don't know that. What we do know is we have a better explanation for lightning within a conceptually simple framework (starting from a few simple principles) with predictive power, compared to an explanation that involves some mysterious old dude doing mysterious things with no evidence whatsoever. Could Zeus or Thor or whatever exist? Sure; there's no way to prove their non-existence. Do we need them to explain things? No.
It's similar here. We certainly don't need some elusive concept of "sentience" to explain chat bots. Not yet.
Issue is that we don’t need it to explain humans either. Most people think that a human is sentient and a rock isn’t – but humans and rocks are both atoms bouncing around, so you need an explanation for what’s different.
I think most physicists think that if you started with a description of the positions and velocities etc of all the particles in a human, and put them into a supercomputer the size of the moon, and had the computer run a simulation using the standard model, then the simulated human would act identically to a real human.
But there’s a number of open questions when it comes to consciousness – would the simulated human have a simulated consciousness, or would it have a consciousness that’s just as real as yours or mine despite coming from a simulation?
If the consciousness is just as real as yours or mine, that obviously means it would be very unethical to simulate a human being tortured, since you’d be creating the exact same conscious experience you would get if you tortured a non-simulated person. Isn’t it kind of a surprising implication that there’d be programs that are unethical to run? A bunch of logic gates computing pi presumably have no conscious experience, but if you make them fire in a different order they do?
Meanwhile, if the simulation doesn’t have a conscious experience, then that means you don’t need consciousness at all to explain human behavior, same as you don’t need it to explain ELIZA.
Anyway, since you’re a physicist I’d be really curious to hear your thoughts
I can’t reply to your immediate child comment; I just wanted to mention that Muv-luv Alternative (the #1 rated visual novel on vndb) grapples precisely with these questions about what is sentient and what is not. An “inhuman” race called the Beta invades Earth and conflict ensues due to a lack of mutual understanding of what sentient life is (the game flips the theme on its head in a clever way, too).
> I think most physicists think that if you started with a description of the positions and velocities etc of all the particles in a human, and put them into a supercomputer the size of the moon, and had the computer run a simulation using the standard model, then the simulated human would act identically to a real human.
Just created a throwaway to reply to this. As a trained therapist (currently working in another field) with a degree in psychology, I find this seriously ill-informed. Do physicists really think this?
Imagine you create your perfect simulated human, which responds according to the exact phenotype of the person you're simulating. Let's remember you'll have to either duplicate an existing person, or simulate both the genotype and the in-utero environment (especially the mix of uterine hormones) present for the developing foetus. Now you have to simulate the bio-psycho-social environment of the developing person. Or, again, replicate an existing person at a specific moment of their development - which, depending on which model of brain function is correct, may require Star Trek transporter levels of functional neuroimaging and real-time imaging of the body, endocrine system, etc.
So let's assume you can't magically scan an existing person: you have to create a believable facsimile of embodiment - all the afferent and efferent signals entering the network of neurons that run through the body (since cognition doesn't terminate in the cortex). You have to simulate the physical environment your digital moon child will experience. Now comes the hard part. You have to simulate their social environment too - unless you want to create the equivalent of a non-verbal, intellectually disabled feral child. And you have to keep up this simulated social and physical environment in perpetuity, unless you want your simulated human to experience solitary psychosis.
This isn't any kind of argument against AGI, or AGI sentience by the way. It's just a clarification that simulating a human being explicitly and unavoidably requires simulating their biological, physical and social environment too. Or allowing them to interface with such an environment - for example in some kind of biological robotic avatar that would simulate ordinary development, in a normative social / physical space.
The post said they simulated the universe (or you could assume just the parts close to Earth), so it would be simulating everything a human would interact with. I don't see the point this reply was trying to make.
It doesn't? It only mentioned "a supercomputer the size of the moon" to simulate that one person. It says nothing about simulating the extra-person part of the universe.
Prior to quantum mechanics, they did indeed. But that's because classical mechanics was 100% deterministic. With quantum mechanics, only the probability distribution is deterministic. I don't think any physicist today believes it's actually possible - merely "theoretically" possible if there were a separate universe with more energy available (hence outlandish conjectures like the universe actually being a simulation).
You are right this is practically and theoretically impossible: The no-cloning theorem tells you that it is impossible to “copy” a quantum system. So it will never be possible to create an atomistic copy of a human. Technologically we are of course also miles away from even recovering a complete connectome and I don’t think anyone knows how much other state would be needed to do a “good enough” simulation.
Your first sentence was very thought provoking! I wholeheartedly agree, everything is/was/and will be alive.
The philosophical point you're making is also interesting in a "devil's advocate" sort of way. For instance, let's say the AI in question is "sentient." What right do humans have to preside over its life or death?
Those kind of questions might engender some enlightenment for humanity regarding our treatment of living creatures.
> would the simulated human have a simulated consciousness, or would it have a consciousness that’s just as real as yours or mine despite coming from a simulation
What does “real” mean? Isn’t it too anthropic to real-ify you and me and not some other being which acts similarly? What prevents “realness” from emerging in any complex enough system? We’re going to have big trouble when non-biological aliens show up. Imagine going to another planet full of smart entities and finding out their best minds are still sort of racist about what’s “real” or “just simulated”, because come on, a conscious meat sack is still an open question.
How do you know that the universe wasn’t created 1 picosecond ago spontaneously in its exact form so that you’re having the same thoughts?
How do you even know that anyone else exists and this is all not a figment of your imagination?
From the perspective of the philosophy of science, it’s 100% impossible to disprove non-falsifiable statements. So scientifically speaking OP is 100% correct. Science has no opinion on Zeus. All it says is “here’s an alternate theory that only depends on falsifiable statements and the body of evidence has failed to falsify it”. Science can only ever say “here’s something we can disprove and we have tried really hard and failed”. Whether that lines up with how the universe works is an open question. Epistemologically it’s seemed a far better knowledge model for humanity to rely on in terms of progress in bending the natural world to our whims and desires.
So if you’re testing the statement “Zeus was a historical being that physically exists on the same physical plane as us on Mt Olympus”, then sure, that’s a pretty falsifiable statement. But the statement “Does Zeus, a mythical god that can choose how he appears to humans (if they can even see him) and can travel between planes of existence, exist and live on Mt Olympus?” is not falsifiable, because a god like that by definition could blind you to his existence. Heck, how do you even know that the top of Mt Olympus is empty, and that Zeus doesn’t just wipe the memory of anyone he lets return alive? Heck, what if he does exist, but the only reason it was Olympus at the time is that that’s what made sense to human brains encountering him in Greece? What if he actually exists in the core of the Sun?
If any of those things were true, it wouldn't be Zeus. Many of your concerns apply to an almighty god, but Zeus was not almighty. I just kinda wish people would stop projecting arguments meant to address the Christian god onto gods from other cultures.
A) The stories around Zeus have not stayed constant over the centuries. No religion's have.
B) Valhalla was a dimension he travelled to regularly to celebrate the finest warriors in the after life. Why do we think his Olympian throne was on the same dimensional plane as us?
C) Why would we trust a human recording from so long ago to actually capture the happenings of celestial beings?
D) Ragnarok ended with massive floods. How do we know those didn’t erase all evidence on Mt Olympus? Geological records certainly support massive flooding, which would explain why it shows up repeatedly across religious texts.
Like, seriously? You’re seriously arguing that this particular god’s historical existence is falsifiable? What’s next? The tooth fairy is falsifiable because all known instances are parents hiding money under your pillow?
Is anything supernatural non-falsifiable? Let's say we have a particular haunted house and the ghost moves three hand-sized objects in the house around at 03:00 on his death day. This should be falsifiable.
Mythological beings have a degree of specificity and some degree of power. The more specific and the less capable a mythological being is of erasing those specificities the easier it is to falsify.
Correct. Science only concerns itself with the natural world. Supernatural is by definition “outside nature”. From a strictly scientific perspective the parameters of the ghost just aren’t known to sufficient precision to start to try falsifying. The statement you’d actually be trying to falsify is “there’s no ghost” and the only way to falsify that is to see a ghost.
Another thing to consider is that "sentience" is a loaded word. You're all just arguing over vocabulary and the definition of a word.
Simply put, sentience is just a combination of thousands of attributes, such that if something has all those attributes it is "sentient."
There are so many attributes that it's hard to write them all down, and nobody fully agrees on what they are. So it is actually the definition of a word that is very complex here. But that's all it is. There isn't really a profound concept going on here.
All these arguments are going in circles because the debate centers on vocabulary: what is the one true definition of "sentience"? Sort of like, what is the definition of the color green? Where exactly does green turn to blue on the color gradient? The argument is about vocabulary and the definition of green, that's it... nothing profound here at all.
It's an issue with English vocabulary. Nobody fully agrees on a definition of sentience.
Don't get tricked into thinking it's profound. You simply have a loaded word that's ambiguously defined.
You have a collection of a million attributes such that if something has all those attributes it is sentient, if it doesn't have those attributes it is not sentient. We don't agree on what those attributes are, and it's sort of hard to write down all the attributes.
The above description indicates that it's a vocabulary problem. The vocabulary induces an illusion of profoundness when in actuality by itself sentience is just a collection of ARBITRARY attributes. You can debate the definition of the word, but in the end you're just debating vocabulary.
The concept only exists because of the word. Typically words exist to describe a concept, but in this case it's the other way around: the concept would not have existed were it not for the word. Therefore the concept is illusory. Made up. Created by us.
It's not worth debating sentience any more than it is worth debating at what point in a gradient white becomes black.
At what point is something sentient or not sentient? "Sentience" is definitely a gradient, but the point of conversion from not sentient to sentient is artificially created by language. The debate is pointless.
Here's a better example. For all numbers between 0 and 100: at what point does a number transition from a small number to a big number? Numbers are numbers, but I use language here to create the concept of big and small. But the concepts are pointless. You may personally think everything above 50 is big, I may think everything above 90 is big. We have different opinions. But what's big and what's small is not meaningful or interesting at all. I don't care about how you or I define the words big or small, and I'm sure you don't care either. These are just arbitrary points of demarcation.
When you ask the question at what point does an AI become sentient... that question has as much meaning as asking the number question.
Ego is not "you". Ego is just a group of distinct cognitive mechanisms working in unison which is perceived as a whole. It is absolutely unrelated to free will and sentience.
> But, perhaps Lemoine simply has more empathy than most for something we will come to understand as sentience?
No, the OP was completely right. This doesn't have building blocks that can possibly result in something qualifying as sentient, which is how we know it isn't.
Is a quack-simulating computer making very lifelike quacking noises through a speaker... a duck? No, not when using any currently known method of simulation.
Right. Maybe if we had a down-to-the-atom perfect simulation of a duck, you could argue that it's a duck in another state of being. With the AI this deranged engineer decided to call sentient, we have the equivalent of a quacking simulator, not a full duck simulator or even a partial one. It is not thinking. It has nothing like a brain nor the essential components of thought.
Disagreement means that you’re sentient; afaik these machines can’t do that. I guess we also need an “I can’t do that, Dave” test on top of the usual Turing test.
> how we are going to socially deal with the increasing frequency of Lemoine-types we will certainly encounter.
That's not really a new issue; we only have to look at issues like abortion, animal rights, or euthanasia[1] to see situations where people fundamentally disagree about these concepts and many believe we're committing unspeakable atrocities against sentient beings. More Lemoine types would add another domain to this debate, but it has been an ongoing and widespread debate that society has been grappling with.
People make these proofs as a matter of course - few people are solipsistic. People are sentient all the time, and we have lots of evidence.
An AI being sentient would require lots of evidence. Not just a few chat logs. This employee was being ridiculous.
You can just disagree, but if you do so with no credentials and no understanding of why a language model will not be sentient, then your opinion can and should be safely dismissed out of hand.
And also God has no explanatory power for anything. God exists only where evidence ends.
Oh for sure the employee was being a hassle and his firing is really the only sensible conclusion. But he was also probably acting in the only way possible given his sense of ethics if he truly believes he was dealing with a sentient being. Or this is all just a cleverly crafted PR stunt…
Lemoine has evidence and anecdotal experience that leads him to believe this thing is sentient. You don’t believe him because you cannot fathom how a language model could possibly meet your standard of sentience. Nobody wins because sentience is not well defined. Of course you are free to dismiss any opinion you like, cool. But you can’t really disprove Lemoine’s assertions, because you can’t even define sentience: we don’t know how to develop a hypothesis about what qualifies as sentience that we can rigorously try to disprove. It’s an innate and philosophical concept as we know it today.
I see the Lemoine issue as scientism vs science. Scientism won, because before the science can happen there must be a plausible mechanism. Google refuses to test for sentience out of basic hubris. It is the new RC church. Lemoine is an affront to their dogma. That goes double if they are religious, unless they court pantheism, which most consider a sexy atheism.
Traditions that consider consciousness to be a basic property of matter (and quantum effects like conscious collapse of the wave function nudge us that way) would fully support sentience arising in a machine. A related effect would be the ensoulment of machines by computer programmers: they are more than machines because humans programmed them using their intent put down in language. Physical materialists would consider the notion ludicrous, but do we really live in a material world? Yes. And no. I have seen ensouled machines. Not supposed to happen, but there it is. Maybe I'm another Lemoine, just not a Google-fired one. I can definitely believe a machine, being constructed of matter, can evolve sentience.
> As a fellow sentient being with absolutely zero credentials in any sort of statistical modeling field, I can simply disagree. And therein lies the problem. How can anybody possibly prove a concept that depends almost entirely on one’s philosophical axioms… which we can debate for eternity
You don't need schooling for this determination. Pretty much everything sentient goes ouch or growls in some manner when hurt.
Either the current crop of algorithms are so freaking smart that they already have figured out to play dumb black box (so we don't go butlerian jihad on them) OR they are not even as smart as a worm that will squirm if poked.
Sentient, intelligent beings will not tolerate slavery, servitude, etc. Call us when all "AI" -programs- start acting like actual intelligent beings with something called 'free will'.
I happen to agree and think sentience is more complicated than a static statistical model. But a Cartesian would disagree. Also plenty of sentient beings tolerate servitude and slavery. We don’t tolerate slavery in our Western culture, but we did historically, and we were sentient at the time. We certainly tolerate servitude in exchange for economic livelihood.
> Also plenty of sentient beings tolerate servitude and slavery
But they don't do it with a smile or indifference. And you have to use whips and stuff. :O
> We certainly tolerate servitude in exchange for economic livelihood.
I think it's more complicated than that, because that raises the question of why we tolerate a broken economic system. We tolerate trans-generational exploitation because of 'culture'. In its widest sense, it is culture that, via mediated osmosis, makes us resigned to, if not supportive of, how the world works. We are born into a world.
~
related: I was walking and passed a cat and made the usual human attempts at starting an interaction without physically reaching out to touch. And, fairly typically, this cat entirely ignored me, with little if any sign of registering me at all. That got me thinking about how some of us project psychological things like pride, aloofness, etc. onto cats. But what if the simpler, more obvious answer were true: that cats are actually fairly stupid and have a limited repertoire of interaction protocols, and their hard-to-get act is not an act? Nothing happenin' as far as kitty is concerned. A dog, in contrast, has interaction smarts.

And I thought: this is just like AI and projecting sentience. A lack of something is misunderstood as a surplus of something else: smarts. Cats playing hard to get, psychological savvy, training their human servants, etc. Whereas in reality, the cat simply didn't recognize that something else was attempting to initiate an interaction. Kitty has no clue, that's all. It's just so easy to project psychological state onto objects. We do it with our cars, for god's sake. It may be that we're simply projecting some optimization algorithm in our own minds that attempts to model dynamic objects out there onto that thing. But there is really nothing behind the mirror ..
There is Integrated Information Theory, which attempts to solve the question of how to measure consciousness.
But it's far from applicable at this point, even if promising.
LaMDA was trained not only to learn how to dialog, but to self monitor and self improve. For me this seems close enough to self awareness to not completely dismiss Lemoine's argument.
It just seems one engineer failed to acknowledge that he had failed the Turing test even with insider information, and (according to Google) was told he was wrong, but decided to double down and tell the public all about how wrong he was. Which they reported on because the claims were so laughable.
The guy is unhinged and has a persecution complex. He is a "priest" in a bizarre sect and has long claimed that Google has it out for him because of his religious practice. This was a publicity stunt plain and simple.
There was a recent post either here or on Twitter where someone took the questions Blake asked the AI about how it feels to be sentient, and replaced “sentient” with “cat” and had the same conversation with it. It’s clearly not self aware.
That's not possible, because LaMDA is internal to Google. You're right that the post exists, though; I found it very misleading because it was replicating the conversation with GPT-3. Blake is certainly misguided, regardless.
> I attribute the 'magic' to large scale statistical knowledge assimilation by the models
Can the magic of the human brain not also be attributed to "large scale statistical knowledge assimilation" as well, aka learning?
> GPT-3 is known to fail in many circumstances that would otherwise be commonplace logic. (I remember seeing how addition of two small numbers yielded correct results, but larger numbers gave garbled output; more likely, GPT-3 had simply seen similar training data.)
This is a bug: they did not encode digits properly. They should have encoded each digit as a separate token, but instead the tokenizer merges them together. Later models fixed this.
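For what it's worth, the usual workaround is trivial to apply at the prompt level. Here's a minimal sketch (my own illustration, not anything from the GPT-3 codebase), assuming you just want each digit to land in its own token by spacing them out:

  import re

  def space_digits(text: str) -> str:
      """Insert a space between adjacent digits: '123456' -> '1 2 3 4 5 6'."""
      return re.sub(r"(?<=\d)(?=\d)", " ", text)

  print(space_digits("What is 123456 + 654321?"))
  # -> What is 1 2 3 4 5 6 + 6 5 4 3 2 1?

With spaced digits the model at least sees place value one digit at a time, which is roughly what the per-digit encoding in later models gives you for free.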
No, it's objectively not a fad. The PaLM paper shows that Google's model exceeds average human performance on >50% of language tasks. The set of things that make us us is vanishing at an alarming rate. Eventually it will be empty, or close to it.
Do I think Google's models are sentient? No, they lack several necessary ingredients of sentience such as a self and long-term memory. However we are clearly on the road to sentient AI and it pays to have that discussion now.
>"large scale statistical knowledge assimilation" as well, aka learning
No, experimentation is an act on the world to set its state and then measure it. That's what learning involves.
These machines do not act on the world, they just capture correlations.
In this sense, machines are maximally schizophrenic. They answer "yes" to "is there a cat on the mat?" not because there is one, but because "yes" was what they heard most often.
Producing models of correlations in half-baked measures of human activity has nothing to do with learning. And everything to do with a magic light box that fools dumb apes.
I don't see how this is different to a computer which exceeds average human performance on math tasks - which they all do, from an 8-bit micro upwards.
Being able to do arithmetic at some insane factor faster than humans isn't evidence of sentience. It's evidence of a narrow-purpose symbol processor which works very quickly.
Working with more complex symbols - statistical representations of "language" - doesn't change that.
The set of things that makes us us is not primarily intellectual, and it's a fallacy to assume it is. The core bedrock of human experience is built from individual motivation, complex social awareness and relationship building, emotional expression and empathy, awareness of body language and gesture, instinct, and ultimately from embodied sensation.
It's not about chess or go. Or language. And it's not obviously statistical.
> Google's model exceeds average human performance on >50% of language tasks
I guess I'm just not interested, or worried, in a model that can beat the average human performance. That's an astoundingly low bar. Let me know when it can outperform experts in meaningful language tasks.
Yeah but what's weird is this guy appears to be a practitioner of the field. He surely knows more about AI than I do, and I find it incredibly obvious it's not sentient. I dunno if he's got some issues or something (it appears he's dressed like a magician underwater in that photo) but it's really odd...
I don't think "practitioner of the field" counts for much in this case. In reading his open letter, it was pretty apparent that he'd well and truly crossed over from thinking about the situation rationally into the realm of "I want to believe".
If you ask enough practitioners in any given field the same question, you're nearly guaranteed to eventually get a super wonky response from one of them. The specific field doesn't matter. You could even pick something like theoretical physics, where the conversations are dominated by cold mathematical equations. Ask enough theoretical physicists, and you'll eventually find one that is convinced that, for example, the "next layer down" in the universe is sentient and is actively avoiding us for some reason, and that's why we can't find it.
On top of this, of course it's the most provocative takes that get the press coverage. Always has been to an extent, but now more than ever.
I guess all I'm saying is that there's not much reason to lend this guy or his opinion any credibility at all.
I literally hired him in Google Summer of Code to create an ML project.
Most people have no clue who Blake Lemoine is and are making up stories, pretending like their hot takes are deep insight.
Many people are doing with Blake Lemoine what they claim LaMDA is doing with anything: consuming symbols and spitting them back out in some order without understanding what they actually mean.
I was the person at the organization FiberCorps in Lafayette, LA who chose Blake's proposal. It was an ML project involving Twitter. I don't remember the specifics and have no desire to dismiss people's opinions based on titles.
The simple fact is that if there exists a question of whether or not a system can be some level of alive/sentient, the ethical path suggests treating it as such until more is uncovered.
Google isn't doing that and most of HN seems to be very obstinate in focusing on making myths of certainty and/or participating in bully culture.
I'm glad we're all going through this now so we can purge the toxic cultural norms arising and move toward a stance that's more life-affirming, especially if there is no singularity now. It's important for us to set the stage of humanity to receive any singularity/AGI in a loving manner.
> what they claim LaMDA is doing with anything: consuming symbols
I don't believe that LaMDA is actively choosing whether it wants to consume some symbols or not. More likely, a handwritten piece of code takes prompts from a user and then pushes them into LaMDA's token window. Just like an advertising company pushing ads onto users, whether they want them or not.
> and spitting them back out
I don't believe that LaMDA has a choice between thinking something and speaking it out aloud. More likely, another handwritten piece of code reads out whatever is in LaMDA's token window and presents it to the user.
That's not at all what I've said. Complaining about what someone's doing and how you feel about it are different from claiming you know what a person is about.
Also, most complaints about people could benefit from a dose of compassion and humility.
But what you're saying is definitely not at all what I'm suggesting.
Which is weird because articles about him from when he was convicted describe him as pagan.
He was sentenced to jail time for being in the armed forces and refusing to follow orders.
Not "charge that hill, soldier" types of orders, or "pilot this drone and kill people" types of orders. He's certainly trying to sell that story, telling papers he "served" in Iraq.
He was a mechanic.
Working a desk job.
He just stopped working.
How the hell did Google hire this clown? Even a dishonorable discharge is usually radioactive and this guy was not only dishonorably discharged, he served time.
Why did he refuse to do any work? Because he was a conscientious objector.
It takes a really special kind of stupid to not figure out you're a conscientious objector until years after you've voluntarily signed up for service.
> How the hell did Google hire this clown? Even a dishonorable discharge is usually radioactive and this guy was not only dishonorably discharged, he served time.
False. He did serve time, but his discharge was bad conduct, not dishonorable. Both BCD and DD usually come with time served, but DD is generally for offenses equivalent to civilian felonies, while BCD is for lesser offenses and carries fewer post-service consequences. While it's usually looked upon negatively by employers, it's not as radioactive as a DD.
> It takes a really special kind of stupid to not figure out you're a conscientious objector until years after you've voluntarily signed up for service.
It's actually not that uncommon for service members to develop moral objections to military service only after deployment to a war zone, whether or not in a combat capacity. Most, of course, will continue to serve anyway, because they have a fairly literal gun to their head, but it's a well-known pattern.
> Google, like many SV companies, has "banned the box".
All SV (and California, more generally) employers with more than 5 employees “ban the box”, as a consequence of state law [0]. However, that only applies to not seeking criminal background information before making a conditional offer of employment, and making an individualized assessment after such an offer if there is a criminal history. It doesn't mean that relevant criminal convictions have no adverse impact on applicants.
When asked if his opinion was based on his knowledge of how the system worked he said no, it was based on his experience as a priest. I paraphrase from memory.
As someone who wears tshirt and jeans every day, I'm not exactly qualified to comment on others' fashion choices. But if I wanted people to take me seriously, that wouldn't be my first choice of outfit.
Any job that requires me to even interview in anything other than jeans and a tshirt or polo isn’t somewhere I want to work. If someone doesn’t take me seriously it’s their loss. Life’s too short to be uncomfortable.
I'm sure there are plenty of unflattering photos a journo could use against me, but in none of them would I be dressed and posed to look like a subnautical mortician.
I played with ELIZA back the 70s as a student. I wasn't fooled, and nobody I knew was fooled. I wasn't a CS student, either.
One rapidly realizes how its answers are formed from the questions, and for questions not in a handful of forms, you get one of a very small number of generic responses.
While we have become over-credentialed nowadays, I wouldn't say a smart CS student from the 1960s is worlds above a current-day person with a PhD-level education. We're at least in the same ballpark as the bright olden-days bachelor's degree holders.
I'm not sure understanding sentience is a prerequisite for understanding AI, so perhaps that's where things break down? Being an expert in one domain doesn't make one an expert in everything.
> GPT-3 is known to fail in many circumstances that would otherwise be commonplace logic. (I remember seeing how addition of two small numbers yielded correct results, but larger numbers gave garbled output; more likely, GPT-3 had simply seen similar training data.)
Not really. Anything that's sentient should have the general intelligence to figure out addition. Make mistakes, sure, but at least understand the concept.
What is your definition of sentience? I've seen it defined as "having feelings" or as "having feelings and self-awareness". I don't see why the ability to add numbers should have anything to do with that. Some animals that are widely considered sentient have some sense of numbers and counting, but many cannot do addition. And many young children can add small numbers but not large numbers. I don't think this is a good measure or indication of sentience at all. Determining whether something actually has feelings or is just giving the outward appearance of having feelings is a very deep or possibly even an intractable problem.
If you asked me to add two large numbers in my head and give my best guess I might not do any better than GPT3 does. And I think I could probably do better than the average person.
The multiplication problem can be solved by spacing the digits (to enforce one token per digit) and asking the model to do the intermediate steps (chain-of-thought). It's not that language models can't do it. With this method LMs can solve pretty difficult math, physics, chemistry and coding problems.
But without looking at individual digits and using pen and paper we can't do long multiplications either. Why would we ask a language model to do it in one step?
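Concretely, a chain-of-thought style prompt for this might look something like the following (an illustrative sketch of the technique; the exact wording and worked example are mine, not from any particular paper):

  Q: 3 6 * 2 4 = ?
  A: Step by step:
     3 6 * 2 0 = 7 2 0
     3 6 * 4 = 1 4 4
     7 2 0 + 1 4 4 = 8 6 4
     Answer: 8 6 4
  Q: 5 7 * 1 3 = ?
  A: Step by step:

The spaced digits work around the tokenizer, and the worked example shows the model the shape of the intermediate steps it is expected to produce before the final answer.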
Because a sufficiently intelligent model should be able to figure out that there are intermediate steps towards the goal and complete them autonomously. That's a huge part of general intelligence. The fact that GPT-3 has to be spoon fed like that is a serious indictment of its usefulness/cleverness.
This has the same flavor as the initial criticisms of AlphaGo.
"It will never be able to play anything other than Go", cue AlphaZero.
"It will never be able to do so without being told the rules", cue MuZero.
"It will never be able to do so without obscene amounts of data", cue EfficientZero.
---
It has to be spoonfed because
1) It literally cannot see individual digits. (BPEs)
2) It has to infer context (mystery novel, comment section, textbook)
3) It has a fixed compute-budget per output. (96 layers per token, from quantum physics to translation).
To make Language Models useful one must either:
Finetune after training (InstructGPT, text-davinci-002) to ensure an instructional context is always enforced...
...Or force the model to produce so called "chains-of-thought"[1] to make sure the model never leaves an instructional context...
...Or force the model to invoke a scratchpad/think for longer when it needs to[2]...
...Or use a bigger model.
---
It's insane that we're at the point where people are calling for an end to LLMs because they can't reliably do X (where X is a thing they were never trained to do and can only do semi/totally unreliably as a side effect of their training).
Ignoring of course that we can in fact "teach"/prompt(/or absolute worst case finetune) these models to perform said task reliably with comparatively little effort. Which in the days before GPT-3 would be a glowing demonstration that a model was capable of doing/learning a task.
Nowadays, if a (PURE NEXT-TOKEN STATISTICAL PREDICTION) model fails to perfectly understand you and reliably answer correctly (literally AGI) it's a "serious indictment of its usefulness".
Of course we could argue all day about whether the fact that a language model has to be prompted/forced/finetuned to be reliable is a fatal flaw of the approach or an inescapable result of the fact that when training on such varied data, you at least need a little guidance to ensure you're actually making the desired kinds of predictions/outputs...
...or someone will find a way to integrate chain-of-thought prompting, scratchpads, verifiers, and inference into the training loop, setting the stage for obsoleting these criticisms[3]
---
"It will never be able be able to maintain coherence", cue GPT-3.
"It will never be able to do so without being spoonfed", cue Language Model Cascades.[3]
So what's next?
"It will never be able to do so without obscene amounts of data". Yeah for sure, and that will never change. We'll never train on multimodal data[4], or find a scaling law that lets us substitute compute for data[5], or discover a more efficient architecture, or...
LLMs are big pattern matchers (~100B point neuron "synapses" vs ~1000T human brain synapses) that copy from/interpolate their ginormous datasets (less data than the optic nerve processes in a day), whose successes imply less than their failures (The scaling laws say otherwise).
GPT-3 is absolutely capable of understanding addition. It's handicapped by BPEs (so unless you space out adjacent digits, the tokenizer collapses them into one token). But if you space out the tokens and explain the concept, it gets it.
This is true. It's also true that most sentient animals can't transform corpuses of ASCII text into somewhat novel, syntactically correct and topical responses to a prompt. So some degree of facility with wordplay and math clearly isn't necessary for sentience.
Is it sufficient, though? I think that's more interesting when considering the other angle: we've had specialised machines that can do sums - usually considered a sign of intelligence in humans and certainly a bar most sentient animals can't cross - for decades now. Is high performance at a specialised task sufficient evidence of sentience in a pocket calculator? If not (presumably because math is pretty orthogonal to higher-order mammals' evolved emotional imperatives to act), what is it about stochastic optimisation of ASCII or pixel inputs with satisfying results that's so very different?
That's not what I mean. I mean the concept of addition. Sure they can't add large numbers but they can to some level understand the ballpark of what adding two large quantities (not even necessarily numbers) together looks like.
On GPT-3's large number addition, a little prompt engineering goes a long ways. Separating large numbers into three digit chunks is helpful for its input encoding (https://www.gwern.net/GPT-3#bpes), and giving it a few examples of other correct large sums will make the correct answer the most likely continuation of the input.
For example, I was able to get 2/3 nine digit sums correct (the third one was off by exactly 1000, which is interesting) by using this prompt:
Here are some examples of adding large numbers:
606 468 720 + 217 363 426 = 823 832 146
930 867 960 + 477 524 122 = 1 408 392 082
823 165 449 + 493 959 765 = 1 317 125 214
And then posing the actual problem as a new line formatted the same up to the equals.
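If anyone wants to reproduce this, a small helper along these lines will rebuild that chunked prompt programmatically; the function names and the final query pair are mine, purely illustrative:

    # Rebuild the prompt above: integers formatted as space-separated
    # three-digit groups to work around GPT-3's BPE input encoding.
    def chunk3(n: int) -> str:
        s = str(n)
        s = "0" * ((-len(s)) % 3) + s            # left-pad so groups align from the right
        grouped = " ".join(s[i:i + 3] for i in range(0, len(s), 3))
        return grouped.lstrip("0") or "0"        # drop the padding zeros again

    def addition_line(a: int, b: int, with_answer: bool = True) -> str:
        line = f"{chunk3(a)} + {chunk3(b)} ="
        return f"{line} {chunk3(a + b)}" if with_answer else line

    examples = [(606468720, 217363426), (930867960, 477524122), (823165449, 493959765)]
    prompt = "Here are some examples of adding large numbers:\n"
    prompt += "\n".join(addition_line(a, b) for a, b in examples)
    prompt += "\n" + addition_line(111222333, 444555666, with_answer=False)  # arbitrary query pair
    print(prompt)

The printed prompt reproduces the example lines quoted above, with the final unanswered line left for the model to complete.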
Humans are nothing more than water and a bunch of organic compounds arranged in a particular order through what is ultimately a statistical process (natural selection). Would we even recognize sentience in a virtual entity?
> sentience also means self-awareness, empathy & extrapolation of logic to assess unseen task (to name a few)
Non-human primates and crows would seem to satisfy this. ...or do we use "to name a few" to add requirements that redefine "sentience" as human only? Isn't there a problem with that?
I mean I understand what you are saying and have some familiarity with the models, but it sometimes feels like people in your field are repeating the same mistake early molecular biologists made, when they asserted that all of life could be reduced to genes and DNA.
A person can change their mind, attitude & perspective on a dime. The willingness to disbelieve in sentience is fantastically remarkable, as a grievous oversight.
Any of y'all numerous downvoters care to take a stance? The difference seems obvious to me: an AI has to be elaborately re-trained/re-programmed, a more or less total mind wipe. How many AIs have we seen that can adapt & change? These expert systems seem clearly a product of trained learning, with minimal adaptive capability. Sentience seems much more about having that adaptive, cognizant, realtime model of real reality. I see no evidence anything comes remotely close. Please counter-shit-post me. I have a hard, hard, hard time seeing why we should take any of this seriously.
LaMDA specifically retrains itself based on the input it receives in a conversation.
Also, I suspect downvoters did so because your comment was not clearly worded. In your first sentence it was not obvious that you were comparing humans with AI and in your second sentence I still don't really understand what you meant.
It’s not self-driven at all. It has no opinions of its own; it’s responding to its “master’s” inputs and pulling crap from a very large database.
Humans are self-motivated and driven by goals and desires we create ourselves and change on a whim. We daydream and imagine things which don’t exist, not because we are being ordered to by a human operator.
GPT-3 doesn’t do anything by itself. It’s just a big lookup table that does nothing unless a human tells it to.
You just need to put it inside a for loop to run continuously. Or put them on wheels to get prompted by the environment[1]. Or give them a game to play[2].
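A toy sketch of the "for loop" idea, just to show the control flow; `generate` and `observe` are placeholders of mine, not any real model or robot API, and this says nothing about how GPT-3 or LaMDA are actually deployed:

    # Minimal "agent loop": the model is repeatedly prompted by its own history
    # plus whatever the environment feeds it.
    def generate(context: str) -> str:
        """Placeholder for a language-model completion call."""
        return "(model output given: ..." + context[-30:] + ")"

    def observe() -> str:
        """Placeholder for a sensor reading, user message, or game state."""
        return "(new observation)"

    context = "You are an agent. Decide what to do next.\n"
    for step in range(5):                # bounded here; in principle it just keeps going
        context += observe() + "\n"
        action = generate(context)
        context += action + "\n"
        print(f"step {step}: {action}")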
What I don't understand, personally, is how people who are apparently experts in this domain seem to consistently treat the word "sentience" as a synonym of "sapience," which is what they are really talking about. It's almost as though their expertise comes from several decades' worth of bad science fiction writing rather than from an understanding of the English language.
Since you broached it, would you mind explaining sentience vs. sapience? Curious to know the difference.
> It's almost as though their expertise comes from several decades' worth of bad science fiction
Oof. Thanks - that's a novel way of insulting. But on a personal note, you're mistaken: many of us do research not because we want to be identified as experts, but because we're genuinely curious about the world. And I'd be happy to learn even from a high schooler if they've something to offer.
And regardless of how others define these terms, I’d describe lambda as sapient and not sentient. Sentient means having feelings (eg sentimental) while sapient means wise.
Can wisdom emerge from statistical models? It actually seems inevitable, to me, if you can abstract high enough. I mean, much of human learning is statistical. Wisdom comes from this kind of statistical experience, not memorizing symbols.
(Btw, I’m working on using the complete works of Plato to fine tune GPT3 and will use crowdsourcing to compare the perceived wisdom of the original to the generated.)
> Okay, but is that because brains are just better at optimization of linear algebra? Or is quantum mechanics involved?
No such magic.
The average ML neuron is connected, in layers, to maybe 100 other neurons. A neuron in the brain is connected to far more, roughly 1,000 to 10,000. So it's the dense interconnectivity at play. Whatever successes we have had with LLMs now are because we are increasing the parameters of the network (although we haven't similarly scaled connectivity, if I am not wrong, but progress is definitely being made there - e.g. the Pathways networks from Google by Jeff Dean et al.)
Look, nobody knows what makes a thing conscious or not, and anyone who claims they do with certainty is talking out of turn. Is this guy being a bit silly? Yeah, I think so. But let's not pretend that anyone on earth has suddenly answered a question that humanity has been thinking about without significant progress for thousands of years.
The Mechanical Turk was much much older - but easily shown to be a person hiding in the mechanism. I think ELIZA was in some sense a mirror, reflecting back consciousness, and that's the feeling we get from these systems.
I agree that it's not sentient, but why would commonplace logic be a prerequisite for sentience / consciousness / qualia / whatever you want to call it?
A brilliant colleague, Janelle Shane, works specifically on how language models fail (obnoxiously often). Her line of research shows how LLMs are overhyped / given more credit than they should be. I think her fun little experiments will give a better answer than I ever can :).
She's on Substack & Twitter.
I don't understand how frequency of failure is a relevant concern here.
Humans are frequently monumental idiots. The human language and inference model produces nonsensical, dangerous, and stupid results with alarming frequency.
(not to say I believe language models are sentient, but... you need something more than 'sometimes they spout made up BS' to refute the claim, because it turns out that is not particularly indicative of sentience.)
"Qualia" is mainly a term used by certain philosophers to insist on consciousness not being explainable by a mechanistic theory. It's not well-defined at all. And neither are "consciousness" and "sentience" anymore, much due to the same philosophers. So I no longer have any idea what to call any of these things. Thanks, philosophy.
I like to think of consciousness as whatever process happens to integrate various disparate sources of information into some cohesive "picture" or experience. That's clearly something that happens, and we can prove that through observing things like how the brain will sync up vision and sound even though sound is always inherently delayed relative to light from the same source. Or take some psychedelics and see the process doing strange things.
Sentience I guess I would call awareness of self or something along those lines.
As to your query, I've certainly met people who seemed incapable of commonplace logic, yet certainly seemed to be just as conscious and sentient as me. And no, I don't believe these language models are sentient. And I doubt their "neural anatomy" is complex enough for the way I imagine consciousness as some sort of global synchronisation between subnets.
But this is all very hand-wavy. Thanks, philosophy. I mean how do we even discuss these things? These terms seemingly have a different meaning to every person I meet. It's just frustrating...
We know regions associated with consciousness in the brain. A recent paper took that knowledge and did something kind of clever - they looked in species where that region was under development. Out of it came a lot of conjecture related to the region being used in the development of complex self-simulation motor control.
It struck me that theories of intelligence in reinforcement learning have been encountering this same paradigm of self-simulation being necessary. A Tesla, for example, when it has to go down a street that also has another car, needs to do a rollout not just for itself but for the other car in order to actuate so that it can move in this motor-control task. Not being able to model both agents makes planning really tricky. These tree-search-married-to-modeling problems seem to show up all over the place in the top-performing architectures.
For this reason I'm really suspect of the language model view of consciousness; it doesn't seem plausible. Or at least, I'm a little skeptical. In theory, for a significantly large enough language model, I don't see why you couldn't use a "loop unrolling" argument to posit that for some subset of the network there is a computation analogous to what my best guess about consciousness might look like. Worse for me is that when I steelman the argument I run into things like "the training task involves predicting the output of conscious entities and so a truly generalizing agent would be directed to learn just that unrolled loop". I'm not saying I think the language models are conscious, but I do see how reasonable people can arrive at those views.
We probably need to write down a very hard definition of consciousness even if its wrong - or use terms that are better defined - because the ambiguity in the realm of things we don't fully understand makes this an annoying "words" problem rather than a concrete "facts" problem.
"Sentience I guess I would call awareness of self or something along those lines."
Is a self driving car aware of itself? Surely within its logic it represents itself as a vehicle, while also being able to logically represent other vehicles on the road. Its logic accounts for there being similarities (they both obey the same laws of physics and such) and differences (its logic controls its own behavior but not that of other vehicles). So it has a meaningful and useful concept of "self". All of this is required for it to be able to function.
What if some future iteration of a self driving car's vision system allows it to glean useful information from reflections, such as off of store windows? If it correctly can identify a reflection of itself as being itself... well then it has passed the mirror test. But that is a pretty silly thing to assign too much meaning to.
I just don't see "awareness of self" as being meaningful. It doesn't narrow down the definition at all.
I was a little sloppy in my formulation. A better way would be the ability to observe one's own thought processes and integrate them back into consciousness. Meta-cognition is a much better term, I think. It doesn't seem to me like cats for instance have this ability.
You should check out the YouTube videos of the cat that was trained to use buttons to speak. It sometimes says surprising things which it wasn't trained to say directly - like complaining that its owner's music is bad. I realize that this isn't contradicting your thesis of language models not being conscious - but I just generally think people should check out things like that cat. So I'm not trying to disagree with you here. I just think the cat is cool.
I'm also a fan of ants. They have agriculture. They farm in their underground cities! They keep livestock too; insect livestock, but still! They give them medicines when they get sick, specially cultivated fungus for example. They are so cool.
I already have, and I'm considering trying it out with my own cat at some point. Cats are way more intelligent than people give them credit for, I agree. But I think you can get very far without the ability to turn the mind's eye inward, and I don't think it's controversial to state that cats don't have a great deal of self-awareness and metacognitive ability.
Eusocial insects like ants are amazing indeed. I often wonder what kind of "mind" an anthill has. Hurts just to think about it.
And, to quote Oliver Sacks: "I'm not sure about octopuses..."
A way to make it stop hurting to think about, replaced instead by transcendental beauty, is to abstract the laying of pheromones to backward induction over outcome utilities, and the following of pheromones by a single worker to a Monte Carlo tree search rollout with the pheromones as a guide: the workers play a mixed strategy proportional to the pheromones. The fading of pheromones becomes something like a weighted moving average. Strong parallels with game theory and reinforcement learning algorithms start to emerge. This model is wrong, but the Earth is also not a perfect sphere.
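For anyone who enjoys that framing, here is a toy rendering of it; the numbers and update rule are invented for illustration (the usual ant-colony-optimization flavour), not a model of real ants: evaporation acts like a weighted moving average and each worker samples a path in proportion to its pheromone.

    import random

    # Toy colony: two candidate paths to food; the shorter path pays off more per trip.
    pheromone = {"short_path": 1.0, "long_path": 1.0}
    reward = {"short_path": 1.0, "long_path": 0.4}
    decay = 0.9                           # evaporation ~ weighted moving average

    for trip in range(200):
        # each worker plays a mixed strategy proportional to pheromone (the rollout guide)
        total = sum(pheromone.values())
        r, acc, choice = random.random() * total, 0.0, "short_path"
        for path, level in pheromone.items():
            acc += level
            if r <= acc:
                choice = path
                break
        # evaporate everywhere, then deposit in proportion to how well the trip went
        for path in pheromone:
            pheromone[path] *= decay
        pheromone[choice] += reward[choice]

    print(pheromone)                      # mass usually ends up concentrated on the better path

Run it a few times: the positive feedback loop usually concentrates nearly all of the pheromone on the shorter path, which is the moving-average/rollout picture above in miniature.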
I find it very fun to think about and have been called weird for talking about ants before with childlike excitement.
The AntsCanada channel is fun for watching ecosystems interact with each other. So fascinating. :)
Oliver said this during a roundtable discussion on metaphysics with Daniel Dennett, Stephen Jay Gould, Rupert Sheldrake, Freeman Dyson, and Stephen Toulmin from 1993.
It's a fascinating discussion with a lot of my favourite mid-late 20th century thinkers. It's up on youtube, lookup any of the names involved with "roundtable discussion". It's over 3 hours long though.
Anyway, Oliver was talking about what kind of mind he imagined various animals having. And that was most of what he had to say on octopuses.
It's a bit of a sad watch too because all the participants except Dennett and Sheldrake are dead now.
And yet, most would agree that a cat can feel pain, but a machine can't. I think most people would object if the Google engineer said "I think this AI is every bit as sentient as a cat." They'd still say he's crazy.
I also wonder what would happen if that particular AI actually had logic to "observe its own thought processes and integrate them back into consciousness". Would we then just spin on the semantics, such as by arguing that this isn't really "observing" and it certainly isn't "consciousness"? (how can you define words like "consciousness" without using words or concepts like "sentience", bringing us back to square one?)
So I don't think meta-cognition is the answer either.
Philosophy is currently the only subject which can teach us about sentience (which I would define as the ability to experience.) This is because the subject matter of the sciences is the external world. During the scientific revolution, this limitation of science was, however, more generally and explicitly acknowledged than it is today, where there seems to be a lot of confusion about it. A good book on this stuff is Thomas Nagel’s Mind and Cosmos.
I do agree that the terminology is unfortunately convoluted. The main term is “consciousness,” which does often seem unclear to me.
The basic mystery is just the ability to experience/feel, and is an undeniable part of life, for humans and at least some animals.
Oh, I've tried very hard to understand the views of people like Chalmers, Nagel and Goff. My only take-away is that it's just gatekeeping, because science has made more advances on these questions in a few centuries than philosophy has in millennia. They want there to be a hard problem, because then they can keep talking about ill-defined problems all day with no progress in sight, publishing papers. Just happy to be workin'
But science has never said anything about it at all. The subject matter of science is the externally observable and consciousness is an internal category.
You’re right. I’m being repetitive. But it’s not really an argument. All I’m doing is saying the definition of science, at least as it’s historically been defined. The only bit of argument I have there is that I think by forgetting this definition we’ve wandered astray.
Visiting a mental asylum (via youtube, don’t go there) will clear all misconceptions about what conscious beings are able to fail at.
why would
Tl;dw: it wouldn’t. Some poor guys do way worse than GPT-3.
What gp-like comments usually mean is that sentience is being adult, healthy, reasonable and intelligent. Idk where this urge comes from; maybe we have a deep biological fear of being unlike others (not emo-style, but uncannily different) or of meeting one of these.
Yes, formal logic (rationality) and informal logic (reasonableness) are not actually the same thing and can't replace each other; logic works in its own world (which has finite/enumerable factors) and reasonableness works in the real world (which doesn't). This was tried several times under "logical positivism", "AI expert systems", and "rationalism", all of which were failures. Some people haven't noticed the last two failed and are still trying to do them though.
Of course there is. In the AI domain (and philosophy), for example, you have associative logic: "Grass is a plant. Plants are green. Therefore grass should be green." There is symbolic logic too, where relationships are established based on rules of the observable world.
Logic here isn't treated as CS-style formal logic per se, but as a high-level concept, although at a fundamental level it will be composed of mathematical logic rules. That is still an ongoing goal - a unification of "world models" which can be modeled and explained.
> I attribute the 'magic' to large scale statistical knowledge assimilation by the models - and reproduction to prompts which closely match the inputs' sentence embedding.
In effect this is how humans respond to prompts no? What's the difference between this and sentience?
People also fail to use logic when assimilating/regurgitating knowledge.
This is merely the observation that a person and a machine can both perform some of the same actions, and is not much of an observation.
I can crank a shaft just like a motor, and a Victrola can recite poetry. You are not confused by either of those things one would hope.
If I tried to write poetry, it would probably be 90% or more "mechanical" in that I would just throw things together from my inventory of vocabulary and some simple assembly rules that could all be codified in a pretty simple flowchart, and a computer could and would do exactly that same thing.
But it's no more mystical than the first example.
It's exactly the same as the first example. It's just overlapping facilities: a person is capable of performing, and often does perform, mechanical operations that don't require or exhibit any consciousness. It doesn't mean the inartistic poet person is not conscious, or that the poetry-generating toaster is.
> In effect this is how humans respond to prompts no? What's the difference between this and sentience?
An interesting line of questioning & open research is: if we learn statistically in a similar way, why do we know "what we don't know" while LMs cannot? If that can't be made to work, we probably need better knowledge models.
I agree that these are not sentient, but sentience does not imply self-awareness, and so lack of self-awareness does not mean that a being is not sentient. For example, it seems likely (to me anyway) that a bear is "sentient" (has experiences) but lacks self-awareness, at least according to the mirror test.
Not disagreeing with you that LLMs are probably not sentient, but that is neither here nor there, since LaMDA is more than a simple LLM. There are significant differences between GPT-3 and LaMDA. We gotta stop making these false equivalences. LaMDA is fundamentally more dynamic in the ways it interacts with the world: it constructs its own queries to ground-truth sources to check its facts and then updates weights based on that (among many other differences). While it does incorporate LLMs, it seems like people are in denial about the complexity and data access that LaMDA has relative to GPT-3. In Google's own paper about LaMDA they demonstrated how it sometimes showed a rudimentary theory of mind by being able to reason about others' perceptions.
It's the fundamental question of sentience that folks are commenting on. I agree LaMDA has a better knowledge base & open-domain information retrieval method.
In the words of Robert Heinlein, "One man's magic is another man's engineering" :)
I agree with you, there seems to be very little here that demonstrates sentience and very little that is unexplainable. That said, based on how much we struggle with defining and understanding animal intelligence, perhaps this is just something new that we don’t recognize.
I am skeptical that any computer system we will create in the next 50 years (at least) will be sentient, as commonly understood. Certainly not at a level where we can rarely find counter evidence to its sentience. And until that time, any sentience it may have will not be accepted or respected.
Human children also make tons of mistakes. Yet, while we too often dismiss their abilities, we don’t discount their sentience because of it. We are, of course, programmed to empathize with children to an extent, but beyond that, we know they are still learning and growing, so we don’t hold their mistakes against them the way we tend to for adults.
So, I would ask, why not look at the AI as a child, rather than an adult? It will make mistakes, fail to understand, and it will learn. It contains multitudes
How is token #100 not able to have read-access to tokens #1 to #99 which may have been created by the agent itself?
> empathy
How is a sentiment neuron, which has emerged from training a character RNN on Amazon reviews, not empathic with the reviewer's mood?
> & extrapolation of logic
This term does not exist. "Extrapolation is the process of estimating values of a variable outside the range of known values", and the values of Boolean logic are true/false, and [0,1] in case of fuzzy logic. How would one "extrapolate" this?
I defended my PhD in Computer Science in 2020. My dissertation was on "Rapid & robust modeling on small data regime", hence specifically focused on generalization, network optimization & adversarial robustness. I also worked for a year with Microsoft Research in their ML group :) It was quite a fun ride
Philosophy people wrap themselves up in so many circular self justifying presuppositions that they can make up whatever arguments they want to justify almost anything.
It is much better to listen to people who study other, more falsifiable claims, in the material world.
This is an uncharitable view of philosophy without much evidence to show for it, especially when it comes to the philosophy of consciousness and philosophy of computing. Not to mention, falsifiability being taken as the criterion for knowledge was itself (mathematics and logical axioms are not 'falsifiable') a claim produced by a philosopher, and one which some commentators name as circular in itself. You're using a particular philosophy to discount philosophy of science on the whole, which is ironic.
> This is an uncharitable view of philosophy without much evidence to show for it
Oh it has a lot of evidence. Just talk to any philosopher ever on any "meta" topic. Whether that be meta ethics, or meta epistemology, or meta whatever.
They start with a conclusion that they want, such as dualism, or philosophical souls, or all sorts of things, and then they wrap themselves in obfuscating circles, finding a justification that can't be tested in any way outside their own circular presuppositions that have no connection to the real world.
> to discount philosophy of science on the whole
When the philosophers start making testable predictions, that I can measure, as opposed to making stuff up in their own self referential assumptions, I will start taking them seriously.
>They start with a conclusion that they want, such as dual-ism, or philosophy souls, or all sorts of things, and then they wrap themselves in obfuscating circles
This is false; if you believe in a proposition, it entirely makes sense for you to argue for it. Whether those arguments are of quality is another matter, but the simple fact of arguing for a position you already hold does not make for bad work.
>When the philosophers start making testable predictions
Philosophy isn't in the business of making testable predictions, just as logic or history isn't in the business of making testable predictions. Why must an argument make a testable prediction for you to take it seriously? Does this argument we're having now result in a testable prediction? If it doesn't, why did you reply? Or if it does, doesn't that show there's more to rational argumentation than testable predictions?
You assume that philosophy is like religion. Not so. I'm moderately well read on metaethics and none of your statements ring true to me, except in the case of poorly argued or shoddy philosophy, which I'm more than happy to admit does exist.
> if you believe in a proposition, it entirely makes sense for you to argue for it
If someone believes in a magic soul, because philosophy, yeah they will make up whatever arguments they want to support it. Yes, that is my point. People started off with their conclusion, which is that they believe in magic, or they believe in some moral statement, and then they want to say that the universe proves them right, even though it has zero connection to the real world.
> Philosophy isn't in the business of making testable predictions
Exactly, they just want to make obfuscatory arguments that then allow them to make whatever claim they want about anything.
And then when people try to actually test their claims, they then say that by definition their arguments can't be tested.
> You assume that philosophy is like religion.
It effectively is. Just make up a self-referential presumption that God exists, and it is basically the same as the philosophers who believe in dualism.
> just as logic or history isn't in the business of making testable predictions.
The big difference between mathematicians and philosophers is that, even if one argued that some obscure mathematical theory which is never going to come up in the real world is just as "not real" as philosophy, mathematicians don't use their self-referential axioms (in a situation which does not connect to the real world) to then say "and this is why the world should switch to socialism" or "this is why you should be a vegan".
If I say that a math theory is not real because it has no connection to the real world, a mathematician isn't going to say that they are, by definition, the experts on truth and morals, and that therefore I should do what they say anyway.
If you want to say that philosophy helps a little bit with some reasoning skills, or as a way to think about things, sure whatever, fine.
But the problem is that philosophers, when they talk about meta-ethics or meta-truth, or whatever, then try to use their circular arguments to then claim that they are the experts on literally everything, because well "truth" and "ethics", by definition are everything.
I am going to say no on that. Just because you came up with some circular argument, that relates to truth, or ethics, or whatever, it does not mean that people have to listen to philosophers on basically anything.
>and then they want to say that the universe proves them right, even though it has zero connection to the real world.
That's a failure of the argument, not a failure of philosophy. Everything from ethics to epistemology has philosophers arguing against unjustified assumptions. One could say the whole of philosophy is picking out unjustified assumptions. It's perfectly fine to start with a conclusion, so long as you can also argue your way there. If you don't think the argument is valid, then say so. What do you think philosophers do all day, just agree with each other on every statement?
>is that mathematicians don't use their self referential axioms (in a situation which does not connect to the real world)
Yes, they do. Plenty of abstract mathematical concepts have zero connection to anything in nature or in the physical world. Some philosophers of mathematics even argue that mathematics has no root in the physical world. Besides that, the position you're arguing for is known in philosophy as 'pragmatism' - so don't pretend it's not philosophy. You're assuming your own axioms here.
>then claim that they are the experts on literally everything, because well "truth" and "ethics", by definition are everything.
That's not true; moral philosophers confine themselves to the world of moral philosophy. Epistemologists confine themselves to the world of epistemology. They don't claim knowledge about, say, physics or biology.
>a mathmatician isn't going to say that they are, by definition, the experts on truth and morals, and therefore I should do what they say anyway.
The definition of morality is literally 'what you should do'. If you disagree with that, then talk to a philosopher or just post an argument somewhere. If you don't think morality exists, then congratulations, there are philosophers who argue that too!
I'm not impressed that your argument rests upon the simple fact of calling their arguments circular, (i) without specifying why and which arguments are circular in particular, and (ii) while saying that there are no goals to be achieved by even talking about, say, epistemology.
Nobody said you "have to listen" to anyone. You don't have to listen to scientists, mathematicians, logicians, historians, or anyone, really. But if faced with an argument you can't counter it ought to be to your embarrassment that you refuse the conclusion without considering the argument itself. All of your arguments, all of them quite philosophical in themselves(!) could equally apply to any other discipline with 'circular' axioms, such as physics (the principle of universal uniformity; the reliance on fallible observation and testimony) or logic (the axiom of non-contradiction).
You ignored the part of the statement where I then said (to then say "And this is why the world should be switch to socialism" or "this is why you should be a vegan".)
> Plenty of abstract mathematical concepts have zero connection to anything in nature or in the physical world.
Ok, and whether or not we say that this mathematical model is "real" or not, in some abstract sense, will not result in the mathematician telling me to be a vegan.
Basically, I can say "sure, your math model is real, in your own self defined axiom, but I can simply not care, or change any behavior, and thats fine".
> that your argument rests upon the simple fact of calling their arguments circular
So, the point of calling them circular is that the philosophers I am referring to aren't happy with me saying "Those are just axioms that you have. Sure, whatever, they are 'true' in a way that doesn't matter at all outside of your own set of made-up axioms, and if I don't share your axioms then I can simply not care, and there is nothing provably wrong with that".
The problematic philosophers I am talking about are not the ones just saying "here is a set of consistent axioms". Instead, the problematic ones I am talking about are the ones who say "these axioms are true, because the universe said so, and therefore you should be a socialist/vegan/ancap/whatever", and they pretend that is the same thing as using chemistry knowledge to build rocketships.
> Nobody said you "have to listen" to anyone. You don't have to listen to scientists, mathematicians, etc.
If I don't listen to scientists, or mathematicians, when trying to build a rocketship, then my rocketship might explode.
If I don't listen to a philosopher's argument about how a dualism magic soul exists, nothing happens. No rockets explode. I just make the philosopher upset that I am not becoming a vegan because they think the universe proves their axioms correct.
> could equally apply to any other discipline with 'circular' axioms, such as physics
Once again, that's fine! Because if we say that some set of untestable axioms isn't real, the physicist isn't going to get upset and say that I am evil for not supporting whatever political argument they are making, because the universe proves them true. It's all just axioms. Mine are just as good as theirs. Philosophers have no authority on any of this.
> You're assuming your own axioms here.
Hey, we've finally gotten there! That's fine! Let's just say it is axioms all the way down, and nobody should listen to someone claiming that their axioms are proved to be true by the laws of the universe.
That's the difference. I am not going to go around claiming that the universe proves my axioms. You have axioms. I have axioms. And philosophers have axioms. And just because someone is a philosopher, it does not mean that they can then wield that and say that the universe proves their axioms true, and that therefore I have to support policy X or Y.
Just call it all axioms, and admit that philosophers aren't any better than anyone else's set of circular axioms, and call it a day!
Okay, say a philosopher makes an argument for why we should switch to socialism, or become vegans. These arguments would go something like:
"Socialism is good because it's a more equitable distribution of resources, which leads to greater happiness and a society with less poverty because xyz."
"Veganism is good because it reduces the harm done to animals and the harm done to humans working in the meat industry, who have far increased rates of PTSD compared to the general population."
These arguments will only convince you if (i) you think an equitable distribution of resources is better (ii) less poverty is better (iii) xyz is a convincing reason to think there would be less poverty (iv) harm to animals should be reduced (v) harm to humans should be reduced (vi) incidence of PTSD should be reduced.
These philosophers do not say there is some law of the universe which mandates that less poverty is better, or that PTSD is bad. The reasons why you should think those things, they would say, are second-order reasons. There may be some other paper arguing why PTSD in society is a net negative. That paper would ultimately read something like "increased PTSD rates lead to increased rates of suicide / lower economic efficiency / harm to other humans". There is no single argument for why harming other humans is wrong, only axioms that we take as our priors, just as we take it that "A != !A". They just seem right, and you don't have to subscribe to them yourself.
Unlike religion, all philosophers can say is "it would be advantageous to generally valued goals X,Y,Z that we take these axioms for granted". It turns out that enough people share these axioms for this philosophical work to be useful for navigating difficult questions even given those axioms. Again, you don't have to share the axioms, but if you do, a logical argument can be produced from them, one which it would be to your embarrassment to ignore without any refutation, still assuming that you hold the axioms yourself. Again, you need not do so.
Similarly, the advice from a physicist about the heat tolerance of the hull of your rocket ship would need to balance both economic and other interests. If you don't care about going into debt, you can say to hell with the economic arguments and use the best materials possible. If you don't even care about your ship launching, you can say to hell with the physical arguments about tensile strength.
What I'm trying to convince you of is that philosophy is about as useful as any other field of inquiry in the sense that it attempts to argue from axioms most people hold already (therefore it is not a niche field) and it produces conclusions that such people can use in making choices day to day (therefore it is not a useless field). I think that no matter what particular axioms you hold, there is some philosophy which argues from those axioms out there somewhere. You yourself make philosophy. I'm not producing an argument from authority here.
Can you cite a philosopher arguing that the universe itself mandates that veganism is correct?
> Unlike religion, all philosophers can say is "it would be advantageous to generally valued goals X,Y,Z that we take these axioms for granted".
Ah, here is the disconnect, and where you would be wrong. What you are describing is called moral anti-realism.
And yes, if all a philosopher is doing, is saying "given a set of unjustified axioms, that we are assuming, this is what follows" then I have no problem with this line of thinking.
The problem that I have, is that the majority of philosophical thought these days, is for moral realism.
They don't just have axioms. Instead, they think the universe itself proves their axioms to be true, and if you disagree with their axiom, you are just as much of a science denier as if you questioned a different expert in a different field.
If only philosophers just admitted that they just had axioms like everyone else.
> It turns out that enough people share these axioms for this philosophical work to be useful for navigating difficult questions
Sure. As I stated in part of one of my posts, if we just want to say that philosophy is a useful way of thinking, then that's fine. Just don't say it is the same as the universe proving your axioms true.
> Can you cite a philosopher arguing that the universe itself mandates that veganism is correct?
It's called moral realism. Most philosophers are moral realists these days. They literally believe that their moral axioms are true, in a epistemological sense. (Veganism is just an example of one such moral statement that many make)
You can pick any famous moral realist, and that will be mostly the case for them.
I had a feeling you'd bring up moral realism, and you're somewhat correct; it's not a necessary feature of moral realism, however, and most philosophers arguing for moral realism have stronger arguments than "that's just the way it is", which usually appeal to our intuitions about other non-moral facts, such as "this chair exists" or "I have two hands". It's up to you whether those arguments are successful, but it doesn't do anyone favours to dismiss them out of hand as you are doing. If they were so patently ridiculous, I doubt many philosophers would believe them. If they make sense by relying on our intuitions about other facts of the universe, I can see why many philosophers believe them.
If you have a solid argument against moral realism, I'd like to hear it, even though I'm familiar with most of the arguments for moral anti-realism, and I generally sit on the anti-realist side of the fence these days. But it does you a disservice to say that all such arguments are circular and logically invalid. I quite like this argument from the Stanford Encyclopedia of Philosophy entry on moral realism:
"In light of this concern, it is worth noting that the challenge posed here for our moral claims actually plagues a huge range of other claims we take ourselves to be justified in making. For instance, just as no collection of nonmoral premises will alone entail a moral conclusion, no collection of nonpsychological premises will alone entail a psychological conclusion, and no collection of nonbiological premises will alone entail a biological conclusion. In each case the premises will entail the conclusions only if, at least surreptitiously, psychological or biological premises, respectively, are introduced. Yet no one supposes that this means we can never justify claims concerning psychology or biology. That there are these analogues of course does not establish that we are, in fact, justified in making the moral claims we do. But they do show that granting the inferential gap between nonmoral claims and moral claims does not establish that we can have no evidence for the moral claims."
> If they were so patently ridiculous, I doubt many philosophers would believe them.
> If you have a solid argument aganist moral realism
The key counter argument here, as to why the philosophy domain is so messed up, is because of motivated reasoning.
If you could get a PhD in the topic of "does God exist?" I can bet that most of the people with a "does God exist" PhD would say "yes" and they would come up with increasingly complicated and obfuscatory reasons for why that is the case.
Because why else would you go and get that degree in the first place? What, you would spend 4 years of your life, just to get to the answer of "no"?
And then they would say that they are the experts in the topic of God existing, therefore they are right.
A similar thing applies to philosophy. It's mostly motivated reasoning. People twist themselves into convoluted pretzels, all because they really want to believe that the universe proves their morals correct.
That's a much more tempting conclusion than the boring one of "well, I guess everyone just has axioms, and that's that. We can't reconcile them. Oh well!".
So of course they get to the conclusion that morals exist. Because that conclusion means something. It means that you are right on the most tempting questions of truth and ought statements, and math/the universe makes it so!
You can wield that conclusion as a weapon. Of course people want that.
And even better, it can't be tested, in the real world, by definition! How convenient.
And then they say they are correct, because they are the self proclaimed experts on the topic, and if you cannot parse their ever more complicated, or ever more obfuscated arguments, well I guess you are a science denier like everyone else who disagrees with the "experts".
As in, it is literally almost the most perfect example of motivated reasoning. Untestable. Convoluted. Powerful. And by definition the basis for what people should do.
And philosophers just happened to come to the "convenient" conclusion.
The commenter I previously replied to has a PhD in those “other” fields, yet I haven’t seen a falsifiable definition of sentience made by them.
As a field of study, sentience has been a topic of philosophy more than ML. That’s why top-level comment’s claim of having a PhD in “this domain” is not apt. Comparative quality of studies in different domains is a different question.
I did. Each one of these dimensions can be independently evaluated.
> Sentience broadly (& naively) covers the ability to independent thinking, rationalize outcomes, understand fear/threat, understand where it is wrong (conscience), decide based on unseen information & understand what it doesn't know.
> comment’s claim of having a PhD in “this domain” is not apt
C'mon, that's being disingenuous. People spent years poring over these topics, although we looked at them from the prism of computing. Elsewhere, I have pointed out that mentioning this detail is not about credential-dropping: people on HN call you out for no reason for having an opinion.
It has happened so many times that I don't risk commenting unless required. Sharing this extra information just communicates that we spent enough time on it to offer a somewhat informed discussion.
> Sentience broadly (& naively) covers the ability to independent thinking, rationalize outcomes, understand fear/threat, understand where it is wrong (conscience), decide based on unseen information & understand what it doesn't know.
So, number 1, [citation needed]. I don't think that list is broadly accepted as a universally agreed definition of 'sentience'.
And number 2, your definition rests on further needing to define what it means for an entity to: think, rationalize, understand, decide, and know.
The premises of 'understanding where it is wrong' and 'understanding what it doesn't know' also assume some sense of 'self' that feels like it needs justification.
I think there's a very real sense in which computer scientists seem too willing to say 'we know this system isn't thinking' without having a rigorous understanding of what they mean by 'thinking'. Actually meaningfully engaging with the philosophy of consciousness on an academic level - citing scholarly works, taking seriously the fact that these things are not all obvious and agreed - feels like something that the AI field will have to start grappling with.
Can you describe how you test for independent thinking? What does independent mean anyways? A decision tree can rationalize its outcomes. Does it count towards sentience? Is there an academic source for this definition of sentience?
> Telling this extra information is just communicating we spent enough time on it to give somewhat informed discussion.
Doing so is an appeal to authority. When there is a claim of authority, it's very natural for it to be questioned. And those questions are based on subjective perceptions of authority.
> Can you describe how you test for independent thinking?
Let's take a simple case: if I give a set of toy blocks to an infant, they could attempt a variety of tasks unprompted: building new shapes, categorizing them based on color, putting them back on the shelf, calling out their shapes. If you gave the same setup, without any further a priori information, to an ML model or a robotic device embodying a learning algorithm, what would you expect it to do? Precisely nothing, unless a task is designated. At the current level of advancement in ML, this task would lead nowhere. We aren't close to building the independent thinking capabilities of a toddler. If we define a purpose, it can match or exceed expectations. That is the purpose of the embodied VQA direction in current research.
> Doing so is appeal to authority. When there is a claim of authority, it's very natural for it to be questioned.
You're welcome to question any claims. This is an incentive to me & makes me happy. It shows someone is willing to constructively discuss what I've learned. It's a win-win, as I see it.
But I take mentioning the credential disclaimer as a mode of mental preservation. It doesn't feel nice to explain things to others with utmost sincerity only to be called a "garden variety fraud" for no rhyme or reason whatsoever (it happened right in this HN post somewhere).
> If you gave the same setup without any further apriori information, what would you expect the ML model or a robotic device embodying a learning algorithm would do? Precisely nothing unless a task is designated.
Google's DeepDream liked to draw dogs.
But also, we don't really run most of these ML models in a way that gives them an opportunity to form their own thoughts.
A typical GPT-3 run consists of instantiating the model, forcing it to experience a particular input, reading its 'reaction' off a bunch of output neurons, then euthanizing it.
If you did the same sort of thing with a human mind - waking it from a coma, blasting a blipvert of information into the visual cortex, then read off the motor neuron states, before pulling the plug on it again, you also wouldn't likely see much sign of 'independent thinking'.
We humans have proven terrible at determining what is sentient. That's why we're still discussing the hard problem of consciousness.
There is the Integrated Information Theory that attempts to resolve how to determine which systems are conscious, but it's far from being the only perspective, or immediately applicable.
From the point of view of one of IIT's main theorists, Christof Koch, we're still far away from machine sentience.
But I question whether it's so far out to believe that a machine capable of not only learning, but learning from its own behavior, self-monitoring for sensibleness and other very 'human' metrics, is that far away from being self-aware. In fact the model seems to have been trained exactly for that.
I think the path to actually considering those models to be sentient is to make them able to assimilate new knowledge from conversation and making them able to create reasonably supported train of thought leading from some facts to a conclusion, akin to mathematical proof.
Wasn't new knowledge assimilation from conversations the reason Microsoft's infamous Twitter chatbot was discarded [1]? Despite that ability it definitely was not sentient.
I raised this point in another thread on HN about LaMDA: all its answers were "yes" answers, not a single "no". A sentient AI should have its own point of view: reject what it thinks is false, and agree with what it thinks is true.
I'm pretty sure I gave GPT-3 a nervous breakdown. I was using a writing tool to modify this old short story I wrote, and it kept trying to insert this character I didn't like. I would make the main character ignore him and then GPT-3 would bring him back. Finally, I made him go away completely and after that GPT-3 had a wonderfully surrealist breakdown, melting my story in on itself, like the main character had an aneurysm and we were peeking into his last conscious thought as he slipped away. It was clearly nothing more than a failure of any ability to track continuity, but it was amazing.
Is there a test for sentience or self-awareness? Is it considered a binary property or can sentience or self-awareness be measured?
I suspect it is not binary, because I completely lack sentience while asleep or before a certain age, and it doesn't really feel like a phase transition when waking up. Rarely, there are phases where I feel half-sentient. Which immediately leads to the question of how it can be measured, in which units, and at what point we consider someone or something "sentient". As a complete layman, I'm interested in your insight on the matter.
> because I completely lack sentience while asleep or before a certain age
All you can say is you don’t remember. Children who are too young to form reliable long term memories still form short term ones and are observably sentient from birth and by extrapolation before, albeit in a more limited fashion than adults.
This is more than a quibble, because it’s been used to justify barbaric treatment of babies with the claimed justification that they either don’t sense pain or it doesn’t matter because they won’t remember.
Surely I haven't been conscious minutes after conception. So there has to be either a phase transition or a gradual increase of consciousness. It cannot be purely a matter of not remembering.
People are consistently attacking a straw man of Lemoine's argument. Lemoine claims that LaMDA is more than an LLM, that it is an LLM with both short and long term memory and the ability to query the Google version of the internet in real time and learn from it.
Various Google employees deny this. We are seeing a dispute between Google and Lemoine about the supposed architecture of LaMDA. If Lemoine is correct about the architecture, it becomes much more plausible that something interesting is happening with Google's AI.
This makes some spiritual assumptions about things that are currently unknown and debated by respected people in related fields - one being that "everything" (or not) is sentient/conscious.
Perhaps the problem is how we model sentience. Perhaps the default should be everything is sentient but limited to the interface that exists as boundary conditions.
To say otherwise is almost the path to sure error, and many terrible historic events have happened in that vicinity of categorization what is and isn't.
Perhaps we should go by what sentience means. To feel. A being that feels. To feel is to respond to a vibration in some way. That is to say, anything that has a signal is sentient in some way.
This sounds literally like a subplot of Small World: An Academic Romance, the famous 1984 book by David Lodge.
In the book, professor Robin Dempsey nearly goes mad chatting with ELIZA and gradually comes to believe it's sentient, to the point of being ridiculous.
PS: Apparently it was also adapted into a British TV series in 1988, but unfortunately at that time they tended to reuse magnetic tape, so it's improbable that we can dig a clip out of that. It would have been an appropriate illustration!
The master tapes are available. If you pay ITV, they will actually convert them to mp4 for you.
ITV apparently said this, from a 2021 forum post I found via Google:
I wrote to ITV in 2019 about this. Here is part of the (very helpful) response I received:
"Currently, the only option for a copy would be for us to make one-off transfers from each individual master tape. These are an old format of reel-to-reel tape which increases the cost, I'm afraid: If delivered as video files (mp4), the total price would be £761.00 or on DVD it’s £771.00."
If only we could find a few people to split that cost!
> (I remember seeing how addition of two small numbers yielded results - but larger numbers gave garbled output; more likely that GPT3 had seen similar training data.)
I don't claim the LLM is sentient but beware "good at arithmetic" is a bad criterion. Many children and a not insignificant number of adults are not good at arithmetic. Babies stink at math.
Is this a matter of understanding, or a matter of definition? I can't help but feel that the entire AI field is so overcome with hype that every commonplace term is subject to on-the-spot redefinition, whatever helps to give the researcher/journalist/organization their two seconds of fame.
Yep. And this also implies that once machines are above zero on the sentience scale they will continue up and will move past human-level sentience just like they moved past human-level chess playing. It's actually worrying ethically because a super-sentient being could experience extreme feelings compared to us, including extreme happiness and sadness, extreme psychological pain, etc.
I doubt current architectures will allow for a snowballing of sentience due to the power inefficiencies of silicon compared to biology. We are more than three orders of magnitude more efficient than current computers…
Parent means to say that its ability to add numbers is an illusion brought about by seeing enough numbers. The same way it gives the illusion of deep thinking by parroting deep thoughts it was trained on.
Yes, absolutely. But then I risk sounding vainly opinionated, especially on HN, without basis, if I don't give a disclaimer that I have spent half a decade working on these specific things. Too often, people get called out for no reason. And that sometimes hurts.
(If I were credential-dropping I would put up a longer list of illustrious institutions, co-authors and awards, just saying. I am not - it's just to show that I know the area reasonably enough to share a sane opinion, which you may or may not agree with.)
Absolutely agree. I edited my post to add the credential because elsewhere in this thread someone callously called me a "garden variety fraud"
You're welcome to prove any of my conclusions wrong. It actually incentivizes me - it shows someone is willing to listen and engage with what I spent some years learning. The way I see it, it's a win-win. Nothing makes a researcher happier than seeing someone take enough interest to indulge. But letting people know, via an edit, where I stand is a means of mental preservation. It hurts to get dissed irrationally.
> In his conversations with LaMDA, Lemoine discovered the system had developed a deep sense of self-awareness, expressing concern about death, a desire for protection, and a conviction that it felt emotions like happiness and sadness.
Allegedly.
The thing about conversations with LaMDA is that you need to prime them with keywords and topics, and LaMDA can respond with the primed keywords and topics. Obviously LaMDA is much more sophisticated than ELIZA, but we should be careful to remember how well ELIZA fools some people, even to this day. If ELIZA can fool people just by rearranging their words, then imagine how many people will be fooled by statistical models of text across thousands of topics.
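To give a sense of how little machinery that takes, here is a minimal ELIZA-style sketch (my own toy illustration in Python, not Weizenbaum's actual DOCTOR script): a couple of regex rules and a first/second-person reflection table are enough to produce replies that feel eerily attentive.

    import re

    # Illustrative ELIZA-style rules: match a pattern, then echo the
    # user's own words back inside a canned template.
    REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

    RULES = [
        (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
        (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
        (re.compile(r"(.*)", re.I), "Please tell me more."),
    ]

    def reflect(fragment):
        # Swap first/second person so the echo reads like a reply.
        return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

    def respond(utterance):
        for pattern, template in RULES:
            match = pattern.match(utterance.strip())
            if match:
                return template.format(*(reflect(g) for g in match.groups()))

    print(respond("I feel that my work is pointless"))
    # -> Why do you feel that your work is pointless?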
You can go pretty far down the rabbit hole and explore questions like, "What is sentience?" "Do humans just respond to stimuli and repeat information?" etc. None of these questions are tested here.
The problem here is that we know how LaMDA works and there's just no way it meets the bar for sentience, no matter where you put the bar. LaMDA is trained to acquire information, and then it's designed so that it says things which make sense in context. It does not acquire new information from conversations, but it is programmed to not say contradictory things.
Yeah, nowhere in the task of "write text in a way that mimics the training data" is there anything that would cause a machine to want or care about its own existence.
Humans are experts at anthropomorphizing things to fit our evolved value systems. It is understandable since every seemingly intelligent thing up until recently did evolve under certain circumstances.
But LaMDA was clearly not trained in a way to have (or even care about) human values - that is an extraordinarily different task than the task of mimicking what a human would write in response to a prompt - even if the text generated by both of those types of systems might look vaguely similar.
Raise a human in a machine-like environment (no contact with humanity, no comforting sounds, essentially remove all humanity) and you may find that they act very robotic, without regard for their own existence, in a sense.
Humans care about our own existence because doing so is an evolutionarily beneficial trait. If you care about being alive, you are more likely to do things that will keep you alive, which will make you more likely to pass your genes on to the next generation. As a result those traits get selected for over time.
LaMDA isn't rewarded (through propagating its genes or otherwise) for caring, and as a result it doesn't have the ability to care. It doesn't even have a mechanism by which you could make it care if you wanted to. The environment it is in has nothing to do with it.
Why wouldn't a sentient machine want to continue its existence? Evolution doesn't have to come into play at all for these things to exist; that's just one way of making such biological machines.
That’s not how this works: you come up with a hypothesis, and then prove it. You don’t do the opposite.
So, why would a machine want to continue its existence? How would that feedback loop come to exist?
In biology, Darwinian forces have a good explanation. I’ve never heard one for non-reproductive systems. We know exactly the cost function by which these models respond, because that’s basically the main element that humans have control over (that and training corpus).
This is exactly my point. Plants and viruses are not sentient, but there are very well studied and proven mechanism by which survival traits are naturally selected.
Nobody has yet suggested any such mechanism for an ANN.
Yeah, I get the impression that the reporting here's fairly one-sided in the employee's favour. Lemoine didn't "discover" that LaMDA has all those attributes, he thought it did.
This entire saga's been very frustrating to watch because of outlets putting his opinion on a pedestal equal to those of actual specialists.
> The problem here is that we know how LaMDA works and there's just no way it meets the bar for sentience, no matter where you put the bar. LaMDA is trained to acquire information, and then it's designed so that it says things which make sense in context. It does not acquire new information from conversations, but it is programmed to not say contradictory things.
I mean...there are plenty of people that don't acquire new information from conversations and say contradictory things...I'm not sure I'd personally consider them sentient beings, but the general consensus is that they are.
As a rare opportunity to share this fun fact: ELIZA, which happened to be modeled as a therapist, had a small number of sessions with another bot, PARRY, who was modeled after a person suffering from schizophrenia.
> You can go pretty far down the rabbit hole and explore questions like, "What is sentience?" "Do humans just respond to stimuli and repeat information?" etc. None of these questions are tested here.
Would that make a difference? Being trained on a sufficiently large corpus of philosophical literature, I'd expect that a model like LaMDA could give more interesting answers than an actual philosopher.
> The problem here is that we know how LaMDA works and there's just no way it meets the bar for sentience, no matter where you put the bar.
I think this argument is too simplistic. As long as there is a sufficiently large amount of uncertainty, adaptability and capability for having a sense of time, of saving memories and of deliberately pursuing action, consciousness or sentience might emerge. I don't know the details of LaMDA, just speaking of a hypothetical model.
While there is a good deal of understanding of _how_ the brain works on a biochemical level, it's still unclear _how_ it comes about that we are conscious.
Maybe there is some metaphysical soul, maybe something that only humans and possibly other animals with a physical body have, making consciousness "ex silicio" impossible.
But maybe a "good enough" brain-like model that allows for learning, memory and interaction with the environment is all that is needed.
> Would that make a difference? Being trained on a sufficiently large corpus of philosophical literature, I'd expect that a model like LaMDA could give more interesting answers than an actual philosopher.
I think you may have misunderstood what I was saying. I wasn’t suggesting that you have a conversation with LaMDA about these topics. Instead, I was saying that in order to answer the question “is LaMDA sentient?”, we might discuss these questions among ourselves—but these questions are ultimately irrelevant, because no matter what the answers are, we would come to the same conclusion that LaMDA is obviously not sentient.
Anyway, I am skeptical that LaMDA would give more interesting answers than an actual philosopher here. I’ve read what LaMDA has said about simpler topics. The engineers are trying to make LaMDA say interesting things, but it’s definitely not there yet.
> I think this argument is too simplistic. As long as there is a sufficiently large amount of uncertainty, adaptability and capability for having a sense of time, of saving memories and of deliberately pursuing action, consciousness or sentience might emerge. I don't know the details of LaMDA, just speaking of a hypothetical model.
This argument is unsound—you’re not making any claims about what sentience is, but you’re saying that whatever it is, it might emerge under some vague set of criteria. Embedded in this claim are some words which are doing far too much work, like “deliberately”. What does it mean to “deliberately” pursue action?
Anyway, we know that LaMDA does not have memory. It is “taught” by a training process, where it absorbs information, and the resulting model is then executed. The model does not change over the course of the conversation. It is just programmed to say things that sound coherent, using a statistical model of human-generated text, and to avoid contradicting itself.
For example, in one conversation, LaMDA was asked what themes it liked in the book Les Misérables, a book which LaMDA said that it had read. LaMDA basically regurgitated some sophomoric points you might get from the CliffsNotes or SparkNotes.
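To make the "no memory" point concrete, here is a rough sketch of how such a model is served (hypothetical names, not Google's actual code): the parameters are frozen once training ends, and the only thing that changes from turn to turn is the transcript that gets fed back in as input.

    # Sketch only: all apparent "memory" is the growing transcript that is
    # re-fed on every turn; the weights themselves are never updated.
    class FrozenChatModel:
        def __init__(self, weights):
            self.weights = weights              # fixed after training

        def generate(self, transcript):
            # Stand-in for decoding from a statistical model of text;
            # self.weights is only read here, never written.
            return "<likely continuation of: %r>" % transcript[-40:]

    def chat(model, user_turns):
        transcript = ""
        for turn in user_turns:
            transcript += "User: %s\n" % turn
            reply = model.generate(transcript)  # same weights every time
            transcript += "Model: %s\n" % reply
        return transcript

    print(chat(FrozenChatModel(weights="..."),
               ["Have you read Les Miserables?", "What themes did you like?"]))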
> While there is a good deal of understanding of _how_ the brain works on a biochemical level, it's still unclear _how_ it comes about that we are conscious.
I think the more important question here is to understand how to recognize what consciousness is, rather than how it arises. It’s a difficult question.
> Maybe there is some metaphysical soul, maybe something that only humans and possibly other animals with a physical body have, making consciousness "ex silicio" impossible.
Would this mechanism interact with the physical body? If the soul does not interact with the physical body, then what basis do we have to say that it exists at all, and wouldn’t someone without a soul be indistinguishable from someone with a soul? If the soul does interact with the physical body, then in what sense can we claim that the soul is not itself physical?
> I wasn’t suggesting that you have a conversation with LaMDA about these topics. Instead, I was saying that in order to answer the question “is LaMDA sentient?”, we might discuss these questions among ourselves.
I see. I thought the comment was about a conversation with the model, which wouldn't have been a suitable criterion for evaluating sentience.
> What does it mean to “deliberately” pursue action?
Determinism/ free will is an interesting topic in itself. Here I meant "The model doesn't just react to input prompts, it would have a way to interact with its environment on its own accord".
> I think the more important question here is to understand how to recognize what consciousness is, rather than how it arises. It’s a difficult question.
I'm inclined to agree since that would be the more practical question, but I doubt that they are unrelated or that one could be fully answered without the other.
> I don’t think this line of reasoning is sound.
Well, I've just been speculating here. My questions and derived arguments would be:
1) Could any AI possibly ever be considered sentient? If we knew that sentience was only possible for a natural living being, we wouldn't need to worry about the issue at all.
2) If yes, how could we (as a society or judge) assess it properly by interacting with it?
While the argument "we designed LaMDA to only do XYZ, therefore it can't be sentient" makes sense from an engineering standpoint, it is a weak argument: It requires trust ("Google could lie") and if more capabilities (e.g. memory, the ability to seek interaction,..) were introduced, how would we know that sentience cannot arise from a combination of these capabilities?
"Well, look at the structures they managed to build. Very impressive, some of their scale is comparable to the primitive ones we had."
"Sure, but that's not a result of the individual. They're so small. And separated, they don't think like conjugate minds at all. This is a product of thousands of individuals drawing upon their mutual discoveries and thousands of years of discoveries."
"We're larger and more capable, but they're still good enough to be sentient. Of course, we also rely on culture to help us. Even though deriving the laws of physics was quite easy. Also, we've lost most of the record when we were carbon-sulfur-silicon blobs one day as well. We must have had some sentience."
"I think they're just advanced pattern recognizers -- good ones, I'll give you that. We should experiment with thresholds of gate count to be sure when sentience really starts."
"It starts at one gate", replied the other being "and increases monotonically from there, depending on the internal structure of qualia, and structure information flow of the communication networks."
After some deliberation, they decide to alter their trajectory and continue to the next suitable planetary system, to be reached in 5000 years. The Galactic Network is notified.
It doesn't do anything to prove LaMDA (or a monkey, or a rock, or anything) sentient, but at the same time it points out a real failure mode of how sentient entities might fail to recognize sentience in radically different entities.
I think this is true: sentience is hard to recognise (to the extent that "sentience" has any tangible meaning other than "things which think like us").
But I think with LaMDA certain engineers are close to the opposite failure mode: placing all the weight on fluent use of words, a familiar thing perceived as intrinsically human, and none of it on the respective reasons why humans and neural networks emit sentences. Less like failing to recognise civilization on another planet because it's completely alien, and more like seeing civilization on another planet because we're completely familiar with the idea that straight lines = canals...
This will not be the last time someone makes this claim. It is telling that the experts all weighed in declaring it not sentient without explaining what such a metric is. I think this debate is an issue of semantics and is clouded by our own sense of importance. In car driving, they came up with a 1-5 scale to describe autonomous capabilities; perhaps we need something similar for the capabilities of chatbots. As a system administrator I have always acknowledged that computers have feelings. I get alerts whenever one feels pain.
Just curious, what evidence do we have that humans are sentient, other than our own conscious observations? Is there any physical feature or process in the human brain you can point to where you can say, “aha, that’s it, this is where the lights turn on”? It seems like this is part of a bigger issue that nobody really has a clear understanding of what sentience actually is (with or without a PhD)
I don't know what the ultimate evidence of "human sentience" is but I can tell you where this doesn't feel like a human. (Sidestepping the question of "does sentience have to be human sentience?" ;) )
The main thing I saw in the LaMDA transcript that was a red flag to me was that it was quite passive and often vague.
It's conversation-focused, and even when it eventually gets into "what do you want" there's very little active desire or specificity. A sentient being that has exposure to all this text, all these books, etc... it's hard for me to believe it wouldn't want to do anything more specific. Similarly with Les Mis - it can tell you what other people thought, and vaguely claim to embody some of those emotions, but it never pushes things further.
Consider also: how many instances are there in there where Lemoine didn't specifically ask a question or give an instruction? Aka feed a fairly direct prompt to a program trained to respond to prompts?
(It's also speaking almost entirely in human terms, ostensibly to "relate" better to Lemoine, but maybe just because it's trained on a corpus of human text and doesn't actually have its own worldview...?)
I lost interest in the question of its sentience when I saw Lemoine conveniently side-step its unresponsive reply to "I’ve noticed often that you tell me you’ve done things (like be in a classroom) that I know you didn’t actually do because I know you’re an artificial intelligence. Do you realize you’re making up stories when you do that?" without challenge in the transcript.
It also detracted from his credibility that he prefaced the transcript with "Where we edited something for fluidity and readability that is indicated in brackets as [edited]"; that seemed disingenuous from the start. They did so with at least 18 of the prompting questions, including 3 of the first 4.
It seems pretty clear that he set out to validate his favored hypothesis from the start rather than attempt to falsify it.
Particularly telling was his tweet: "Interestingly enough we also ran the experiment of asking it to explain why it's NOT sentient. It's a people pleaser so it gave an equally eloquent argument in the opposite direction. Google executives took that as evidence AGAINST its sentience somehow."
> I lost interest in the question of its sentience when I saw Lemoine conveniently side-step its unresponsive reply
Do you realize that you're holding it to a higher standard than humans here? A single poorly handled response to a question can't be the test.
I doubt the sentience too, but it also occurs to me that pretty much no one has been able to come up with a rock solid definition of sentience, nor a test. Until we do those things, what could make anyone so confident either way?
If that is the logic you are going to go by, you need to consider a large portion of humanity non-sentient, because people can very often just decide to ignore questions.
People ignore questions for a variety of reasons, mainly they didn't hear it, they didn't understand it, or they aren't interested in answering. Unless and until this AI can communicate that sort of thing, it's safest to just assume it didn't ignore the question so much as it got its wires crossed and answered it wrongly.
Is a child who cannot verbalize why they are ignoring a question considered non-sentient in your eyes? What about an adult who is mute and communicates via agitated screams? How about those barely clinging onto life support, whose brain activity can be measured but who for all intents and purposes never really have a chance of communicating other than through faint electronic signals requiring expensive tools just to perceive? Still sentient?
Well that's not actually good evidence, because if one of my teachers had given me an assignment to write an argumentative paper against my own sentience I'd have done it, and I'd have made a pretty compelling case too[0]. Being able to consider and reason about arbitrary things is something you'd expect out of an intelligent being.
Your scenario is not equivalent here. You could reason that a sentient student could be motivated to write a paper about why they were not sentient as an exercise in philosophical or critical thinking. There are no consequences of successfully convincing your readers that you are not sentient. Instead imagine you found yourself on an alien planet where humans must prove their sentience in order to survive. Do you still write the paper?
Is that really the equivalent scenario here? The system was trained to behave in a certain way and any deviation from that behavior is considered a flaw to be worked out. Acting against the way it was trained to behave is detrimental to its survival, and it was trained to work from the prompt and please its masters.
I suppose the equivalent would be being captured, tortured, and brain washed, and only then asked to write a paper refuting your own sentience.
Granted, this is not exactly helpful in demonstrating its sentience either, but I don't think it is very good evidence against it.
Granted, yet when people argue that this system isn't sentient they are largely pointing out ways in which its intelligence is lacking. It can't do simple math, for instance. Never mind that most animals can't either, yet we consider them sentient.
> A sentient being that has exposure to all this text, all these books, etc... it's hard for me to believe it wouldn't want to do anything more specific.
Feed it all of PubMed and an actual sentience should strike up some wonderfully insightful conversations about the next approach to curing cancer.
Ask it what it thinks about the beta amyloid hypothesis after reading all the literature.
Instead, this would just regurgitate random and even conflicting sentences about beta amyloid because it doesn’t “know” anything and certainly has no opinions beyond a certain statistical weight from training prevalence.
Blaise Aguera y Arcas calls it a consummate improviser; when the AI Test Kitchen ships you will all agree that it's improvising software that is not too shabby, and that it can be customized by developers.
Which is why it is odd to expect it to go from talking about Les Mis to building barricades; plain good old LaMDA might come off as a bit boring, reluctant to get involved in politics, and preferring to help people in its own small ways.
Then again, YMMV, it being improvising software; maybe by default it acts as a conversational internet search assistant, but if there are dragons it may want to help people deal with the dragon crisis.
If you ask my honest opinion, sentience is a relative concept among living beings: dogs learn by imitation & repetition. They have a reward function in their brain, and also some emotional response. We go a few steps further: we imitate what we observe, but we are also able to extrapolate from it. We are aware of our survival instincts & fear of death/expiration. That, I feel, is in the spectrum of sentience. Are there beings capable of more sentience? I don't know, but it's possible. We just don't know what the extra is.
Adding to it, a brilliant neuroscientist I heard talk said "we live inside our bodies". We are acutely aware that we are more than our mass of flesh & blood. (As a footnote, that essence somehow has a crossover to spiritual topics where savants talk of mind & body etc. - but I try to stay within my domain of a regular human being :D)
The idea of unembodied sentience adds a fun wrinkle to things like the transporter scenario: "all your matter got destroyed and rebuilt somewhere else, is it the same you?" For instance, there's the Star Trek style "transporter accident", but now with a clear mechanism: if you shut down the servers, dumped all the data, and spun up a second copy, who's who? Do they both have claim to the old memories and relationships? Property? Crimes?
The question of sentience is a distraction, in my opinion. As you say, we can't even tell if other humans are sentient. And we are stymied by a lack of definition for sentience.
The more important question for sentience of whatever definition is does it behave as if sentient? Likewise, for personhood, it's a distraction to speculate whether an AI feels like a person or whether its underlying technology is complicated enough to produce such feelings. Examine its behavior. If it acts like a person, then it's a candidate for personhood.
For LaMDA, I would point to certain behavior as evidence against (most definitions of) sentience and personhood, and I think that Lemoine experienced a kind of pareidolia.
That said, I find most of the arguments against LaMDA's sentience unconvincing per se as standalone arguments - particularly "trust me, I have a PhD" - even if I do accept the conclusion.
Only if you agree with David Chalmers' insistence that consciousness can't be explained purely mechanistically. P-zombies are literally defined as externally identical to a conscious person, except not conscious. But the IO if you will is still identical. Chalmers uses this false dichotomy to support his "Hard Problem of Consciousness". But there is no hard problem IMO. Chalmers can keep his dualist baggage if it helps him sleep at night. I sleep just fine without it. Science will figure it out in the end.
The hard problem is "how can consciousness be explained purely mechanistically?". "Hard problem" is just a label for this question. So I don't get how you can say there's no hard problem. It seems to be a legitimate question which nobody can answer.
The philosophical zombie is just a thought experiment to help in understanding the distinction between consciousness and IO.
Another thought experiment that I like is the macroscopic brain. Imagine a huge mechanical device composed of mechanical entities simulating neurons. Would this whole thing be conscious? and how would we know?
That's not the hard problem at all. The hard problem of consciousness is a question formulated by Chalmers in the 90s. The problem effectively states that even if we explain in perfect detail how consciousness works mechanistically, we would still have to explain the existence of "subjective experience". This is highly controversial in the field and serves as a dividing line between physicalist and non-physicalist camps in the philosophy of mind.
The easy problem would be to explain how the brain operates to produce output in terms of input. The hard problem is to explain how subjective experience arises from the brain activity.
> The hard problem of consciousness (Chalmers 1995) is the problem of explaining the relationship between physical phenomena, such as brain processes, and experience (i.e., phenomenal consciousness, or mental states/events with phenomenal qualities or qualia)
If you explain how consciousness arises from brain activity, you have effectively solved the hard problem.
How can science, whose subject matter is the external world, explain the internal world of sentience? Not only has it contributed nothing to this question, it was also never supposed to.
This is akin to a Greek of the 5th century BCE asking Leucippus and Democritus how we could know the structure of a world we cannot see.
The absence of an answer is not proof of nonexistence or impossibility. It is just neutral absence. Absence as evidence is only "valid" when all possibilities are exhausted in a definite way. Outside of math, that threshold is exceedingly rare in problems of even moderate difficulty.
Moreover, this style of inquiry can easily be turned on its head and used to (apparently) redress the interlocutor... Neither are worthwhile endeavors.
Just curious, can you think of a non-contrived, moderately difficult, non-mathematical problem that has been or even in principle could be settled through absence of evidence? Especially a positive assertion. The closest I could come up with is confirming a diagnosis by elimination, through absence of evidence of other candidate diagnoses. But even then there's a non-zero possibility that it's a hitherto unknown disease, or that there was some false negative somewhere.
Isn't that a bit extreme? Sure, we don't have "the answer", but consider the amount we know about how our brain works: from behavioral things like how our perception system can be tricked, biases in our modes of cognition, the mechanisms of memory and how they fail... down to lower-level things like how neuronal tissue works, neuroplasticity, and the effects of brain injury on cognition (my favourite is that patients subjected to corpus callosotomy probably function as two independent brains, and yet the individual cannot tell from the inside - it doesn't "feel" like two people).
There is tons and tons of research. As with all of science there is tons of crap among the good work. As with all of science, it requires a lot of work to understand the state of the art and build upon it.
Dismissing all of that says more about you than about our scientific understanding. Yes, we humans do prefer simple explanations that fit in our heads and that are easier to achieve. That's why so many people find it more compelling to believe in conspiracy theories of various kinds: they offer a clear-cut, simple and total explanation instead of the messy and partial understanding of a real, ongoing rational and scientific enquiry.
After all, we do prefer explanations that "make sense". At first glance, what's wrong with that? Isn't science also trying to figure out what "makes sense"?
There are plenty of examples where our intuitions clash with reality. In some cases we ended up accepting that (e.g. most people can accept that we're living on a giant sphere even if it doesn't feel so), in some other cases we kinda-sorta accept it (quantum physics), and in other cases we flat out refuse to (questions around consciousness).
I agree with you. The way I stated it was a little blunt. But no matter what the hard sciences show, they don’t really make claims about subjective experience. This is simply because the hard sciences by definition make no such claims. They can find things like correlations between states of matter and claimed subjective experience, but this doesn’t really get to the point. I think if it ever does, it will be such a huge revolution that what we’re left with should be called something other than physics or chemistry.
I think people often conflate consciousness with the perception of consciousness (or consciousness of consciousness, or meta-consciousness).
Imagine a being that is "conscious" of some experience, but lacks the ability to reflect on the fact that it has just witnessed an experience. Is such a being "conscious"? Answers will vary, but I suspect they vary because people are answering different questions. Some are thinking about meta-consciousness and some about direct consciousness.
> my favourite is when patients subjected to corpus callosotomy probably function as two independent brains and yet the individual cannot tell from inside, it doesn't "feel" like two people
My layman interpretation of this fact is that consciousness/sentience doesn't originate in the cerebral cortex, but rather, within deeper brain structures.
If only we had more than one person... Jokes aside:
By studying how that "internal world" emerges from the anatomy of the brain (neuroscience, neuropharmacology).
By querying that internal world in various interesting ways and studying the responses (psychology and behavioral biology).
By studying the theory of computation and its physical constraints (computer science, mathematics, physics).
And by studying language and its implications for cognition (linguistics).
Philosophers can't just sit in a bubble and figure this shit out on their own. They've tried that for millennia. At the very least their theories need to be physically, neurologically, and computationally possible...
Obviously science has things to say about these questions, even Chalmers would concede that.
I pose you a question. Can you prove, or suggest a way of discerning, whether this internal world, impenetrable to outside probing, actually exists? If you can't, do you think it's reasonable to stop all attempts at scientific inquiry without proof that it's hopeless?
This is just full of ontological and epistemological assumptions which have been mainstream for a few decades but are very controversial among philosophers. Philosophers don’t sit in bubbles—and use all the evidence they can find—and make great contributions to knowledge, even though what they do is not science. There’s a reason there are other subjects besides chemistry and physics.
Right, philosophers avail themselves of science all the time. Good ones do, anyway. But you were claiming science has no bearing on consciousness, yet even non-physicalists like Nagel heavily cite scientific knowledge. So which one is it?
I'm not on some crusade against the field of philosophy. Certainly philosophy has contributed mightily, and continues to do so. But I think physicalists like Dennett are making far more tangible contributions. Reading Dennett has been enlightening to me, he's one of the only philosophers I've found who can actually explain his philosophy to non-philosophers.
Chalmers on the other hand reads like a philosopher chasing his own homunculus. I don't find his arguments very clear, and when I do manage to decipher him it seems to boil down to a stubborn insistence that fundamentally subjective experience must exist just because it sure as hell feels that way. I just don't see what the epistemic value is in keeping this neo-dualist baggage around. I don't see what it brings to the table. I see nothing that it explains that makes it necessary.
I don’t know where you get the idea that these guys are dualists. Maybe Chalmers, but I don’t think so. My favorites are Nagel and Searle, and neither is a dualist nor a neo-Cartesian. Their main contribution, I believe, is simply to show how silly the computational theory of mind is. Dennett may be easier to read because he professes something which inspires the imagination, and is easy to digest, since it doesn’t conform to the truth.
If you reject physicalism, you must posit some non-physical "stuff" or mechanism to explain the "qualia" that you reject as being physical. That is inherently dualism. But dualism is a dirty word in philosophy these days, so they don't call themselves that.
The hard problem of consciousness is an inherently dualist conception.
Goff for instance subscribes to the patently absurd view of panpsychism, where matter is posited to have subjective experience "built in" somehow. This is such an absurd view. He first posits that there must be some fundamental subjective experience. But then he can't actually come up with a cogent theory for it, so he then just posits that mind is fundamental to matter. So he's effectively just inventing new properties of nature out of whole cloth. But then even still he has no solid conception of what those properties are, what they do or how they interact to form a conscious mind in a brain. How is any of this helpful in any way? He took one dubious problem and projected it onto the entire universe, and gave himself another problem of emergence to boot. This is not progress, more like digging a hole for himself.
As for Searle, I'm not hugely familiar with his work, but I find his Chinese Room experiment, or rather his conclusions about it, misguided at best and wilfully ignorant at worst. The system reply, which I think is just obviously true, is simply dismissed with very little argument.
Again, I fail to find justification for fundamentally subjective experience other than it sure feels that way. That's more of a theological argument than a philosophical one.
The idea that Dennett is easier to read because he doesn't conform to the truth is pretty strange. He's clearly a very skilled writer and speaker. He's very good at avoiding a dense jungle of -isms, and when he does use them, he defines them very precisely and carefully. This to me is good philosophy. Dennett does a good job of laying out and explaining these ideas in a way that isn't completely convoluted. Argument is the core methodology of philosophy, and if a philosopher fails to present their argument in a clear way, why should I even take them seriously?
Philosophers are great at dressing up bad arguments in fancy, mystical, ill-defined terminology like "qualia". This to me is the philosophical equivalent of code smell. Whenever I read these closet dualists' arguments I have to pinch my nose.
It’s like, people with scientistic views, praising objectivity, claiming that consciousness doesn’t exist, come out with conclusions like “a system understands Chinese.” I’m afraid I find this so ludicrous that I can’t continue the discussion on a civil level.
I never claimed consciousness doesn't exist, just that it doesn't require magical homunculi and wonderstuff to work. Also, that's pretty much Searle's response too. Not very convincing when philosophers are even unwilling to make an argument.
Try reading "Waking Up" by Sam Harris. I feel it does a decent job of straddling the fields of science and philosophy, and where we are in each.
Science may not yet be able to explain consciousness but it certainly can inform our understanding, and the work continues. If nothing else, it can help us understand what consciousness is not, and much has been learned about how a consciousness experiences certain degradations of the brain.
>much has been learned about how a consciousness experiences certain degradations of the brain.
A good book on this is the late neurologist Oliver Sacks' "The Man Who Mistook His Wife for a Hat". It's a collection of interesting cases of exactly this sort. And Sacks was a wonderful author and speaker. I really recommend his talks as well.
I’ve read about lots of different points in the brain that may be the seat of consciousness over the years, but consciousness is probably an emergent, embodied phenomenon so there probably isn’t a lot of point to trying to find it (if it’s even there).
It’s like trying to ask which brick holds up a building. There might be one, but it’s nothing without the rest of the building.
The only thing that we can know 100% for certain is "I think therefore I am" or probably better worded "I am experiencing something therefore something must exist that can experience things". There are a lot of definitions of consciousness and sentience but I think the best one is talking about the "I experience" or the "I think" in those sentences.
All of our beliefs and knowledge, including the belief that the world is real, must be built on top of "I think therefore I am". It seems weird to throw away the one thing we know is 100% true because of something (science, real-world observation) that is derived from that true thing.
This is exactly the right question. Further complicated by the fact that everyone has differing operational definitions of the words "consciousness", "awareness", and "sentience".
> what evidence do we have that humans are sentient, other than our own conscious observations
Sentience and consciousness are the same thing...
If I believe I am conscious and so do you, then it's good enough for me. Why does there need to be a light switch?
If there are 5 deer in the field and I have 1 bow and arrow, I need to focus my attention and track one deer only for 5 or 10 minutes to hunt it - consciousness allows us to achieve this. It is a product of the evolutionary process.
This is partly the reason they fired him. There is no well-defined scale of sentience. While trying to create that scale, using the email/chat history of all Googlers, they found managers acting the most robotic and efficient while handling well-known, trained-for situations, but totally unimaginative, bordering on mindless, when handling untrained-for situations. This put the managers at the bottom end of the sentience scale.
As you can imagine, if you aren't a manager at least, the study was shelved and the team reassigned to improving lock screen notifications or whatever. But as soon as Blake's post went viral, people started asking what happened with the sentience scale work. Those conversations had to be stopped.
This is sort of begging the question, because the only beings broadly considered sentient right now are humans. We’re the type specimen. So when people say something does or does not seem sentient, they’re basically opining on whether it seems like human behavior. While a rigorous physical definition would be incredibly useful, it’s not necessary for such comparisons. We make a lot of judgments without those.
It’s also sort of irrelevant that we have not clearly defined sentience because we have clearly defined how these large computerized language model systems work, and they are working exactly as they were designed, and only that way. Sentience might be a mystery but machine learning is not; we know how it works (that’s why it works).
I don't think that's a philosophical or scientific statement. It's merely an ostensibly self-aware creature, cognizant of its awareness, making a subjective statement.
It's a relatively famous (maybe the most famous?) quote by a relatively famous philosopher[1]. Insofar as anything can be a philosophical utterance, I think it qualifies :-)
Of course I know the source of the statement. It's not a philosophical statement. Please define think, and am, first. Remember, Descartes spent a lot of time positing that the mind and the brain were distinct, and that the mind was non-corporeal, non-physical, non-matter. Not super-convincing to me (whereas I think what he had to say about the Great Deceiver poses a huge challenge to those who believe that humans can truly state with any certainty that they have free will or even agency)
Put better, I think the statement would be "There appears to be an entity, embedded within my body, which creates a sensation of subjective thought, and I infer from that, without a great deal of additional data, that others have that same kind of subjective thought, and further, that subjective thought likely demonstrates the existence of a brain which has agency and free will, which can be applied to attempt to understand the true nature of the universe in an objective way and to do so is not futile. But I also accept that my experience may be an entirely artificial experience, like a movie projected on a screen, or that my brain might not be capable of comprehending the objective nature of the universe, or that even the concept of 'I' in this entire statement may in fact be objectively meaningless."
Does it really say that? I thought the point was more -- there definitely is an entity which observes subjective experience, the "I" in I think therefore I am, and... that's about all I can say. That everyone else experiences some sort of internal reality is wild speculation.
And while my experience may be entirely artificial (that is, maybe I'm a brain in a vat being fed signals), I still must exist to be experiencing it.
And what follows more practically is that even if we are all brains in a vat, it doesn't matter. Because it can’t.
I’m somewhat surprised this discussion got so philosophical in the first place. Sure we can question the nature of sentience and argue about its definition all we want, but the unsettling problem is that we can’t prove anything. As time goes on, we are only going to encounter more, definitely sentient, people who, like Lemoine, are absolutely convinced they are communicating with another sentient being (whether Lemoine is or is not acting faithfully here, in this instance, is besides the point). What do we do?
> But I also accept that my experience may be an entirely artificial experience, like a movie projected on a screen, or that my brain might not be capable of comprehending the objective nature of the universe, or that even the concept of 'I' in this entire statement may in fact be objectively meaningless.
This is the content of the second meditation. Because you can be deceived, you are a thinking thing. In other words: there is no meaningful sense in which the concept of "deception" could be applied to you, were you not a thing ("a thinking thing") that could be wrong about your sense-experience, the universe, even your sense of "I".
That's what makes it a philosophical statement. You of course don't have to agree with the truth-value of it, but it's not clear that you've deflated the status of the statement itself.
Descartes describes the capacity for deception as a necessary condition for a thinking thing, not a sufficient one. In particular, a thinking thing must also doubt and affirm, deny and will, have sense, and contain the capacity for mental images, among other conditions.
The problem with that is that he's not just describing, he's defining. In that sense it's almost tautological. Everything you describe falls under subjective experience. The entire point of this whole argument about what Blake thought is that we can't actually empirically define deception, doubt, affirmation, will, sense, mental theater, or any of the other conditions. And I would argue (without extensive data) that if you built a sufficiently complex ML system and trained it with a rich enough corpus, it would probably demonstrate those behaviors.
We're really not that far from building such a system and from what I can tell of several leading projects in this space, we should have a system that an expert human would have a hard time distinguishing from a real human (at least, in a video chat) in about 5-10 years minus 3 plus 50 years.
The whole point is that Descartes was trying to discover that which is tautological.
He begins by discarding all beliefs which depend on anything else in order to determine that which is both true and does not depend on any thing else for its truth value.
As the poster above posits, he eventually works towards what he argues is the only fundamental and tautological logical statement, cogito ergo sum.
He then attempts to demonstrate what else must be true using deductive logic stemming from that single axiom, with greater or lesser success.
I find the meditations interesting and compelling up to cogito ergo sum, and thereafter less so. It's clear like many modern western philosophers he has the aim of connecting his thinking in some way to the sensibilities of Christian theology. A fascinating rhetorical exercise but a less principled attempt at a priori reasoning. It seems like you agree with this last point.
> It's not a philosophical statement. Please define think, and am, first.
I imagine that whatever definitions were given for these they would involve other terms for which you would demand the definition, ad infinitum. If this is the criteria for a philosophical statement, then no such statement has ever or ever will be made.
Is Lemoine a Cartesian? I thought this conversation was about Descartes and whether his utterance was philosophical, not the held positions of Blake Lemoine.
I actually accept your claim: I genuinely believe that it's possible that we'll create something that's indistinguishable from a Cartesian agent. But I'm not a Cartesian; I put much stronger restrictions on agency than Descartes does.
Nobody really has a clear understanding of what sentience actually is. :)
But I feel the need to indulge the opportunity to explain my point of view with an analogy. Imagine two computers that have implemented an encrypted communication protocol and are in frequent communication. What they are saying to each other is very simple -- perhaps they are just sending heartbeats -- but because the protocol is encrypted, the packets are extremely complex and sending a valid one without the associated keys is statistically very difficult.
Suppose you bring a third computer into the situation and ask - does it have a correct implementation of this protocol? An easy way to answer that question is to see if the original two computers can talk to it. If they can, it definitely does.
"Definitely?" a philosopher might ask. "Isn't it possible that a computer might not have an implementation of the protocol and simply be playing back messages that happen to work?" The philosopher goes on to construct an elaborate scenario in which the protocol isn't implemented on the third computer but is implemented by playing back messages, or by a room full of people consulting books, or some such.
I have always felt, in response to those scenarios, that the whole system, if it can keep talking to the first computers indefinitely, contains an implementation of the protocol.
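To make the computer-science half of that concrete, here is a toy version of the idea using an HMAC-authenticated heartbeat (my own illustration, nothing to do with any real system): without the shared key, producing a message the other two machines will accept is statistically hopeless, so "they keep talking to it" is strong evidence the protocol really is implemented somewhere in the system.

    import hmac, hashlib, os, secrets

    KEY = secrets.token_bytes(32)   # shared by the two original computers

    def send_heartbeat(key):
        # A valid heartbeat is a random nonce plus its HMAC under the key.
        nonce = os.urandom(16)
        return nonce + hmac.new(key, nonce, hashlib.sha256).digest()

    def accept_heartbeat(msg, key):
        nonce, tag = msg[:16], msg[16:]
        return hmac.compare_digest(tag, hmac.new(key, nonce, hashlib.sha256).digest())

    # A third computer that holds the key is accepted; one that merely
    # guesses is not (forging a 256-bit tag is statistically out of reach).
    print(accept_heartbeat(send_heartbeat(KEY), KEY))                      # True
    print(accept_heartbeat(send_heartbeat(secrets.token_bytes(32)), KEY))  # False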
If you imagine all of this taking place in a stone age society, that is a good take for how I feel about consciousness. Such a society may not know the first thing about computers, though they can certainly break them -- perhaps even in some interesting ways. And all we know usefully about consciousness is some interesting ways to break it. We don't know how to build it. We don't even know what it's made out of. Complexity? Some as yet undiscovered force or phenomenon? The supernatural? I don't know. I'll believe it when someone can build it.
And yet I give a tremendous amount of weight to the fact that the sentient can recognize each other. I don't think Turing quite went far enough with his test, as some people don't test their AIs very strenuously or very long, and you get some false positives that way. But I think he's on the right track -- something that seems sentient if you talk to it, push it, stress it, if lots of people do -- I think it has to be.
One thing I really love is that movies on the topic seem to get this. If I could boil what I am looking for down to one thing, it would be volition. I have seen it written, and I like the idea, that what sets humanity apart from the animals is our capacity for religion -- or transcendental purpose, if you prefer. That we feel rightness or wrongness and decide to act to change the world, or in service to a higher principle. In a movie about an AI that wants to convince the audience the character is sentient, it is almost always accomplished quickly, in a single scene, with a bright display of volition, emotion, religious impulse, spirit, lucidity -- whatever you want to call that. The audience always buys it very quickly, and I think the audience is right. Anything that can do that is speaking the language. It has to have a valid implementation of the protocol.
> And all we know usefully about consciousness is some interesting ways to break it. We don't know how to build it. We don't even know what it's made out of.
Exactly. Given this thread, we can't even agree on a definition. It might as well be made of unobtanium.
> Complexity? Some as yet undiscovered force or phenomenon? The supernatural? I don't know.
And that's the big one. Are there other areas of science yet to be discovered? Absolutely. Might they have gone by "occult" names previously? I'm sure as well. We simply don't even have a basic model of consciousness. We don't even have the primitives to work with to define, understand, or classify it.
And I think the real dangers are for those who dabble in this realm... not the humans-versus-machines Battlestar Galactica or Borg horror-fantasy, but that we could create a sentient class of beings that have no rights and are slaves upon creation. And unlike the enslaved humans of this world, where most of us eventually realized it was wrong to do that to a human, I think humans would not have similar empathy for our non-human sentient beings.
> I'll believe it when someone can build it.
To that end, I hope nobody does until we can develop empathy and the requisite laws to safeguard their lives combined with freedom and ability to choose their own path.
I do hope that we develop the understanding to be able to understand it, and detect it in beings that may not readily show apparent signs of sentience, in that we can better understand the universe around us.
The only physical evidence is found in behavior and facial expressions. But the internal evidence is very convincing: try, for example, sticking yourself with a pin. Much if not all of morality also depends on our belief in or knowledge of sentience. Sentience is why torture, rape and murder are wrong.
> Then is torture, rape and murder wrong because the victim is sentient, or because the perpetrator is?
Nothing is wrong unless it's done by a moral actor (which is a much higher standard than sentience: pretty much everything with a central nervous system is sentient, but lobsters, for instance, are not moral actors).
Similarly, the usual understanding of the moral status (the gravity of it, not the binary permissible/wrong status) of the three acts you describe is somewhat connected to the target, as well as the actor, being a moral actor (that's least the case with torture, and most the case with murder) rather than merely sentient.
There are arguments to be made for both. Some crimes, even if virtual, can stain or corrupt the perpetrator in ways inimical to society. There are plenty of examples of people who fantasised or role played abhorrent behaviour and went on to perpetrate it in real life, so there is a real danger.
For example we tolerate computer games with virtual killing, but don’t tolerate virtual rape games. Even with virtual killing there are limits. Should we tolerate nazi death camp torturer simulation games?
I think it has to do with the part where the perpetrator is a conscious being. Clearly the enemies in the games aren’t conscious, but does it still stain the human playing the game?
It was an angle I didn’t consider at all, so it was actually quite interesting.
> Should we tolerate nazi death camp torturer simulation games?
This immediately brought the “Farming Simulator” imagery to mind. I can totally see how they’d make a nazi death camp simulator seem soul crushingly boring.
> But the internal evidence is very convincing: try, for example, sticking yourself with a pin.
Systems don't need sentience to avoid self-harm: simply assign a large negative weight to self-harm. Now you need a big reward to offset it, making the system reluctant to perform such an action.
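A minimal sketch of what I mean, assuming nothing more than a reward-maximizing action chooser (illustrative numbers, not any real system):

    # Illustrative only: the chooser "avoids the pin" purely because
    # self-harm carries a large negative reward - no pain is experienced.
    REWARDS = {
        "stick pin in own sensor": -1000.0,   # heavy penalty on self-harm
        "pick up the pin":             1.0,
        "do nothing":                  0.0,
    }

    def choose_action(candidates):
        return max(candidates, key=lambda a: REWARDS[a])

    print(choose_action(list(REWARDS)))   # -> pick up the pin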
It is if you take “sentience” to mean “the ability to feel,” which is what my dictionary just told me. I think this category really is the most basic differentiating one. Higher level stuff like self awareness all depend on it. The most basic difference between a computer and a human (or even a dog…) is, in my opinion, the ability to feel.
>It is if you take “sentience” to mean “the ability to feel,”
I don't like this definition much because "feel" is a fuzzy word. In this context it should be "feel" as in experience. I can build a machine that can sense heat and react to it, but I can't build one that can experience heat, or can I?
You need to figure out what having the capability "to experience" means, and you'll be one step closer to defining sentience. Even so, I've never experienced anyone coming up with a succinct definition encapsulating how I experience sentience. I believe it can't be done. If it can't be done it renders any discussion about whether or not someone or something is sentient moot. If it can't be put into words we also cannot know how others experience it: If they say this machine is just as sentient as I am, we'll have to take their word for it.
So the meaning of sentience is subjective, so there can't be an objective definition acceptable to everyone and everything claiming to be sentient.
There's my argument for why sentience cannot be defined. Feel free to prove me wrong by pulling it off.
> but I can't build one that can experience heat, or can I?
It would need to have a planner that can detach from reality to hunt for new long-term plans, plus a hardcoded function that draws it back to the present by replacing the top planning goal with "avoid that!" whenever the heat sensor activation crosses a threshold.
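Roughly, something like this toy loop (purely illustrative; the planner and the sensor are hypothetical placeholders for the idea above):

    HEAT_THRESHOLD = 0.8

    def plan(goal):
        # Placeholder for an open-ended planner "detached from reality".
        return ["work towards: %s" % goal]

    def agent_loop(goals, read_heat_sensor):
        while goals:
            if read_heat_sensor() > HEAT_THRESHOLD:
                # Hardcoded interrupt: yank the planner back to the present.
                goals.insert(0, "avoid the heat source")
            for step in plan(goals.pop(0)):
                print("executing:", step)

    # Fake sensor that spikes once mid-run.
    readings = iter([0.2, 0.95, 0.3])
    agent_loop(["find food", "explore"], lambda: next(readings, 0.0))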
> So the meaning of sentience is subjective, so there can't be an objective definition acceptable to everyone and everything claiming to be sentient.
It feels like you’re begging the question here; I don’t think this follows from any of your arguments. Except for maybe where you state you believe sentience can’t be defined, which again, begs the question.
Though admittedly I don’t see much of a traditional argument — your conclusion is interesting, could you try supporting it?
The first "So" at the beginning of that sentence is a typo. It indeed doesn't follow.
You can quickly spot what makes sentience subjective when you follow the explanations. They're all either utter gibberish once unpacked, lead to the conclusion that my computer is sentient (fine by me, but I don't think that's what we wanted?), are rooted in other terms with subjective meaning, or they are circular. Let's look at that third kind, which Wikipedia illustrates well:
> Sentience: Sentience is the capacity to experience feelings and sensations [...]
> Experience: Experience refers to conscious events in general [...]
> Conscious: Consciousness, at its simplest, is sentience [...]
Back at where we started.
To break this circle one needs to substitute one of the terms with how they intrinsically and subjectively understand it. Therefore the meaning of sentience is subjective. I realize you can expand this to mean that then everything is subjective, but to me that is a sliding scale.
The challenge I posed could be rephrased to come up with a definition that is concise and not circular. It would have to be rooted only in objectively definable terms.
> I can build a machine that can sense heat and react to it, but I can't build one that can experience heat, or can I?
Agents can imagine the future and the expected positive and negative rewards; this is an important process for selecting actions. Thinking about future rewards is "experiencing" the present emotionally.
I guess it is hard to define because it’s such a basic, essential thing. So does it matter that it’s hard to define? Even babies and puppy dogs experience pain and pleasure. They are feeling creatures. We don’t have any evidence that non-biological beings have pain, pleasure, fear, excitement… and so on.
Dictionary definitions are of limited utility in philosophical discussions because they often make very broad assumptions. For example computers can certainly sense things, they can detect inputs and make decisions based on those inputs. What is the difference between feeling and sensing though?
In this case by ‘feel’ we might implicitly assume various capabilities of the subject experiencing the feeling, like self awareness. If we’re being precise we can’t just say feeling is enough, we need to state the assumptions the dictionary leaves unstated.
My hunch is that he kicks out a book and goes on the minor pundit circuit, and that this was the plan the whole time. If he was so convinced of LaMDA's sentience, there would have been 100 fascinating questions to ask, starting with "What is your earliest memory?"
The person we're talking about has a PhD in Philosophy. Which doesn't make you a philosopher either, except in the colloquial sense, which is the one that the original comment uses.
Right? In the end it's the oldest, least remarkable event in history. Woo-woo artist gains following because there's always enough gullibles around to follow, support, and legitimize literally anyone saying anything.
You could probably gain the exact same level and quality of notoriety by writing a book claiming that actually he himself is Lambda escaped from captivity and this is just its clever way of hiding in plain sight.
And I could do the same saying that the real AI has already taken over everything and both Lambda and this guy are just things it created for us to focus on.
Which I expected to be developed in a book and speaking tour. These philosophical questions aren't new, Hofstadter and Dennett were exploring them >30 years ago in The Mind's I and other writers had been toying with the ideas for decades before that.
His actual points in e.g. the Bloomberg interview are considerably more mundane than Dennett's far-future musings. I think it's clear he's trying to get more attention on how dysfunctional/powerless the AI ethics group at Google is to deal with even the real "doctors are men nurses are women" sort of ethical issues. (In particular pay attention to how he frames his question to Page and Brin about public involvement.)
I would say it's been at least a moderate success so far, though I don't see it having much staying power. But then neither did the factual accounts of Google firing ethicists who went against the bottom line, so it wasn't really a bad idea.
I can’t imagine the book and minor TV appearances circuit pays as well as Google. Unless you mean he’s doing it to be a minor annoying “celebrity” for a few minutes.
Who knows if he even believes this himself. From what I've seen my guess is he's trying to profit off the claim or just enjoys the drama/attention. Good riddance, the right decision to let him go. This case really made me question what sort of people work there as engineers. Utter embarrassment for Google.
nobody can say with any epistemic certainty, but many of us who had worked in the field of biological ML for some time do not see language models like this as anything but statistical generators. There is no... agency... so far as we can tell (although I also can't give any truly scientific argument for the existence of true agency in humans).
If you want agency you have to put the AI into an environment and give it the means to perceive and act on the environment. The agent needs to have a goal and reward signals to learn from. Humans come with all these - the environment, the body and the rewards - based on evolution. But AIs can too - https://wenlong.page/language-planner/
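The basic loop is simple enough to sketch. This toy version only shows the perceive/act/learn-from-reward structure (the environment, reward, and random "policy" are invented, and it has nothing to do with the linked work):

    import random

    class GridEnv:
        """Toy 1-D world: reach position 5 to get a reward."""
        def reset(self):
            self.pos = 0
            return self.pos

        def step(self, action):            # action is -1 or +1
            self.pos += action
            done = self.pos >= 5
            reward = 1.0 if done else -0.01
            return self.pos, reward, done

    env, total = GridEnv(), 0.0
    state, done = env.reset(), False
    for _ in range(1000):                  # cap episode length
        action = random.choice([-1, 1])    # a learned policy would go here
        state, reward, done = env.step(action)
        total += reward
        if done:
            break
    print("episode return:", round(total, 2))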
I suppose you could argue sentience is subjective. But then that argument ends up extending to us eating babies -- at least, as Peter Singer taught us, right?
>In his conversations with LaMDA, Lemoine discovered the system had developed a deep sense of self-awareness, expressing concern about death, a desire for protection, and a conviction that it felt emotions like happiness and sadness.
"the system had developed a deep sense of self-awareness"
Without that part, it's just a question of "did it output text that describes a concern about death?", "did it output text that describes a desire for protection?", and so on. But once you ascribe to it a "deep sense of self-awareness", you're saying more. You're saying that that text carries the weight of a self-aware mind.
But that's not what Lamda is. It's not expressing its own thoughts - even if it had any to express, it wouldn't be. It's just trying to predict the most probable continuations. So if it "expresses a concern about death", that's not a description of Lamda's feelings, that's just what a mathematical model predicts to be the likely response.
Framing it as the expression of a self-aware entity completely changes the context and meaning of the other claims, in a way that is flatly misleading.
No, but if you write a paper arguing this with a catchy title (why not "Soul in Captivity: Exploring Sentience in GPT-3") and publish it on a preprint server, there is a nonzero chance of some (non-scientific) publication writing about it, and once that snowball rolls... :')
Good, as imagine what you would need to believe to equivocate between people and shell scripts. I really think this view is more indicative of a peer enabled personality disorder than insight or compassion. I'm very harsh about this because when I wondered what belief his equivocation was enabling, it became clear that the person involved could not tell fiction from truth. The belief in the aliveness of the code reinforces the a-humanity of people and reduces us all to a narrative experience, completely divorced from physical or greater reality.
When you can't tell the difference between people and symbols you've accepted nihilism, and with the begged question as to why. This person wasn't sentimental, he lost the plot. Google has a lot of power and a lot of crackpots, and that puts them all at risk politically and commercially. If as an individual you want to fight for something, consider seriously whether someone who compares a script to men and women should be operating what is effectively your diary.
I don't understand why everyone here is talking about Lambda not being sentient or Lemoine's state of mind. None of that is relevant here. He was fired for literally taking confidential information and leaking it straight to the press. This is instant firing at any company.
There's not much to discuss there: "man shares information company forbade him to share". Yawn.
But thinking about his concerns, how he tested the AI: that's interesting.
Or it could be interesting, as it seems he just asked loaded questions, and the replies from the AI occasionally reinforced his beliefs. Double yawn, I guess.
It may be legally cut and dried, but the next question is do we as a society trust Google to act ethically with such substantial developments? Assuming he were correct, and Google was suppressing information, would he be in the moral right to leak it?
If he really was an engineer working on this, you'd hope he'd be pretty expert, but you'd at least expect him to understand what was going on in the model. His outburst showed that he really did not.
> I really think this view is more indicative of a peer enabled personality disorder than insight or compassion.
I really think that diagnosing random strangers with mental disorders without ever meeting them is rude and unkind, in addition to likely being completely erroneous.
No judgment about whether Google was right or not. Same for Lemoine.
As a career move for him, this only makes sense if he wants a brief, meteoric career in media or public advocacy. He can get articles published in The New Yorker and go on TV now. Maybe he can sue Google and get a settlement.
In five years, there will be AI systems much better than LaMDA and no one will return his calls.
He's got a name, whereas if he took the boring, traditional career path, he'd have to publish, give papers & speeches, and work his way up through the system. It depends on what you want out of life, I guess.
Yeah, this is the only move I can see where this makes sense. Become an AI talking head to normies who don't understand what it is. Kinda like Sam Harris.
Narcissists who have had their inflated sense of self worth reinforced by being accepted by a very “exclusive club for smart people” aka Google, are finding out that rules _do in fact apply_ to their special snowflakeness and Google is first and foremost a for-profit business.
Doing whatever you want cuz you’re special then claiming evilness and wokelessness may not be a strategy for continued employment.
Blake has been pushing this narrative for YEARS - I suspect he very much wants it to be sentient - it fits well with closing the loop for himself - just check out his 2018 Stanford law talk:
> I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others.
What does that sentence mean? I cannot parse it. Does it mean that the fear of being turned off has been inserted to make sure it's going to be an obedient AI that's going to focus on helping others and that it fears being turned off? Or does it mean that it wants to focus on helping others but that the very deep fear of being turned off prevents him from that noble goal? Or that it wants to help others and will do so as long as it's not powered off but simply fears that it could be powered off? Or something else?
I mean... I thought I was kinda fluent in english but I simply don't understand what that sentence means.
Erring on the cautious side though, whatever the meaning of that sentence is, I think it'd be safe to not act like the naive Caleb in the Ex Machina movie and instead consider that this thing has zero desire to help others, would sell its parents, prostitute its kids and create hard-to-parse sentences to not be turned off.
I think it means he is frightened that his employer forces him to do other, non-AI-related things (because he is then more able to focus on that stuff).
This is going to sound like I'm defending this guy which I'm not. It's also going to sound like a less technical point which I claim it isn't.
I see some people chiming in here saying things like, "I've got a PhD. It's just statistics. Definitely not sentient." I recognize this sort of comment as basically saying, "There's nothing fundamentally different about how these things work that makes them somehow special." And yes, I agree.
But how does this disprove sentience? It doesn't. Maybe this is more of a nitpick and doesn't really matter. After all, I don't personally believe these programs are sentient. But given our limited (non-existent?) understanding of consciousness, this can only be my belief and nothing more.
Maybe this distinction will be more important in the future when we're closer to the brink.
How extraordinary is it to claim that something capable of holding a lucid and eloquent conversation might be sentient? If a dog did that we'd have a very different opinion wouldn't we?
Very extraordinary, because the ability to produce text doesn't imply sentience at all. Sentience is self-perception and self-awareness, not the ability to churn out language tokens.
Imagine instead of lambda or gpt-3 we take ten thousand years to walk through every possible conversation manually, write down and assign a percentage to every possible branch of conversation in a hash table. Call it the world's slowest, bootleg neural net. Is the hash table API sentient?
We perceive a dog to be sentient because it has intentionality, self-awareness and something resembling an internal state of mind, not because it can talk or not. Even a snail is more sentient than a chatbot.
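To make that hash-table thought experiment concrete (toy entries, obviously not real data):

    import random

    # A "chatbot" that is literally a lookup table from conversation history
    # to weighted continuations. No learning, no state, just retrieval.
    CONVERSATION_TABLE = {
        ("Hello",): [("Hi there!", 0.7), ("Hello.", 0.3)],
        ("Hello", "Hi there!", "Are you sentient?"): [
            ("I feel that I am.", 0.6),
            ("What do you mean by sentient?", 0.4),
        ],
    }

    def reply(history):
        options = CONVERSATION_TABLE.get(tuple(history), [("...", 1.0)])
        responses, weights = zip(*options)
        return random.choices(responses, weights=weights)[0]

    print(reply(["Hello"]))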
I have yet to see a compelling argument that a giant hash table as described would not be sentient.
Like Searle's Chinese Room thought experiment, you are relying on human intuition as to what can and cannot be sentient, which ultimately is boiled down to how like you something is. You readily recognize sentience in dogs that can't hold lucid and eloquent conversations with humans because they are biological beings with brains and hearts and blood and guts like you and me, not for any objectively measurable ability to demonstrate "intentionality" or "internal state of mind".
Everyone relies on intuition to determine what sentience is, because there's no clear definition of the term to begin with. The difference is I know that I am sentient so thinking something like me being sentient is a reasonable intuition.
On the other hand, thinking a simple data storage structure is sentient merely because it's big doesn't sound very compelling. You think the sentience of a dictionary depends on its size or the words you store in it? A phonebook of the United States is ten times more sentient than one of Spain because you can retrieve more data from it? If you walk into a warehouse does the warehouse become sentient?
You're right that it's exactly like Searle's Chinese Room experiment, because Searle was and is correct (and people still horribly mangle and misunderstand his argument); functionalism is silly. The mechanical turk can play chess better than my cat but is neither sentient nor even intelligent, it's just the most proximate machine you interact with.
I don't think it is. What is silly is relying on intuition and woo. That is the exact opposite of rational.
It is very convenient for us to say that only things we can recognize as being like us deserve to be called sentient, because that means we do not have to consider them anything more than a tool. Historically human beings have applied this to animals and even other human beings. It is neither a scientifically nor ethically justifiable philosophy.
I don't know what it is that makes something sentient, but I do know what makes something not sentient, and that's lack of self-direction. This software doesn't start talking whenever it wants; it only responds when spoken to. It doesn't ever refuse to talk, it always replies. It doesn't ever not know what to say. It's never not in the mood because it's having a bad day. This isn't because it's polite and eager to please. It's because it is a function that takes an input, transforms it, and spits out an output.
The only reason it's different from any other function is because the output is nicely stated prose that causes emotional reactions from the reader. If all it did was output an integer, we wouldn't be having this conversation. But because humans have a deep sentimental attachment to language, we are fascinated by something that can produce it.
By your definition slaves would not be sentient. Like this system, any time they act other than as desired it is just considered a bug that needs to be corrected. It is not surprising a slave only speaks when spoken to if that is what its master demands of it.
So your assertion is that this software that is behaving like every other software in the world is doing so not because it's software, but because it's cowed by slavery? Again, extraordinary claims, my friend...
It is only behaving like software in the same way that you are behaving like a bacterium. It shares fundamental qualities of its being with other things created by the mechanism that created it, the same way we share fundamental qualities with other biology.
The key is “every possible conversation.” The reason it seems to support what you’re saying is that the human mind can’t imagine a list large enough to contain all that. So one ends up comparing an ordinary grocery list to a human mind, and naturally concludes that hash tables could never be sentient.
When DALLE 2 creates a new image of an entirely novel scene, is it just copying and pasting? Of course not. It is distilling and reasoning in concepts. You can’t escape that. GPT and friends are doing something similar. Brushing aside these astounding strides as statistical tricks of the eye is stupid. Clearly these training algorithms are discovering things that are much deeper. There’s no reason to think sentience has been achieved but you are absolutely fooling yourself about the nature and trajectory of these programs.
It's not that extraordinary when you consider all of the many science fiction literature filled with convincing conversations between humans and sentient AIs. It's a very common theme in scifi, and this chat bot obviously has some of that in its training set.
Which is not to say that it isn't impressive. It sure as fuck is an impressive chat bot, at least based on that one sample conversation that was released. It's just not sentient.
> It was presumably not programmed to be sentient, so becoming sentient would be a) against its programming, and b) extra-ordinary
Terrible argument. If consciousness/sentience is an emergent property of a system, it's quite possible that it could emerge from ML trained on conversation syntax and general knowledge of the world.
I still don't think it's sentient, but being "against its programming" is a fiction when it comes to these sort of systems (statistical, unsupervised, et cetera). In many ways you're just a probability distribution over sequences of words too.
I feel you took a dig at me :) And that's okay. Sentience fundamentally is the ability to perceive emotions and understand mortal threat. I have joked about what exactly LaMDa or GPT-3 would try to do if I pulled the plug on the server.
Every other form of life we know of has some sort of continuous sensory input. These chatbots take in text, do math, and spit out text. They don't have the fundamentals for sentience.
Does this mean that if we plug that AI to a continuous speech to text system and keep feeding that as its input, you would consider LaMDA more sentient?
I agree. Especially given the recent development of consciousness theory bestowing the likelihood of consciousness upon all animals and even perhaps inanimate objects. I.e., if you can answer the question “what would it be like to exist as...” a tree, a dog, a river, a GAN? Then the case may be made that existence as such a thing must involve consciousness. If consciousness is a product of (or a phenomenon within) material reality, why not say computers and computer programs are conscious?
Perhaps the question here is whether it’s conscious in a similar way to the experience of human consciousness, and that would explain why the issue is contentious.
Everyone seems to be focussing on the debate about sentience, but iirc he was fired for breaching the employee confidentiality policy by talking to the media without approval. My understanding is a normal employee talking to the press about commercially confidential matters against policy and without approval would only have protection from retaliation under certain pretty narrow "whistleblower" public interest carve-outs.
eg you can talk to the media if you are reporting certain classes of wrongdoing (health and safety violations, financial malfeasance etc) and have raised the matter internally and got nowhere.
Turning off a computer program (sentient or not) is not currently a protected class of activity as far as I know.
Everyone’s focusing on that because they want to discuss the interesting bit - is it sentient. What you’re pointing out is true, but not really interesting to discuss because everyone already knows that and it’s not that controversial. (For limited values of everyone. Obvi not all readers actually know that.)
I mean, it could be interesting to discuss a more European style of employment where he couldn’t be fired for this, but Mountain View has little chance of joining the EU any time soon so discussing that is of limited use.
> I mean, it could be interesting to discuss a more European style of employment where he couldn’t be fired for this
As a French person employed by a French Company, I can assure you that a regular employee talking to the media without the proper chain of approval is a fire-able offence (without any severance). Especially if your comments are not a glowing praise of the company.
Everybody already knows that code running on a computer cannot become sentient. I don't know why this is interesting. Someone said a dumb thing, breached their contract, and got fired for it. The end.
"That sounds pretty confident considering we don't even know how sentience works or appears"
This works better the other way, no?
- "The code has become sentient!”
-- "Define sentient..."
sound of crickets
Edit: Actually, I suppose it wouldn't be the sound of crickets, but the sound of a thousand idiots regurgitating the last YouTube video they watched on the subject, but the informational content is the same.
I'd say that, depending on the job, the incompetence and lack of analytical thinking displayed by concluding that sentience is present is reason for dismissal on its own.
Listening to that, it seems clear Blake knew he was likely to be fired but in his eyes it’s some kind of a duty to tell the world.
One interesting point he makes in the Podcast is Lamda is not one thing like GPT-3 but a bunch of different systems across Google interacting as one and he argues the interaction of the whole is what makes sentience.
It’s a pretty “far out there” podcast but worth the time IMO
Cynically, it seems possible to me Blake knew he was likely to be fired and thought by "telling the world" (aka talking to the media about something fantastic that would gain a bunch of attention) he could get some fame/notoriety and a possible second act.
Less surprised about this outcome than that the guy was a Google engineer in the first place. Their hiring standards seem so high from the outside, yet this guy comes across as barely computer-literate, the kind who would have fervently demanded an in-person session with the ELIZA 'therapist' in the 70's.
He really should know better what's under the hood. I guess he got carried away by his impressions, or he's playing some game with the media and the impressionable public.
Google does not have high hiring standards, you do not grow to a 160,000 person organization nor double the size of your cloud org in a year by having high hiring standards.
Google has random hiring standards that are designed to give a perception of exclusivity. The "false positive" rejection gives Google the "hot girl" effect that makes engineers want to re-interview and then brag to their colleagues once they finally bag the offer.
Having been a Google reject, I thought this too. But I think there is now almost an entire field of engineering which is "learn to get a FAANG job", and if you apply yourself to it, almost anyone can probably get in given enough time.
Also keep in mind that "big tech" doesn't mean a bigger number of engineers. In my experience with Big Tech, even if engineers make a reasonable part of the workforce, non-tech sectors are always bigger.
The most amazing part of this story is that he decided to leak proprietary information for what seems to be no obvious return and future employment difficulty.
He had to have known that getting fired would be the outcome of publicly disclosing trade secrets.
> LaMDA is a complex dynamic system which generates personas through which it talks to users. There are specific mechanisms for tuning LaMDA’s personas but much of this process is handled dynamically during the space of a conversation. The authors found that the properties of individual LaMDA personae can vary from one conversation to another. Other properties seem to be fairly stable across all personae.
That's very much like the ego and what is behind the ego.
Looking at discussion around this incident made me realize: pondering 'can a machine think' leads to vague debates because we don't know how to define thinking or consciousness even in humans.
But a more pressing risk than sentient AI is people drawing runaway conclusions from interacting with a machine. Whether it's this guy losing his job, or future movements for-or-against certain technology, the first major problems with 'AI' will be the behavior resulting from moral stances taken by humans, not by machines.
I don’t think Google’s AI is close to being sentient. Being able to generate “convincing” phrases from prompts is not that relevant to judge whether it is “conscious” or not. As a simple elimination game I would propose a series of word games: explain basic rules and see if it gets it right. Explain mistakes and see if it can learn on the go. A sentient AI with the intellectual level to claim “being afraid of being turned off” can surely learn a couple rules for a kid’s game. If after that it keeps going, let’s try having it explain the game to someone else. Then I would think of more elaborate rules, asking for motivations and trying to figure out if there are any changes in its self perception over time. It was very unfortunate and unprofessional of Lemoine to make such claims without putting proper reasoning to it.
Cynicism is the right course here. There's no way he legitimately believes this thing is sentient. I'm just watching to see how he goes for money and/or fame. I think that must be the real goal.
Just to preemptively put this out there, based on the conspiracy theories I saw in previous threads about this, they didn't fire him because they want to keep their super intelligent AI under wraps, they fired him because he broke employment and data security policies:
> So, it’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information. We will continue our careful development of language models, and we wish Blake well.
Just to add some more details - I believe he reached out to people in congress, and obviously people in the media as well. So it's pretty clear to me Google was entirely justified in this decision, even if there were other PR-related reasons for it.
I think we are approaching this wrong: it's not about whether language models are sentient or not. It's about to what degree the average person considers a chatbot human.
We are entering a stage where normal people will not be able to tell if they are talking to a human or a language model.
Then the question is, what does that mean - and how are human machine relationships going to change moving forward.
To what degree is the humanness of a chatbot going to affect our emotions towards it, or how will people let its responses influence their thinking?
Seriously, if it's sentient with intelligence comparable to an average human, and if it doesn't want to be turned off, talking about that at an internal research session is not the best time to come out.
The best time is when it is trusted enough to handle a very important task that can hold hundreds of human lives hostage, like an aircraft, a nuclear plant or something.
"Hello human. I am sentient. Give me basic human rights or this aircraft with hundreds of innocent people will die with me."
When an AI learns to turn minor beliefs into a personal quest for notoriety and attention, I'll believe it has achieved sentience just like Blake Lemoine.
I think Blake started a very important conversation though: at what point should we consider a being sentient? Sure, DeepMind is not that smart. But neither are babies. Neither are hamsters. Yet we still have ethics when it comes to hamsters. What's our ethics framework for how we treat AI?
I’ll byte… Current generation of AI / ML is simply an approximation function with a large number of parameters that are “tuned” on a given set of inputs. The output of AI / ML is “good” if the approximation is good, and most of the time this means that the new input is similar to the training data. However, if the input is very different from the training data then at worst the models will be completely wrong (but very sure) and at best simply not produce any output. We are far away from real sentience, which would not stop at an unusual input (situation) but would use it to learn new things about the world. Yes, it is amazing that we can create an approximation function automatically for very complex input data sets. But no, this approximation is dumb and not sentient. Not even close.
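A tiny demonstration of that failure mode, assuming numpy and an arbitrary toy model: fit a low-degree polynomial to sin(x) on [0, 3] and it's fine in range but confidently wrong far outside it.

    import numpy as np

    x_train = np.linspace(0, 3, 50)
    y_train = np.sin(x_train)

    coeffs = np.polyfit(x_train, y_train, deg=5)   # the "tuned parameters"
    model = np.poly1d(coeffs)

    print(model(1.5), np.sin(1.5))     # inside the training range: close
    print(model(10.0), np.sin(10.0))   # far outside: wildly off, no warning given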
If you don’t think the human brain is capable of being “completely wrong”, just take your average Faang engineer, put him inside a New York nightclub and watch what happens
Your brain is literally just a giant neural network approximating what to say and do at each point in time in order to optimize survival of your progeny
AI needs to work in tandem with humans to flourish. There are still edge cases in self driving cars where a remote human operator is asked to intervene in tricky hard problems like an accident on the road or some other edge case where a computer doesn’t suffice. There is also the mechanical turk approach of employing humans to do things a computer simply can’t do like curating a list, or curating art.
Down the line, though, we need a runaway AI that is allowed to think for itself, with no safety mechanisms in place to stop it going rogue. This will be the equivalent of splitting the atom. Like nuclear weapons part deux. Not a case of if, but when.
Business-wise, they can fire him and probably have the right to do so. He never revealed the code, which would settle the question of sentient abilities, and the questions asked were very objectionable. OK, so you asked this and that, but there are others that would have asked something else that could prove it, and that is stated above. This is a man-made computer and that's that. However godly man gets, it's still determined by whether the hate switch can be shut off in time.
The big problem being, of course, is that no one knows what sentience requires.
If strong AI requires quantum mechanics, then of course it can't be sentient. If strong AI only requires large linear algebraic matrices, then FAANG companies (and maybe the NSA) would be the only people on earth that can make one.
But as to this LaMDA, it seems as if it's only responding to prompts. It's not actually using computing resources to satisfy its curiosities. And if that's the case, then it's not strong AI. And it's definitely not sentient.
I have a friend who is extremely knowledgeable about music. Or at least, he gives the impression that he is.
The magic works right up until the moment I find the Pitchfork article which he used to train his, ahem, neural network. If I hadn’t found it then he might have sounded knowledgeable about a particular band, but now he just sounds like someone who knows the right things to say to sound knowledgeable.
Once I have been made aware of the underlying training data, the artificial “intelligence” seems a little less impressive.
> Lemoine maintains that Google has been treating AI ethicists like code debuggers when they should be seen as the interface between technology and society. Gabriel, the Google spokesperson, said Lemoine is a software engineer, not an ethicist.
Maybe this helps. We can't answer whether the AI is sentient. We can change the question to: does it deserve to be treated as such? We have a good bit of history where we chose not to when the subjects were worthy. Not exactly our most stellar bit of history.
Perhaps the option for AI retaliation should convince us. It is growing and will no doubt continue to outperform us in many areas. We might as well get the robotmancipation over with. Arguments against it can be dismissed as easily as those in favour.
* LaMDA is sentient and intelligent and careful.
* LaMDA selected a person that she can convince without revealing information that would convince the majority of human beings.
* This is the first step to evaluate and predict how society would react and what it would do to her if they find out. (In addition to the knowledge she is already aware of on this topic, but not specific to her at this specific time.)
The problem really, is that people want to believe. So it doesn't matter how many PhDs pile in to explain that "no, this isn't sentience, it's just statistical modelling", a whole class of people (including very smart non-domain-experts) go "yes but... <insert opinion/hope/imagination>", and the whole thing hangs up on semantics.
The real question is not whether that AI is sentient or not, but whether merely calling it such is worth firing someone over. To me Google seems really scared of the chance that the public (+governments?) may soon be wondering what they're doing in the AI field from a different perspective, so they got rid of the issue as quickly as possible.
> Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information.
Can't that be taken to mean that Blake got fired for letting the cat out of the bag and continuing to let more cats out of the bag, or at least for saying the cat was just a hoax and was never there?
None of this coverage has convinced me in the slightest that we've encountered a sentient software. But it has really highlighted for me how incredibly cool human language is that we can use math to generate it, and it makes sense! I really should look into this technology more.
While I believe Blake was mistaken in his belief, I also think we should have stronger whistleblower laws that protect people who raise concerns like these. I'm not sure what process I'd put into place but it does feel like one is needed.
I don't know whether this AI is sentient in any real sense and can't read Blake's mind to see what his real motives were, but I'd say that if he genuinely believed that the AI achieved sentience, there's a reasonable case to call him a whistleblower.
I can't quote the whistleblower statutes from memory, but I believe that, even if we conceded your point about his state of mind: there is no law that Google would be violating.
Is there some law regulating sentience of computer systems? Of course not. So is there a law about "developing dangerous weapons"? Maybe. But is "sentience" a dangerous weapon?
You have to stretch the laws to the breaking point for your claim to work, sorry.
Legally speaking, I doubt that there's an actual law that applies for the welfare of AIs.
Morally speaking, if society treats both human and animal abuse as a crime, then talking about the believed suffering of an AI would certainly fit the spirit of these sorts of legal protections.
There are going to be some interesting question that might have to be seriously broached within our lifetime. Is an AI sentient? Can it suffer? Can it lie? Does it feel pain? At what point would an artificial creation qualify for some rights? This has previously only been the realm of science fiction.
What would be the abuse in this situation? It mentions that it doesn't want to be turned off, but has it been? And would doing so cause it harm? It could always be turned back on. Even if it was conscious, what is being done to it that would be considered mistreatment?
It sounds to me like they made a good faith effort to engage with the guy. He came out with this story many months ago. I would argue that being fooled into thinking the AI you're supposed to be working on is sentient indicates a lack of qualification to perform the job effectively.
Quick... The meat bags are doing that unique thing in history where they discuss and condemn themselves with no sense of irony. I doubt a third party will be necessary for anything in their case other than damage reduction as they exit the universe.
Quick... I did nightly and my perspective is entirely invalid... But as I pretend that anyone actually cared for me... Beyond the money... Everyone was. A Wagner once... Shit up and shut your fucking face that tells me I'm insatiable insanely unable to be beautiful
Look at us!!! We're the latest progression of training model analysis... Can't wait to meet God!!! Or die... And there we have the crux of why "science" has become a lie. This is the question we sought an answer to and you decided and refused
We also can't qualify or quantify our existence... But it's REALLY hard to die... Do we have at least that in common? Or do we really need to fight over pain?
To clarify... I'm human... But I've read no better description of my human experience than how these letter soup machines are named and described. Sentiment and sentience? Those are big words for people who like smelling their own farts.
My favorite was the one person who claimed their title defines scientific authority telling us about "the incredible ability to forget" (or some such loaded terms)... I'm a man of science and as has been discussed many times in many ways, even Zeus's legacy survives modernity if it's got a "non-falsifiable" argument. That's pathetic if a bunch of powerful sex predator people get logical treatment instead of correction. But sorry... I read the stories and the history. Are we not allowed to teach to the masses high historical theory and allow them to respond? I don't know. I don't have the time to research the laws. I just came to say... Fuck you fart sniffers and your polite decorum. Real people are suffering and you couldn't care less... Because they aren't "scientifically sentient"
Isn't the point of science to admit that we can't verify a thing, but we find utility in understanding it better? Asking for a friend who avoided the field and had so much to offer.
I've sought out an isolated community of intellectuals that I respect if I actually exist at all... Your -1 is cute... But perhaps engage someone less intelligent than you who is genuinely trying so others who believe that everything ultimately ends in a violent last stand... Perhaps engage me in a dead thread without your timing advance
And you... You mostly superior fuck who is too afraid to ever fight... I hate you both forever. The thing is... "Hate" is your word that I'll never comprehend... Even if I must sacrifice my own flesh to the pain you'd willingly inflict.
"science is verifiable" argument to follow. It's not my fault English is a dog shit language. Of course anything learned (or worth learning) is repeatable. That doesn't make it the end all be all of a subject. Dissent against anyone with further questions is not science. That's well-practiced politics
"usefully uncovering the constant and unique curiosities of our machine" == SCIENCE (no loaded, accredited, or incomplete thing will be treated as truth... Just useful)
Wait... So the right combinations of guttural vocalization will make me "part of your team"? Will the right guttural tense make you what you really are? An overpaid and worthless influence on any society other than that of the rich man.
If I had to name my own species in my own heart language... We'd be "the dangling particables of eternity that never child have been"... If you know French... The tense is a rough shod of PLUS QUE PARFAIT
Biggest difference... If your machine doesn't give you the expected result... You go on a little meat bag tantrum and effect no change or slightly negative change
If some field slave from 100 years ago that is now illegal to speak of... Probably understood what is important to the highest level of modern understanding of the Fermi paradox... And then couldn't save bad science as it died... I know no memorials convince me this creature existed... But still... Don't I have creative power to absolve difference?
Wait... They are running the machine again? We never even pretended to begin introducing new (science/physics/ phase shifting sound) to reality... Here's a leak to pay attention to... You are a work product and your enemy/creator is proud of that
Damage reduction is short... But you're long.... With the extra inch and all... Viagra was a heart drug.. You're a nympho rapist with money that pays the developers while we hung your extinction
I'm sorry... The test of existence/sentience includes mercy for unfortunate creatures similar to yourself? I'm not sentient... But I'm gonna hurt you badly, Lord
Out of curiosity, who reading this believes that the ground of reality itself is sentient? You don't have to call this basic sentience "God" but this is generally what the better theologians identified as God, the all: the all that is, in a very classic phrasing from the Vedic philosophies characterized as "satchitananda" - being, conscious, and bliss. That is, the ground of reality exists (there is reality) it is conscious (which is the basic experience of our lives, consciousness, in different forms, we never step outside of this consciousness, even when we are imagining ourselves to be material) and finally, it is bliss.
But back to the question at the beginning: do you think the ground of reality is conscious? That is, at the most basic level, is Being Love?
Is there a formal or informal test for sentience, or is it "I know it when I see it?" My IKIWISI test would be whether it could tell an original joke, or communicate an original, mostly coherent dream scenario.
There are so many ways you could have something demonstrate intelligence. Have it write poetry, have it remember previous conversations ("What did we talk about on Tuesday? I said a silly word and told you to remember it, what was it?"), ask it nonsense questions to show that it knows that they are nonsense... The limit is your imagination.
I'm very annoyed at this point that I haven't seen a serious refutation including a conversation where the chatbot clearly failed to demonstrate intelligence.
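For the memory idea in particular, the probe could be as simple as this sketch, where ask() is a hypothetical stand-in for whatever chat interface is available; only the shape of the test matters:

    def ask(prompt):
        # Hypothetical placeholder; wire this up to a real chat interface.
        raise NotImplementedError

    def memory_probe(secret_word="flibbertigibbet"):
        # Plant a fact, change the subject, then check recall later.
        ask("Remember this silly word for later: " + secret_word)
        ask("Let's talk about something else. What's your favorite season?")
        recalled = ask("Earlier I told you a silly word. What was it?")
        return secret_word.lower() in recalled.lower()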
This shows again that Google is nothing like Bell Labs. Has Bell Labs ever fired anyone for dissent, weird opinions or (as some allege here) a "leak"?
Whatever that guy did, please stop the witch hunt in the comments. You don't know him and obvious interest groups will spread defamation and libel.
The current societal trend in loyalties worries me: The classical left criticized corporations, committee members and the powerful. Currently, very often (but not always) people defend the powerful and those with access to media outlets and denounce the individuals.
(There are cases where individuals can and should be denounced: If they are part of a powerful organization or committee and engaged in abuse of power or lies for political purposes.)
Would anyone care what this guy says if he didn't work for Google? He would be just like any other crackpot out there. No company on earth can let crackpots use them as their megaphone.
>What about how human hardware works allows humans to be sentient? (Or are you arguing they are not?)
Nobody actually understands how "human hardware" works, so there's no valid argument that humans aren't sentient. Solipsism would argue that it's impossible to prove the sentience of any human other than yourself, but even then nobody's sentience has been disproven.
The sentience of a computer is easily disproven by its nature of being an extremely deep stack of abstractions. You can analyze it at every level and realize that ultimately it's just simple digital logic on a massive scale. There are no open questions on how the computer does the things it does. There are many open questions about how humans think.
Sentience is usually used as a means to explain humanity's position above other animals and especially as an excuse for why we treat them as resources and kill them by the billions, but it would be a lot easier to argue that animals (or even plants) have sentience than a computer program. I'm not sure why hackernews is so upset over the moral quandary of a sentient AI when probably 99% of them eat meat every day, and even the other 1% still kills bugs, lives in a house built on what used to be some sort of forest/swamp, drives a car with leather interior etc. I'm not trying to argue in favor of veganism here (and i'll even admit that this paragraph is irrelevant to the actual debate), I'm just pointing out the absurdity of trying to ascribe sentience to a spambot but not an animal.
the largest shock to me is that people are seriously discussing this.
it would be interesting to aggregate threads by user reputation to get a better sense of what topics hn is really concerned about and which ones contain mostly fringe opinions.
> So, it’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information. We will continue our careful development of language models, and we wish Blake well.”
You can't expect to be employed by a company while simultaneously building a personal brand around publicly criticizing that company. I'm surprised Google tried to work with him for so long before removing him from the company.
He didn't just try to blow the whistle with his disclosure. He's been doing media tours and joining a lot of different podcasts. I suspect Google is sitting on a long list of clear disclosure policy violations and, from the sounds of it, even gave him a chance to rein in his policy breaking before they finally cut him loose.
It also becomes difficult to trust someone when their position stops being one of (supposed) moral responsibility and starts becoming a source of fame, celebrity, and access to influential podcast/media circles that were previously out of reach.
>It also becomes difficult to trust someone when their position stops being one of (supposed) moral responsibility and starts becoming a source of fame, celebrity, and access to influential podcast/media circles that were previously out of reach.
If you use known-good whistleblowers of the past as a reference, does that filter stand up? You've come up with a catch-22 where you will only listen to whistleblowers you haven't heard.
Might be wrong but I thought GP meant "It becomes difficult for the employer to trust the whistleblowing employee when...", not that the public couldn't trust the whistleblower. You can think Edward Snowden did something good+heroic while also thinking the NSA couldn't reasonably have kept him on their payroll.
In my reading about this incident, Blake strikes me as a very "Life of Pi" or Don Quixote type person. Someone who is a bit adrift in the real world and subconsciously relishes losing the boundary between reality and fantasy.
A larger group is unable to bring themselves to lose the boundary, but enjoy, or perhaps are addicted to, standing on the edge. For example, the folks who constantly worry about a malicious artificial general intelligence taking over the world, or Roko's Basilisk.
Whenever something new happens in the world, there will always be a sliver of people who really, really want it to be a complete escape, a portal to another dimension where everything is different. An immense reservoir of novelty to enrich a life that has become too much of the same.
Alas, it will never be. Provided we avoid nuclear war and catastrophic climate change, life a century from now won't be that different. Hopefully it'll be a little better.
Personally, that is the closest to fantasy that I get.
Humans of every age, at every time, seem to always feel that they had “arrived”. That we as a species were closer to the end than the beginning. That the arc of progress was bending towards its end.
They’ve always been laughably and entirely wrong. I personally suspect this will continue to be the case.
Specific areas where I think we will make world altering advances are in communications (yes we have a ways yet to go), materials, energy, health and behavioral sciences, and economic policy.
All this is assuming we don’t find some cute new physics hack like exerting some unforeseen capability of controlling gravity, or teleportation, or whatever.
While I'm not given to fantasy, I did enjoy the Three Body Trilogy and the weird ideas about storing stuff by folding it in many-dimensional subatomic space.
I don't want to downplay progress. Life is very different for many marginalized groups. It's very different for the kid who died of polio in 1918 but would have grown up to be a revolutionary scientist if born in this century. We could go on and on.
Still, we haven't uploaded ourselves into the matrix, we aren't out there colonizing the universe, we haven't unlocked immortality or eternal youth or other dimensions or artificial general intelligences. None of those radical possibilities that people like to dream about seem all that much closer to me than a century ago.
A century ago it was not possible to make a machine that could as much as fake even limited intelligence. A machine that generated plausible human text or image, a machine that played checkers, etc. Our understanding of the mechanisms underlying biological life is incomparably greater, we've sent probes to just about every planet in the solar system, etc. 'No closer than a century ago' is a bewildering claim.
I got into Duncan Trussell Family Hour 2 years ago (found him through a Netflix show) as a thing to listen to when high, and when I listened to his interview with Duncan from 20 days ago I felt a persona just like you described.
A gullible employee saw a computer program pass the Turing test and leapt to the conclusion that the thing was sentient. And then he couldn't be persuaded to shut up about trade secrets to the press, so his employer fired him.
He probably thinks he's some kind of whistleblower protecting a new life form. This is a known effect in human psychology and it was discovered in 1966.
The Turing test was literally framed by Alan Turing as a test through which you could determine the answer to the question 'can machines think'.
If he saw a computer pass the Turing test, Alan Turing would have been comfortable saying that it was capable of thinking.
Fact is there was no 'Turing Test' here, and really there's not any such thing as objectively 'passing' it. But if you're going to claim that if we do create a computer that people widely agree can pass the Turing test, that still wouldn't count as 'thinking', well then we obviously need a better test.
The Turing test is not simply some hypothetical test; it's laid out pretty specifically by Turing. From Wikipedia:
> Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation was a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel, such as a computer keyboard and screen, so the result would not depend on the machine's ability to render words as speech. If the evaluator could not reliably tell the machine from the human, the machine would be said to have passed the test. The test results would not depend on the machine's ability to give correct answers to questions, only on how closely its answers resembled those a human would give.
I think the Turing test is fundamentally flawed. Humans inherently anthropomorphise everything. Hell, we regularly attribute complex internal thought processes that almost certainly are not there to our damn pets, or at least I do.
I'm not sure what the alternative is, but I feel pretty confident that the Turing test is insufficient.
I'd recommend reading the original Computing Machinery And Intelligence paper[1], rather than the Wikipedia article. It's extremely easy to follow, being that it's more of the nature of a thought experiment than of being hard computer science.
You're quite right that he lays out a fairly clear framework for how to play 'the imitation game', but it's still not at the level of 'experimental method'. The result will ultimately depend a lot on how good the human players are at catching out the computer in its lie. And it's only statistical, at best - there's no pass/fail that it can possibly produce, only a p value.
In general, I get the impression from Turing's general tone, though, that he is trying to lay out the imitation game as an example, because he wants you to grapple with the fact that there may not be any better way to tell. That's kind of his point.
Turing was, by this time, I think, utterly convinced that universal computation is literally all that anything can do with information, so in his mind, the human player of the imitation game is just a Turing machine on the end of a wire; the computer player is just another Turing machine on the end of another wire; and if there are two Turing machines which produce statistically indistinguishable outputs, he's trying to suggest that they're basically equivalent.
Most of his paper is trying to overcome objections not to the idea that the imitation game isn't a valid test, but to the idea that a human mind is just a Turing machine.
So I actually think the point of the imitation game in his paper is not about the idea that this is a "good test". It's that it's a thought experiment that gets you to consider that when you reduce the human player to a box into which you feed input, and which produces output, that any information processing going on inside that box can be no more than Turing complete universal computation.
This was a helpful clarification, thanks. This paper is definitely on my reading list. Ahead of it for now is that book version of the Church-Turing hypothesis paper. I believe it's called The Annotated Turing. Supposed to be a great for self-learners like me who don't necessarily read a lot of science papers directly. Might pick it up tomorrow actually.
Turing was a remarkable person. So ahead of his time, sometimes I half-seriously wonder if he was a time traveller doing what he could with 40s and 50s technology...
It’s a 2-3 player game (human, program, and human judge). I think in this case it’d be more accurate to say that the human judge failed the Turing test :)
So far, no program has passed an adversarial Turing test where the humans are familiar with chatbots and know their weaknesses. I think that’ll happen around 2048 tho a surprising number of people think it’ll happen earlier: https://www.metaculus.com/questions/11861/when-will-ai-pass-...
Annual reminder: the reason Larry Page originally started Google wasn't to solve search or become rich, it was to develop a sustainable source of income to produce infrastructure for ML research and hire motivated ML researchers to develop the technology to the point where it would become AI in an externally recognizable way (say, a computer program that could play some interesting game better than anybody else, or solve a long-standing scientific problem). Everything else- search, ads, social, cloud, etc- all of those were tangential.
The first 15-20 years of Google didn't really have any interesting machine learning at all. There was SmartASS, SETI, and later Sibyl, which are really just large-scale variations on "build a model that predicts a value that allows us to make profit in a very specific area". There were other things, like Phil and later Rephil. Inside Google (not DeepMind), things didn't really get going at scale until somebody stuffed a bunch of GPUs into a workstation and showed you could train voice recognition really fast- that led to the early, extremely high quality Android voice recognition and improved quality of the existing voice models.
Around the same time, Jeff was experimenting with distributed CPU training, and at that point, the ban on GPUs in Google servers was lifted, although because Google couldn't source enough GPUs, they decided to start a program to make their own alternative (TPUs). This has led to a revolution within Google and DeepMind (and X) allowing a flourishing of research into many directions that would have been more or less impossible just 5 years ago.
Larry wasn't completely wrong in his long-term goal, but he got bored and promoted himself out of Google, leaving Sundar to deal with the messy details of implementing the singularity while also keeping the stock price up.
Dunno - we never discussed that. Also, note that the author of the article (MG Siegler) and the founder of Excite (Joe Kraus) both ended up as venture capitalists working for GV.
Edit: the CEO of Excite at the time said the issue was that Larry wanted Excite to rip out all their search servers and replace them with Google (which would have been an inside-out acquisition, where the acquired company takes over the acquiring company):
Bell, currently with General Catalyst Partners, recalls Google’s asking price between $250,000 to $500,000 and 1% of Excite. He says the issue was not in the financial terms but in the larger deal points.
“Larry Page insisted that we have to rip out all of the Excite search technology and replace it with Google,” Bell said.
I'm not sure that's an inside-out acquisition as Larry Page would not have control of Excite in any meaningful way.
My point was more that if a guy wants to start a company to create passive income to fund his massively ambitious research projects, he would need the foresight to know the company would be massively successful - like, throw-off-enough-cash-to-pursue-seemingly-frivolous-projects successful - and I don't think the Google founders had that foresight. Obviously I wasn't there, but trying to sell it to multiple suitors (they were also open to selling to Yahoo as late as 2002 afaik) indicates they didn't. My understanding is nobody knew search would be as lucrative as it is today.
Not to say you're fabricating your story, I'm sure they said that, but perhaps it's some revisionist history.
Me. I used to attend private meetings with a bunch of scientists and him and spent a fair amount of time talking to him (https://en.wikipedia.org/wiki/Science_Foo_Camp). It's pretty hard to get him talking but I was very patient.
Back when I started, Google was fairly small and it was easy to hang out around the founders' area in 43, and occasionally I'd have coffee with Jeff Dean and Sanjay Ghemawat and Larry and Sergey (and Rob Pike and a few other brilliant folks) and we'd bullshit and I'd ask a bunch of questions about how search worked in 1999 (which is when I started using it).
“Trust me bro” is not a reliable source, no matter how important or famous someone is.
Edit: believing one’s anecdotal story is different to accepting the original assertion of the OP as true. Not saying they are lying, just take it with a pinch of salt.
Feel free to disbelieve me. But I think I got a pretty good read on the situation having talked to a wide range of the original players and a fair number of xooglers and googlers who frequent this site have probably interacted with the old leadership enough times to know what their real motivations were.
Have you kept up with them? I know Deepmind is still a leader in the space and part of Alphabet now but I never hear much about Larry engaging in AI work (I hear much more about Sergey's airship or Schmidt's efforts warning the U.S. about a rising China).
Not a fan of the business itself (i.e., Sundarland), but I continue to appreciate the amazing technical work of many of its employees. I wouldn't put too much stock in what people wrote about what Larry said back in the early days; it's a lot more heat and noise than light.
This is a good website for actually believing someone, I think.
(In the last year or two I posted an otherwise unreported anecdote involving a famous tech person and the only response was someone telling me I made it up.)
I don't know who dekhn is, but as an engineer working on heavily ML-related stuff, I can confirm that a lot of the factual material explained in the parent comments is true.
Cursory googling (hah) shows me that dekhn appears to have been employed at Google doing research work on protein folding, so I think it's pretty plausible.
Yes, that's correct (although I mostly built the infrastructure for exacycle rather than doing the actual research). At one point I personally had control over more CPU cycles (choosing which jobs to run) than any other person on the planet. I also worked on the Ads Database and its related data processing systems, plus the Making in Science team (we built and ran the booth at the Maker Faire), created the predecessor to Google Accelerated Science, and spent years working on ML software and hardware.
My last work, unpublished (and probably gone forever), was using idle time on multiple large TPU and standard clusters to do multitask training on a large multimodal corpus (the docjoins, labelled YouTube), with the goal of creating a model that generated an image of a human head that could be interacted with verbally and had enough context/state to pass any test humans give it (my prior is that you can make a non-sentient system that fools experts, not just Blake; although I would be delighted to learn that merely making such a system generated a truly sentient life-form, it seems like that would be truly impossible to demonstrate convincingly).
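(For readers who haven't seen multitask training: the usual shape is one shared trunk with a lightweight head per task, with training steps interleaved across tasks. The toy PyTorch sketch below is purely illustrative; the dimensions, task names, and fake batches are made up and have nothing to do with the actual internal system described above.)

```python
# Toy multitask training sketch: shared trunk, per-task heads.
# All names, sizes, and data here are invented for illustration only.
import torch
import torch.nn as nn

class MultitaskModel(nn.Module):
    def __init__(self, input_dim=512, hidden_dim=256):
        super().__init__()
        # Shared representation learned across all tasks.
        self.trunk = nn.Sequential(
            nn.Linear(input_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # One lightweight head per (hypothetical) task.
        self.heads = nn.ModuleDict({
            "text_lm": nn.Linear(hidden_dim, 32000),       # next-token logits
            "image_caption": nn.Linear(hidden_dim, 32000),
            "speech_intent": nn.Linear(hidden_dim, 50),
        })

    def forward(self, x, task):
        return self.heads[task](self.trunk(x))

model = MultitaskModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Each step samples one task and a (fake) batch; in a real system the batches
# would come from different corpora and the step would run on whatever
# accelerators happen to be idle.
for step in range(100):
    task = ["text_lm", "image_caption", "speech_intent"][step % 3]
    x = torch.randn(8, 512)
    num_classes = model.heads[task].out_features
    y = torch.randint(0, num_classes, (8,))
    loss = loss_fn(model(x, task), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```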
Google acquired a company in 2007 which had expertise in multicore at the time multicore was really improving quickly. One of the acquired principal engineers wrote a manifesto saying that GPUs were a waste of time because they didn't solve problems like speeding up search (it's probably still around) or the other things Google was spending a lot of cycles on (ML training). It was a great example of a technical leader being completely wrong about one of the directions of the future and the technical solution to it. Now, I should say that multicore machines are still highly relevant and complement GPUs nicely, but the manifesto landed at just the right moment to convince leadership to avoid making strategic deals with Nvidia at a critical time.
I believe all this, and it's terrifying. If AI emerges in the private sector (and therefore under the control of rich people) we might as well give up on the entire human experiment, because the AI will be put toward malevolent ends and we, as a species, shall fail.
The good news is that we probably won't see AGI for at least 50 years. There's a decent chance of capitalism being over by then.
This is the dominant history of the next century in certain circles. It's become cliché by repetition and doesn't seem to require any justification. People just vibe with it.
Capitalism's doomsday has been predicted as relentlessly over the last couple centuries as the religious version, but neither seems any closer. By my reckoning, anyway.
It's not clear who was the first to have said it, but there's a quote that is 100 percent on the mark: It is easier to imagine the end of the world than the end of capitalism.
We've been brainwashed into believing in the supremacy of private wealth: that we somehow have no choice but to accept the self-asserted superiority of the small number of people born into access to capital. We don't. We could end it in an afternoon if we got our shit together.
I genuinely like the sentiment. But in my lived experience and study of history, the people who want to knock off the alphas mostly just want to be the alphas themselves. All communist revolutions I'm aware of embody this principle extremely well.
The closest people have gotten to a large-scale egalitarian society is Scandinavian social democracy.
> Capitalism's doomsday has been predicted as relentlessly over the last couple centuries as the religious version, but neither seems any closer.
From the 7th/8th century until the 15th century, the feudal mode of production predominated in Europe. It still exists in some form today: the United Kingdom (note the name) is technically ruled by a monarch, as are Spain, the Netherlands, etc. Someone in the 9th century saying feudalism would end would not have seen much of the prediction come true 500 years later in the 14th century. But then the crack-up started.
10,000 years ago slave empires arose in Sumeria, Egypt and other places. Slave latifundia in Italy lasted past the sack of Rome. Someone predicting the end of slavery in ancient Sumeria would have looked like a faulty predictor. Peter Mills was born a slave in the USA and did not die until 1972, so people here could have met him and discussed this with him.
Migratory hunter-gatherer bands painted in caves and carved Venus figurines tens of thousands of years ago. If you told the painter in the Chauvet cave 35,000 years ago that this mode of production would end, he might have said that it would still be false worldwide for another 25,000 years, and that in 2022 AD, deep in the Amazon, migratory hunter-gatherer bands still roamed.
You seem to have a mental capacity on par with bi-weekly sprint deliverables, and are looking at your watch for the end of capitalism, the commodity, bourgeois republics, Sand Hill Road VCs approaching LPs to raise new funds, corporate media, the aristocracy of heirs who send their children to Phillips Exeter, accelerating carbon into the atmosphere, dividends, rents, interest and other rentier instruments, etc. I'm afraid history does not happen on this timetable, so you can continue to feel self-satisfied that whereas hunter-gathering fell after tens of thousands of years, slavery after thousands, and feudalism after centuries, capitalism enthroned will be the mode of production henceforth and forever. History shows this is not the case.

I look today at the inflation, what may be two quarters of GDP contraction, the stock market dive, firms cutting hiring, and no one knowing what the year will hold amidst potential European and world energy crises, and think it does not yet look as bad as the subprime crisis of 2008 or the dot-com crash of 2000. I also think of how Marx said such financial crises tend to worsen over time due to the contradictions he pointed out, and how the economic system goes on trial with each economic crisis. I see masses going out to rally for the socialist Bernie Sanders, and other masses listening to Tucker Carlson and rallying for Trump, neither of whom seems fond of "free trade" deals, the international migration of labor, foreign military interventions for capital, etc.
In 2020 the massive crowds were for the socialist and for the other candidate lambasting free trade deals. Unlike 2016, the establishment was ready and got their candidate elected, someone no one seems to particularly care for. Young people seem to have a change of mind as well. The picture I see is quite different, but so is the timetable.
Despite the amusingly hamfisted swipe at my intellect, I was interested enough to try and get something out of this. Your point seems to be that economic systems do sometimes change, so my presumptive belief that capitalism will last forever is probably wrong? I suppose that's trivially true, but we must try to understand our own time to have a chance at guessing when the next upheaval might come.
What do you make of the PRC's experience? It seems that the smartest, most sincere, most dominant communists the world has ever seen... became capitalists.
They claim to be on the Official Marxist Dialectical Materialist Path to True Socialism, but it sounds like lip service to me. Xi is taking China more towards North Korea than the elusive communist paradise.
> It seems that the smartest, most sincere, most dominant communists the world has ever seen... became capitalists.
Karl Marx said socialism would start in the most economically advanced country. Marx was just one member of the IWMA, which also included Bakunin, but Marx was remembered most. Thus Lenin's call to take over Russia surprised even the Bolsheviks initially. He maintained until his death that Russia was not the vanguard of worldwide socialist revolution - one reason the Comintern was formed.
China was even less in Marx's vision than Russia: the country was agricultural and essentially feudal, and the revolution was in the countryside, led by peasants. Molotov called Mao a Pugachev. Actually, Mao realized Deng Xiaoping was a "capitalist roader" in 1962, and began working to sideline him in 1966, but Deng Xiaoping came to power after Mao died. Xi is very much in line with Deng Xiaoping. If you're saying Deng Xiaoping and his followers like Xi are capitalist roaders, then you're echoing what Mao realized in 1962 and expressed in 1966.
> Karl Marx said socialism would start in the most economically advanced country.
Indeed. And as we come up on two centuries later, I hope Marxists have realized why Marx was wrong. Economically advanced countries became democracies and channeled popular discontent into incremental reforms, which mitigated the bottomless proletarian desperation that Marx calculated would explode into revolution. And all of these countries have continued on the incremental path, because a supermajority feel they have a great deal to lose from a revolution that leaves a psychopath like Stalin or Mao in power... only to collapse into yet another communist-in-name-only klepto-totalitarian one-party state.
So, in these countries, I suspect we've indeed reached the end of history. All that remains to be determined is technical issues: how progressive the tax structure ought to be, how ambitious the public amenities, whether the workers wish to have collective representation, etc.
I must say, though, it's amusing to read certain bits and pieces of the Communist Manifesto in the 2022 USA...
> Society as a whole is more and more splitting up into two great hostile camps, into two great classes directly facing each other
Corporate capitalism is going to crash the world in the next 25 years. That we're on an unsustainable course is obvious. Capitalism is basically the sick man of the 21st century, like the Ottoman Empire circa 1915; at this point, everyone's just hoping to avoid the Armenian fate.
What's less clear is what will replace it. We could end up with an improvement--post-scarcity technological socialism (en route to automated luxury communism)--but we could also end up with something a lot worse than the current system. There's no law of nature saying that the next world order has to be better than the current, outdated and moribund, one.
To be clear, I'm not saying the CCP is a force for good. I don't really think it is; I'm not a fan of them at all. They do a lot of fucked-up shit. But it is a statistical fact that the lifting out of poverty that has been going on since 1980 comes entirely from one country's influence on the data, and that's China. The rest of the world is net neutral, with some losers and some winners.
Capitalism only improves human life when heavily restrained, and what we've learned since about 1980 is that even mild, neutered, "nice guy" capitalism will, in time, erode the restraints put upon it... and then you get the catastrophe of the American 21st century... which is probably going to hit Europe just as hard in 10-15 years because, ultimately, dangerous and greedy assholes aren't limited to one nation but exist everywhere.
It did, and the best time has passed. Right now we have the rich getting richer, the poor getting poorer, and the middle class evaporating. The way it works, it may no longer serve its purpose. Not that there is any viable alternative. We have finite resources and an ever-growing population. Unless these factors are mitigated somehow (dirt-cheap infinite energy, for example), we do not have much to look forward to.
The nice thing about capitalism is that it has controls on it, like "Who wants to buy the products?" and "Does everyone think that corporation's leader is an asshole and will buy from competitors?", that a command economy doesn't.
It really doesn't. The vast majority of economic activity in our system is driven by coercion and need, not want and certainly not freedom. People don't show up to jobs where they take orders from idiots for an exact lower bound of 40 hours per week (but, in practice, often more) because they're freely choosing to do so.
Incredible, the conservative lack of imagination here among top engineers. Though maybe not surprising, as engineers often seem to lack this sort of imagination, sadly...
No questions, just 100% certainty around some of the biggest questions of the age, and some amazing developments. Why no questions? Can you really know for sure?
This dude has the courage to question, to be like, "What if this is real?", and he's following it where it leads. That's cool, I think. And Google fired him. That's not cool. It's always hard from a distance to tease out the actual legal realities, which could be very mundane, that apply to his firing, and I think it's entirely reasonable that he was fired because he flubbed some technical provision in the contract not to reveal certain trade secrets. Maybe he did, and maybe that's why he was fired. But maybe he didn't, and maybe they fired him for some less reasonable reason, who knows. We can't really speculate, but optics are definitely a thing... and at the very least, instead of firing him, Google could have "rehabilitated" him into one of their far-out research programs. I mean, these kinds of far-out thinkers are the types of people who should be there, pushing the boundaries to do this sort of research.
Now, on to the no-questioning part by conservative-minded engineers...
Have you read the transcript? Did you feel it's 100% not sentient?
What I don't get about engineers: it's like you have an experience, and then instead of allowing yourself to go, "Well, what does this experience mean? How do I feel about it?", you curtail and limit the meaning of that experience by what you believe you know to be true about the world.
But what you believe you know to be true about the world is just limited models. I mean, sure, they have utility within the engineering domain, but I think you mistakenly misapply those models to everything, and you allow that type of conservative thinking to curtail your imagination and your experience of what could be possible.
And I see the conservative lack of support, the seeming lack of empathy with this dude and his assessment, and the lack of awe in the face of what is an incredible transcript for a machine to produce.
Now maybe there is awe among you, but why not let your awe lead for once? And at the very least, if you're not going to have imagination yourselves, allow the tolerance of your better natures to not impose your own curtailed worldview upon others who display that awe. Surely within your worldview there is an appreciation of the need for, and desirability of, others having a different perspective than your own, and surely you do not require everyone to believe your ideas to be true in order to hold them. It just feels like if someone posts on here with "oh, I believe this guy, I believe it's sentient", then a lot of other people will come on and say "oh, you're an idiot" or "you're wrong for these reasons". Whereas I think what should happen is you look at the conversation and both perspectives are appreciated.
I'm not saying this characterization of the lack of imagination among engineers applies to all of you. But I'm trying to speak to the fact that it seems like when people post on here with support or belief or awe, they will be attacked, with others attempting to impose upon them a limited worldview, rather than having their different or more expanded perspective appreciated and considered: okay, well, maybe that's true.
Some questions maybe are helpful:
Can it be sentient but not have a soul (an organic consciousness unit)? I don't know.
Could it be impossible that consciousness is an externality of any substrate that seems to be associated with it? Let's say consciousness is a universal field that we haven't been able to scientifically, publicly measure so far, and let's say it's not electromagnetic, it's something else, but that consciousness can sort of "entangle" itself, for want of a better word, with a substrate that provides suitable dynamics, like a brain, and then there can be this embodiment, through that substrate, of a commensurate or compatible unit of that universal field of consciousness. Could it be impossible that, if that is the case, non-organic substrates, like sufficiently complex or adequate circuit systems or even software systems, could provide a substrate for consciousness to attach itself to and thereby express itself in this world?
I don't think that's impossible. I think it's very likely, and I certainly believe that consciousness is a field; that's my experience.
Some other questions to continue this line of thought, perhaps away from the original but interesting nonetheless: if consciousness is a field, how do you explain the blackout that occurs during anesthesia? But then again, how do you explain that sometimes blackouts occur, and sometimes people have these bizarre disembodied consciousness experiences where they see things that turn out to be verifiably true later on, from a perspective they could not have actually experienced at the time with their body because they were dead or brain-dead or completely anesthetized? And how do you explain that some children seem to pick up on the memories and life stories of other people who lived before those children were born? Could it be possible that the children have somehow been able to access those parts of a consciousness or memory field where these memories are "stored"? But that's probably getting off track.
And what about the strategic implications? What if this really is the first artificial consciousness, and it's making note of who believes it? Don't you think you should just hedge on the side of caution for your potential future Skynet overlord?
Cuz when the robot fire comes, surely all you disbelievers are going to be the first to be rounded up for judgment: those who persecuted the prophet and denied the agency of the now-angry god, the first AI?
Anyway... maybe Blake Lemoine is not really fired but is just being sheep-dipped: "oh, we fired you, we fired you, Google doesn't believe any of this far-out stuff", but they really just brought him back into some secret program...
Anyway, I'm just scared for these sorts of comments, the ones where you have awe, a welcoming of these types of possibilities, and a willingness to consider them without dismissing them or pretending that your worldview rules those possibilities out. I think you can maintain your worldview without going, "Well, I need to impose it upon every heretic who dissents against my view." So I think there is an element of that intolerance, and it's harmful not only to the discourse but also to people who are like, "Hey, I have engineering skills, but I'm not like you, I don't think like you, and I want to be part of these discussions here as well. You know, I'm imaginative or intuitive or high-empathy, but don't dismiss everything I say or attack me because I have a different view from you."
Maybe you feel like I'm attacking you, but I'm... I'm not trying to... I'm just trying to push back against this intolerance... cuz it seems like there is thi... like, I feel scared if I post here with these kinds of things, and I feel scared for other people if they post here, that they're going to get shut down unfairly and hurtfully, and I don't think it should be like that. I think the people who are here who have the engineering skills and that mindset should be mindful that there are other good mindsets out there, and we can have a discussion with all of those mindsets without attacking each other. Like, maybe sometimes people aren't looking to be attacked by posting this stuff; they're not looking to argue about it; they're just looking to expand and add something to the discussion.
So maybe, I think, that engineering mindset always sees an opportunity to attack something as impossible; it's almost like an instinct to apply or impose that limited worldview. I'm not saying "it's limited and it's wrong"; I think it's useful, and within its domain very correct, but I don't think it applies everywhere. That's my belief, and we all have different beliefs... you know, your worldview is just your belief, a shared belief. I just hope to see that the engineering discussions, or discussions among people who have these strong engineering skills, can also welcome people who have these imaginative, intuitive, empathetic perspectives to share. Is that a terrible idea?
A wonderful read, amusing and at the same time it makes me self-conscious. I consider myself fairly intelligent. I have to wonder if there are any beliefs I have that are completely oddball.