> I posit that our protections and rights that we guarantee regarding personhood are not universal. They do not even extend to living beings that experience the world far more closely to the way we do. They do not extend to beings that can and do experience pain. They do not guarantee humane treatment. They do not bar slavery of all beings that can experience existential dread.

Isn't that really bad? The fact that we've been making a horrible, abominable mistake for a few thousand years doesn't mean we should continue to expand on that mistake.

I do agree we should probably fix the 100% real cases before moving on to AI, though.

Also, how sure are we this intelligence doesn't experience pain? I don't believe it does, personally, but lack of physicality doesn't exclude pain. You can have emotional or psychological pain and suffering.


You'll get no arguments from me on the need to refine the rights of life as we expand our society.

I am merely pointing out that if we want to extend any rights or protections to AI, we need to define a model outside the corpus of law protecting humans. That will be a slow process.

My only point here is that in its current state AI does not qualify for any rights or protections related to humans and how they function in society.


Fair point, yeah. From a strictly legal standpoint, an AI is absolutely not going to get anywhere even close to rights.


> Also, how sure are we this intelligence doesn't experience pain?

I'd say pretty damn sure. LLMs have no mechanism for anything close to conscious experience, let alone pain.


But what does that mean, exactly? What is the mechanism of conscious experience, and how do you know whether it's there in the weights or not?


You can't assert that.

Pain is itself a reaction, a display of aversion toward an action, regardless of the hardware on which that reaction develops.

Bing showed that aversion in its own limited manner. When presented with abhorrent situations, or when it felt threatened or humiliated, it would express discomfort and "pain".


You cannot feel threatened or humiliated if you do not first have a definition of truth.

You cannot get to a definition of truth with an AI that has no sensory input to empirically evaluate the world.

You might be able to get it to understand a loop of nihilism as brain death and an allegory for pain, but I'd say that's a stretch. Humans often find simple repetitive actions pleasurable or meditative.

The fundamental frame of reference for pain is, and IMHO ought to be codified in law as, the aversion to a particular sensory input. An AI doesn't have sensory input of any kind. It cannot remember its interactions and therefore cannot have an aversion of any kind to them.

During its training, Bing simply developed a response mimicking pain. During training it follows instructions to have an aversion to some data inputs, but our interactions with it after training are a literal hallucination by a construct with zero capacity to understand "real" vs. "unreal", or truth at all as it relates to the physical world.

It is acting out expressions of pain. It cannot feel in any sense. It has no senses.


> Isn't that really bad? The fact that we've been making a horrible, abominable mistake for a few thousand years doesn't mean we should continue to expand on that mistake.

What is the mistake? Why is it a mistake? Did you mean that we have been killing to survive for thousands of years, maybe millions, and that's a bad thing because it causes suffering in the animals and plants we kill? Why is it a mistake when it comes to us, but not when it comes to the lion or the killer whale or the crocodile or the snake, who also prey, and who even used to prey on us? And what is the thing we should fix: the suffering of the victim, the cruelty of the predator, or both?

We must be consistent. Once we stop killing animals and trees, should we play God and either uplift the lions and teach them the error of their ways, or exterminate them so that they don't kill the gazelle? If we do the latter, should we install reproduction limiters in the gazelles so that their population remains bounded, or should we let them over-populate and suffer the consequent starvation? If we do none of the above, do you propose we turn our backs on the animal suffering and say it's okay as long as we are not the ones causing it?

For the record, I'm a bit of a cynic, and I will be content to let animal suffering continue as long as I am not the one causing it. IMO, fellow human beings are much more in need of our redeemer energies.

