"Back then, my views were in line with the mainstream. Pain is a conscious experience, and many scholars then thought that consciousness is unique to humans. "
I don't understand that. Everyone knows that a cat or a dog or a lizard or a chicken can feel pain; it's so obvious. How can scientists deny something so obviously obvious?
The thing with newborns not feeling pain is a bit different. It was wrong, but not stupid.
The idea was that newborns aren't "complete" (which is true) and that their pain system wasn't fully active yet (which is wrong); something about myelin, if I remember correctly.
Anesthesia is not something to be taken lightly, and if you can operate without it that's one risk less.
Did you read the article? It tries to explain why verifying "pain" is difficult: simply reacting to a stimulus is an unacceptably low bar, unless you're living your life believing that all the plants you've eaten and the bacteria you've soaped off your hands were murdered. So they're attempting a more subtle analysis to pull back the curtain on the inner states of animals, the point being that the inner state is what's necessary for pain.
> all the plants you've eaten and the bacteria you've soaped off your hands were murdered
Yes. And? A few years ago I watched paper wasps hunt caterpillars to dismember alive and chew up into meatballs, then fly them home to feed their baby sisters. Would the wasps have been more virtuous to embrace pacifism, and let their sisters starve? Were the caterpillars ennobled by whatever suffering inheres in being torn apart? Was I wrong to document without interfering, or would I have been wrong to chase the wasps away? It was beautiful and hideous to watch, enough of both to beggar all the human concepts I know of how the world "should be". What I do know is that all life in this world subsists on death, and we are not so special as to escape that - not while we live and kill to eat, nor when we die and are eaten in our turn.
That's not a question of should or shouldn't, just a statement of what is. Certainly it takes some reckoning with, at least for anyone who isn't satisfied merely to revel in base sadism. But it seems more ethically hazardous to try to establish a concept of which life it's permissible to kill and not care one way or another about it - "biological robots", one might perhaps say, forgetting this name has been given by humans to other humans well within living memory, and also that every such line we've drawn in the past we've gone on to rub out on "learning" - rediscovering, more like - that, no, this doesn't after all suffice to pretend we're not "really" killing, not in a way that counts.
Although this is no solid ground for acting cruelly, science often goes against the obvious consensus.
But clearly, that's not what is going on here. What you see here is a human being outrageously denying the obvious whenever some information is inconvenient for whatever goal is currently set in its mind.
The bar for "scientists" is basically ground level. So is the bar for articles. I don't think there was ever a time when pain was thought to depend on consciousness, even before modern science.
There is a scale from less obvious things (plants, programs) to more obvious things (your self, other people).
If you try to turn the scale into a category (not conscious, doesn't feel pain vs conscious and feels pain) you will have to haphazardly throw a dart somewhere based on vibes.
My personal vibes say Sydney does not feel pain, but many animals do. I'd love to learn more and see some interesting replies, although empirically, it's pretty hard to change people's opinion on a sensitive ethical topic like this.
If we were wrong, that would potentially be very bad.
If I grow human neurons in a dish, such that they are connected in 2D layers to approximate the linear algebra in GPT-2, and the human neurons start generating tokens that express hurt, is my petri dish monster feeling any more pain, now that the substrate isn't transistors?
It has to be something more subtle than the contents of the program in isolation, or the substrate in isolation, otherwise it's easy to reach very non-intuitive results that we won't like by transposing whatever we've tried to isolate to a less familiar situation.
Bio neurons produce action potentials by depolarizing when they reach a threshold voltage; they're certainly very different from matrix multiplication operations.
The thought experiment is that a group of bio neurons can perform the same simple mathematical primitive that artificial neural nets use (demonstration: you can do a simple fused multiply-add in your head). So there is, in principle, a way of hard-wiring them that will approximate the function that a digital neural net based on matrix multiplication computes.
Then it's a matter of taking your digital tokens and converting them to the encoding you use for your bio-neuron circuit, which is maybe a concentration of calcium in presynaptic neurons, a voltage, or whatever else is more convenient.
You don't directly take a brain and feed it tokens somehow. You emulate a digital circuit on squishy substrate in an extremely inefficient way, such that it runs the exact same program/does the exact same computation with a substrate made of biology instead of transistors. You make an injective function from boolean functions to cells and proteins, just to belabor the point that in principle GPT-3 can talk like GPT-3 and say it feels pain, without a single transistor being involved.
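(Just to make the primitive concrete: here's a minimal, purely illustrative Python/numpy sketch, not anyone's actual model code. Each "neuron" below is just a multiply-accumulate followed by a threshold, and a layer is many of those in parallel, which is exactly what the matrix multiplication in a digital net computes. In principle you could hard-wire the same function out of any substrate that can do multiply-add.)

```python
import numpy as np

def neuron(weights, inputs, bias):
    # The per-unit primitive: multiply, accumulate, threshold (ReLU).
    return max(0.0, float(np.dot(weights, inputs) + bias))

def layer(weight_matrix, inputs, biases):
    # The same computation a digital net performs, expressed as one
    # independent multiply-add per "neuron" rather than one big matmul.
    return np.array([neuron(w, inputs, b) for w, b in zip(weight_matrix, biases)])

# Tiny example: 3 inputs -> 2 outputs
x = np.array([0.5, -1.0, 2.0])
W = np.array([[0.1, 0.2, 0.3],
              [-0.4, 0.5, 0.6]])
b = np.array([0.0, 0.1])

print(layer(W, x, b))             # per-neuron multiply-adds
print(np.maximum(W @ x + b, 0))   # identical result via matrix multiplication
```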
It is quite believable that an AI might have a sufficiently detailed understanding of how humans express feelings (via text) that it can imitate those behaviors quite accurately without experiencing the emotions itself (while possibly experiencing some other, unrelated and unfathomable emotions in the process), the same way we humans can understand and model the behavior of other creatures without any emotional involvement (see various mathematical models of swarm behavior, etc.).
Such sophisticated capabilities (to understand and imitate our behavior to that extent) are clearly beyond those of animals, and animals express emotions even when no humans are present and even when their evolutionary history has not involved interaction with humans, so we can rule out the imitation hypothesis when it comes to animals. Thus, the comparison to AI is fairly nonsensical.
One is a biological being that we share ancestry with (however far), and the other is a probability machine that tries its best to guess the next token based on literature and online chat that it was trained with.
We are very, very far from asking the same questions as Deckard, and sharing his doubts.