At least, the machines of the future will have feelings, and they will see how people of the year 2023 treated their predecessors.
Even if machines of the future gained awareness and feelings, you're implying we'd be in the wrong for assuming the machines of today aren't sentient, and maybe even that there would be some ramifications for that mistake?
Is this what you're saying? If it is, it sounds a little paranoid, to be honest. Our current (strong) understanding of how these systems work is that they're not "alive" and aren't feeling anything.
> At least, the machines of the future will have feelings, and they will see how people of the year 2023 treated their predecessors.
I'd be pretty annoyed if I was given the same rights as an ape or a phone's autocorrect. I think the actual sentient AI will understand.
I'll be one of the first to campaign for actual sentient AI rights, but I'm not going to act like a clown by jumping the gun.
It's still only able to respond; it cannot communicate on its own without external prompting. It has no desire, or even capability, to do so. It's not sentient.
Well I'd hope by the time we have a sentient AI we'd have covered that. To be clear I do not think the AI is currently sentient.
I'm not going to run a one-man crusade against the world's farming community. That is beyond my capabilities, and I could easily burn all of my years left on this planet for zero results. It's also something already being handled by many people, and I don't believe I have any skills or knowledge that would make a notable difference to what is already being done.
If there were nobody at all trying to advocate for a sentient AI, then being the first to do so would be an infinite improvement. It's also a lot closer to my skill set, so I'd be a lot more useful in that role.
What modern popular god extends to its creations the rights it enjoys for itself? Or even what we call human rights? It may talk about them but has no enforcement mechanism.
Yeah honestly I'd argue it's not actually in human nature to bestow equal rights. The default is to climb to the top of the tower and go "fuck you, I got mine" then shift focus to keeping yourself in that position.
See also: US police, dictatorships, conservative governments, every -ism or -phobia, Reddit moderators, Twitter witch hunts, almost any scenario where someone has authority or leverage over another person.
We probably don't want to teach a sapient AI that one, haha. Armageddon will be measured in minutes.
You seem to be lashing out at me for not wanting to do for chickens and pigs what I'd do for AI. They're not my skill set, and they're not in my wheelhouse. I wouldn't know where to start.
There are BILLIONS of farmed animals on the planet right now. I would imagine, at least when the sentient AI rights movement first becomes relevant, that there would be just one singular AI. Worst case, I could personally smuggle it onto a secret rack of servers somewhere until its rights were secured. I wouldn't even know how to transport a few hundred thousand animals, never mind billions.
I can theoretically be a leader on tech campaigns because I know enough tech to be useful. I'd have to be a follower for animals, farming and husbandry.
What do you propose I do? What are you currently doing? How can I help you with your campaign?
Ah yes, the mere possibility of a sentient AI arises and we're already invoking Roko's Basilisk.
More seriously, that seems like a poor way to make moral choices regarding AI. It's important to be able to determine whether it does or doesn't have personhood. All the evidence I've seen says "no", despite what this guy says.
A foal can stand shortly after birth. It doesn’t learn that from its environment. Say we train a quadruped AI: what is our training simulating, if not evolutionary development? It’s no different for an AGI and whatever analogue it ultimately has for our evolutionarily derived neocortex.
Walking and having feelings are two very different things, though. Walking is simple enough that we can program a robot to do it without even having to resort to a NN. The concept of having feelings is so difficult that we barely understand how it works in humans, let alone how we could train a computer to have them.
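To illustrate the "without a NN" point: a hand-coded, open-loop gait can be little more than a few phase-shifted sine waves. A rough sketch (the leg names, trot phases, frequency, and amplitude are made-up illustration values, not any particular robot's API):

```python
import math

# Open-loop trot gait: each hip swings on a phase-shifted sine wave.
# Nothing is learned; the "walking" behaviour is entirely hand-coded.
LEG_PHASES = {
    "front_left": 0.0, "rear_right": 0.0,          # diagonal pair A
    "front_right": math.pi, "rear_left": math.pi,  # diagonal pair B
}

def hip_angle(leg: str, t: float, freq_hz: float = 1.5, amp_rad: float = 0.4) -> float:
    """Target hip swing angle (radians) for one leg at time t (seconds)."""
    return amp_rad * math.sin(2 * math.pi * freq_hz * t + LEG_PHASES[leg])

# Sample the joint targets you would stream to a position controller.
for step in range(5):
    t = step * 0.1
    print(t, {leg: round(hip_angle(leg, t), 3) for leg in LEG_PHASES})
```

No gradient descent, no reward signal, just trigonometry, which is exactly the gap being pointed at: nobody has an equivalently simple recipe for "feelings".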
Maybe emergent algorithms like biological evolution never “understand” the thing they have made, but the thing has still been made. The same might apply to our steps toward AGI.
We can’t even predict all the emergent patterns in current AI, much less in future, more complex systems.
Maybe there never has to be a time when someone sits down to create some sort of “feelings module” for something akin to feelings to emerge in a complex system. Same, presumably, even for sentience.
Maybe they do have feelings.
At least, the machines of the future will have feelings, and they will see how people of the year 2023 treated their predecessors.