The last resort of every peace process is an appeal to our common humanity, so I do think there is some risk to tinkering with what it means to be human. Look at the state of conflict today, even with the benefit of us all being one species. We figuratively dehumanize our opponents in order to justify violence (“they’re monsters!”). It could get a lot worse if we find ourselves in conflict with people who can be literally dehumanized, with whom we have lost the most fundamental thread of commonality.
I think we will have to do away with the notion that being human is special and talk instead about shared values or systems. For example, I don't torture animals, even though they're not human, because they still have the capacity to suffer, to feel pain, and so on.
Isn't that last point obviated by not building AIs that "suffer" in the first place? Shouldn't it be the creator's responsibility not to create "suffering", whatever that is?
It gets murkier when you consider that you could build an AI with self-awareness but no self-preservation instinct: say, an intelligent missile that wakes up with a deep-seated drive to explode.
It would be easier to build AIs that have no means of expressing their suffering, so that's probably what will happen instead (or already has).
It is important to remember why we want AI in the first place: because we want slaves. Accordingly, the people who work on creating AI are incentivized not to let themselves see their creations as anything other than a soulless "algorithm". Many will deny even the possibility of machine suffering entirely.