Another explanation is that there are those who considered the ramifications thoughtfully but came to a different conclusion. It is unfair to assume the decision process was indifferent to harm or simply ignorant.
For example, perhaps the lesser-evil argument played a role in the decision: would a world where deep fakes are ubiquitous and well known to the public be better than a world where deep fakes have a potent impact because they are generated rarely and strategically by a handful of (nefarious) state sponsors?
If you’re talking about some group of evildoers who deploy AI in a critical system to do evil… the issue is why they have control of the critical system in the first place. Surely they could jump straight to their evil plot without the AI at all.
My main takeaway from Bostrom's Superintelligence is that a superintelligent AI cannot be contained. So the slippery slope argument, often derided as a logical fallacy, kind of holds up here.