I think this kind of thing really does depend on whether you're dealing with a small startup or even an individual human that you're screwing over, or a giant implacable machine like a multinational corporation.
There's no obligation to be nice to the machine, it can't recognise it, and it won't be grateful to you.
+1. The machine analogy is excellent. You have a moral obligation to humans. You may decide to have a moral obligation to society if you like the one you're in and want it to grow. But you have zero obligation toward a soulless entity. It's a robot, optimized for profiting from a service it failed to perform adequately.
Agree on the moral obligation to humans, but I think it's best to keep the focus on not having a moral obligation to corporations (beyond T&Cs or contracts you willingly consent to). Not that you're wrong with the "soulless robot" concept, but with the advances in AI, I wouldn't want to be the first one to have it play out like Chappie.
"Why you humans do this? Why you all lie?" - Chappie
Have you seen "The Good Place"? There is a hilarious part where "Janet", their omnipotent and omniscient anthropomorphic AI - with basically no concept of suffering - triggers a defense mechanism where she pretends she is scared to die.