This is a bit of a stretch, but the end results from either manipulation technique would be comparable if they were meant to skew the truth the same way. That sounds stupid as shit when I read it back, though, and I'm not entirely sure why.
I think a use case for AI image manipulation could be more like: I need a picture where I'm poor but wearing smart borrowed clothes, standing with an unassociated associate and a dead person alive, against a backdrop, with the only source image being someone else's selfie that incidentally caught half of me way in the background.
The intents or use cases for these two (for lack of a better term) manipulators aren't aligned here. The purpose of AI image generation is, well, images generated by AI. It could technically generate images that misrepresent information, but that's more of a side effect, reached in a totally different way than staging a scene in an actual photo. Using manipulation to stage misleading photos, on the other hand, would be done primarily for deceptive activities or subversive fuckery.
Agreed. My point was that trusting images ('seeing is believing') has always been at issue. We might imagine it's a new thing, and the scale of the issue is different -- phenomenally so -- but it's not a category difference. Many people were convinced by the fairy hoaxes of the early 20th century (~1917), which were based on image manipulation. They fell for it hook, line, and sinker; images made with ML weren't needed.