I'm not a developer, but what can be done in the future to prevent this from happening on a wider scale? Should search engines index verified photographs from key news and historical sources? Should they rank down results flagged as "AI generated"?
Do we treat the internet like low-background steel[1] and prioritise anything from before 2020?
My experience with something similar was seeing an image someone posted on a history subreddit earlier this year, supposedly a photograph of a prominent Indian ruler from the 1800s. If someone hadn't commented that they'd never seen a photo of him before, the OP wouldn't have chimed in and casually admitted it was AI generated. From here on out I can imagine people stumbling onto that image when they search for him online, glancing at it, and having no idea it was fabricated.
[1] https://en.wikipedia.org/wiki/Low-background_steel