I think it's going to become a cat-and-mouse game. AI-generated images (e.g., deepfakes) are already being used in very nefarious ways, such as faking job interviews or applying for government documents via video.
Researchers are finding ways to identify the telltale markers that currently give them away, but yes, for the neophyte the question of "what can I trust?" is going to be a real issue.
However, the great thing is that you will always have data. The challenge then becomes how much you can TRUST your predictions, which I believe will spur some very interesting algorithms, such as anomaly detection (i.e., flagging an image because its RGB distributions, spatial markers, etc. are way too distorted compared to metadata from other pictures of the same type).
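As a rough illustration of that kind of anomaly detection, here is a minimal sketch in Python: it compares a suspect image's per-channel RGB histograms against the average of a set of trusted reference images. The chi-square distance and the threshold value are my own illustrative choices, not an established detector.

```python
import numpy as np
from PIL import Image

def rgb_histograms(path, bins=64):
    """Per-channel normalized RGB histograms for one image."""
    img = np.asarray(Image.open(path).convert("RGB"))
    hists = []
    for ch in range(3):
        h, _ = np.histogram(img[..., ch], bins=bins,
                            range=(0, 255), density=True)
        hists.append(h)
    return np.concatenate(hists)

def chi2_distance(a, b, eps=1e-10):
    """Chi-square distance between two histogram vectors."""
    return 0.5 * np.sum((a - b) ** 2 / (a + b + eps))

def looks_anomalous(suspect_path, reference_paths, threshold=0.25):
    """Flag the suspect if its RGB distribution sits far from the
    average of trusted reference images of the same type.
    The threshold is an illustrative value, not a tuned one."""
    ref = np.mean([rgb_histograms(p) for p in reference_paths], axis=0)
    return chi2_distance(rgb_histograms(suspect_path), ref) > threshold
```

A real detector would go deeper (spatial statistics, camera-noise fingerprints, frequency-domain artifacts), but the pattern of comparing a candidate against a trusted baseline is the same.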