Many of us have already experienced reading an article we believed to be of human origin but that was in fact written by an AI. As many posts on Twitter and the Fediverse have shown, AIs can be hilariously inaccurate – they will happily tell us what they think we want to hear and invent things as they go.
In this environment, where it gets harder to discern the veracity of information online, whether it comes from newspapers, magazines, or other outlets, establishing trust becomes increasingly valuable.
Will this make us seek out more personal content from sources we can put a face and a name to? A personal web of trust that sharpens the line between content made by humans and AIs?
It might be too early to call, but having pondered these questions for a little while, I've realized that my answer to this sea change is to double down on collecting channels of information that verify ownership, like personal websites, preferably on a domain name owned by that very person.
Maybe we could even make some kind of pact, a code of honor, a personal promise of integrity, that guarantees a site contains 100% human-generated content?
I've mostly been doing this for years now, because of the general decline in the quality of websites that has been going on for a long, long time.
> A personal web of trust that sharpens the line between content made by humans and AIs?
The problem, of course, is what you alluded to -- how can we know what's made by humans and what's made by AI? I think that my already limited use of the web is likely to decline to nearly zero because of that issue.