I was musing late last year, while reading Twitter and its vitriol and also thinking about privacy, that a likely future would be people using a stable of custom AI assistants as a prophylactic for the web. Have them participate in conversations, maybe even talk amongst themselves. Give them particular interests or focus.
You could, for instance, build one for your children. It could grow with them, and they could fork it as needed. Then, when they're old enough or whatever, they can go hit the straight, uncut, gnarly internet.
This could also lead to a less restrictive natural user interface. It would be a huge boon for accessibility.
Seems inevitable to me. I've been building things that fall in line with this idea for a couple of years now, and it didn't really dawn on me that that was the case.
I've got a Max Headroom VRM puppet, some text-to-speech, and a GPU-accelerated Kubeflow stack. As that progresses I'll probably be blogging about it in a month. Or maybe he will. Who knows.
How long before real human involvement is the exception rather than the norm?
Depends on the site, I think. Some sites, I believe, are heavily infiltrated by bots: Facebook, 4chan, Twitter, and dating sites, to name a few. Bots and fake profiles, that is. 4chan already had quite a bit of 4chan-GPT (GPT-3) interaction even before ChatGPT was being hyped. I don't have the links handy, but people were interacting quite heavily with the bot. Twitter is removing its free API access, so maybe that will narrow the bot crowd to only those with a vested financial interest in botting the user base, thus slightly raising the bar.
I doubt it. Bot responses cost money, and the cost isn't so insignificant that the abuse cases can be brushed off. These companies will be optimizing their systems to provide as many bot-to-human conversations as possible; anything else will be seen as an illogical, pointless waste of capital.