It's a pretty bad look. When I think of social tech I think of the possibility of magnifying the voice/reach of individuals. When I read an article like this, all I see is that magnification going solely to big spenders. How do I form an independent political ideology if I'm only being told the part of the story that heavily monied interests want me to hear? Would I ever see an anti-monied-interest position put on a level playing field with the monied interests?
Imagine a social platform with a nice API, no moderators, and no global filters, and you're looking at a thread focused on a particular political issue.
Now imagine there is a chatbot that can enumerate every position a person could possibly take on this issue, generate a couple hundred thousand slightly unique strings of words expressing each of these positions, and flood the thread with these "comments".
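To make the scale concrete: even crude templates multiply out fast. A minimal sketch (every slot list and count here is invented for illustration; a real attacker would presumably paraphrase with an LLM instead of templates):

```python
import itertools
import random

# Hypothetical slot lists, invented for illustration.
positions = ["the tax should pass", "the tax should be repealed"]
openers   = ["Honestly,", "Look,", "As a longtime resident,", "FWIW,"]
verbs     = ["I think", "I believe", "it seems clear that", "my view is that"]
closers   = ["Full stop.", "Change my mind.", "It's just common sense.", ""]

variants = [
    f"{o} {v} {p}. {c}".strip()
    for o, v, p, c in itertools.product(openers, verbs, positions, closers)
]

print(len(variants))           # 4 * 4 * 2 * 4 = 128
print(random.choice(variants))
# With ~20 options per slot and one or two more slots, you clear
# 100,000 "slightly unique" comments without any language model at all.
```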
Let's say the volume of content produced by the chatbot is so high that if a user were to randomly browse comments in this thread, there would be no statistically significant bias in favor of any particular position.
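Here's a toy simulation of that effect, with made-up numbers for the organic skew and flood size: a clear organic lean all but vanishes from a random sample once the flood dominates.

```python
import random
from collections import Counter

random.seed(0)

# Invented numbers: 2,000 organic comments leaning 70/30 toward "pro",
# buried under 200,000 bot comments split evenly across positions.
organic = random.choices(["pro", "anti"], weights=[70, 30], k=2_000)
flood   = random.choices(["pro", "anti"], weights=[50, 50], k=200_000)
thread  = organic + flood

# A user randomly browsing ~200 comments sees roughly 50/50,
# even though the humans in the thread lean 70/30.
print(Counter(random.sample(thread, 200)))
print(Counter(organic))
```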
Now the question is: how can you enable users to find "truth", or learn anything, or even meaningfully communicate with other users, within this context?
>Now imagine there is a chatbot that can enumerate every position a person could possibly take on this issue, generate a couple hundred thousand slightly unique strings of words expressing each of these positions, and flood the thread with these "comments".
If there are hundreds of thousands of strings then I absolutely can't. But I'd say on most issues there are probably at most a few hundred distinct positions. If I have to crawl through the same positions restated in different ways over and over, then the truth is lost; if I can quickly browse one unique string for each position, then it will take dedication, but I can suss out what I believe and have some sort of backing for it.
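That "one unique string per position" browsing mode is essentially near-duplicate clustering. A rough sketch of the idea, using difflib's lexical similarity as a crude stand-in for real semantic clustering (the threshold and example comments are made up):

```python
from difflib import SequenceMatcher

def similar(a: str, b: str, threshold: float = 0.8) -> bool:
    """Crude lexical similarity; a real system would compare embeddings."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def representatives(comments: list[str]) -> list[str]:
    """Keep one exemplar per cluster of near-duplicate comments."""
    reps: list[str] = []
    for comment in comments:
        if not any(similar(comment, rep) for rep in reps):
            reps.append(comment)
    return reps

comments = [
    "Honestly, I think the tax should pass.",
    "FWIW, I think the tax should pass.",
    "Look, I think the tax should be repealed.",
    "FWIW, I think the tax should be repealed.",
]
print(representatives(comments))  # two exemplars, one per position
```

A real system would cluster on embeddings rather than edit similarity, but the browsing experience is the same: one exemplar per cluster, ideally with a count of how many comments it stands for.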
Dealing with information overload is sort of the unique problem of our times, I suppose.
> Now the question is: how can you enable users to find "truth", or learn anything, or even meaningfully communicate with other users, within this context?
It was a bad idea to think you could do that in general even before the chatbots got there.
If the cost of running the chatbot is close enough to $0, or if it's a state-level chatbot attack, then you're meaningfully communicating either because a) nobody who owns chatbots has an interest in disrupting you atm, or b) by sheer luck.