What's the point of having AI-generated answers on Stack Overflow when I can just ask the AI directly? There is no added value here, especially given that the AI was trained on SO data to begin with.
The deliberations, clarifications, and back-and-forth that used to happen on Stack Overflow are now happening with LLMs. That data is lost to the public forever: proprietary and siloed.
Not sure about others, but if I notice it's an AI answer, I'll likely stop reading, since such an answer has no added value to me. As others have pointed out, I can simply get that answer myself, and I likely have done so before venturing to SO.
And if I don't read through an answer, I'll also not upvote it.
Look at it the other way: what is the point of a human Q&A site when you can get good-enough answers from an AI? It's Stack Overflow's business model that is threatened, and they need to pivot in order to survive.
It gets harder, though. Even before AI, most of the low-hanging-fruit questions (“how do I sleep for 2 seconds in JavaScript?”) had already been answered, and beginner questions are often closed as duplicates. So you're left with long-tail, difficult, or obscure questions that take a lot more effort to answer.
The thing is, you currently can't. Sure, an AI can answer relatively complex technical questions, but it sometimes doesn't do so properly, and when it fails, you don't know. I've had cases where the AI was clearly inventing calls to APIs that didn't actually exist, even though they looked legitimate; on SE, such content would get downvoted enough that you'd be very careful before even considering it. If I ask an AI, I might get a good answer or a bad one, but there's no vote count nearby telling me how good it actually is.
I've tried many times to get an answer from these AI engines. The answers have always been either just "good enough" to point me toward further research, or flat-out wrong.
The paid version of ChatGPT has had built-in web search for over two years. It’s easy enough to just say “verify your answer” and it will search the web.
Even with code (though most of my experience is having it write AWS automation scripts in Python using the AWS SDK, Boto3), I’ll either give it the link to the API in question or tell it to “verify Boto3 calls on the web”.
The worst thing it usually does is produce code that expects you to hardcode your credentials (don’t do that). But then, plenty of people on SO do the same.
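For what it’s worth, the fix is simple: leave credentials out of the code entirely and let Boto3 resolve them from its default chain (environment variables, ~/.aws/credentials, or an IAM role). A minimal sketch, with S3 just as a stand-in service:

    import boto3

    # Don't do this: hardcoded credentials end up in version control.
    # s3 = boto3.client(
    #     "s3",
    #     aws_access_key_id="AKIA...",
    #     aws_secret_access_key="...",
    # )

    # Do this: with no explicit credentials, Boto3 falls back to its default
    # chain (env vars, ~/.aws/credentials, or an instance/role profile).
    s3 = boto3.client("s3")

    # List bucket names to confirm the credentials resolved correctly.
    for bucket in s3.list_buckets()["Buckets"]:
        print(bucket["Name"])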
It's a weird attempt to keep users on the site. Plus, those AI-generated pages will show up on Google, where otherwise there might be nothing, or the existing pages get downranked for being old.
Google promotes AI slop above everything else, so they may as well get in on the grift before it's too late.
I think it might also be an attempt to extract some final value from the users by having them train AI. Quality RLHF data is expensive, and if you can get experts to do it for free...