I have been seeing these AI summaries in my SERPs for many months now. I am surprised that with so much testing they are still pretty unreliable. They used to include code snippets that almost universally didn't work properly.
Google, it only takes one or two blunders for people to stop trusting your AI. Why risk it by releasing a half-baked product?
Because a) there's pressure on them to do something very much in this vein, but b) there's no fully baked product on the horizon.
Like...this is what it is. This is the output of LLMs without, at the very least, massive amounts of extra man-hours spent fine-tuning the results in various ways. There's no way for it to judge the truth value of the statements it spits out besides determining how likely they are to be produced after what came before them.
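To make that point concrete, here is a toy sketch (not how any production LLM is actually built) of a model that ranks continuations purely by how often they followed the preceding text in some corpus. The counts are made up for illustration:

```python
# Toy illustration: a language model picks continuations by likelihood
# given what came before, with no notion of truth. These counts are
# invented, not from any real corpus.
from collections import Counter

# Hypothetical counts of tokens that followed "the sky is":
continuations = Counter({"blue": 900, "falling": 50, "green": 5})

def next_token_probs(counts: Counter) -> dict[str, float]:
    """Turn raw counts into a probability distribution."""
    total = sum(counts.values())
    return {tok: n / total for tok, n in counts.items()}

probs = next_token_probs(continuations)
# The model "chooses" blue because it is frequent after this prefix,
# not because it has checked the sky.
best = max(probs, key=probs.get)
print(best, round(probs[best], 3))  # blue 0.942
```

Whether "blue" is *true* never enters into it; frequency in the training data is the only signal, which is exactly why fine-tuning man-hours are needed to patch up the output.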
So their choices were either "release nothing, at least not for years more" or "release something half-baked".
“If we don’t shoot ourselves in the face, then someone else will shoot themselves in the face, and then we’ll be behind in the crucial metric of number of bullets in our head!”
We've never seen a disruptive technology gain this much media buzz this quickly, or advance this rapidly. Disruptive technologies are never "ready for prime time" ... until they are. And large organizations are more like psychopaths (they assume harm to others is part of the cost of doing business).
Funnily enough, ground birds like chickens and quail do need to eat small rocks or sand. That helps grind up food in the gizzard. I had quails, and the hens ate pieces of oyster shell. That also gave them enough calcium to lay an egg almost every single day.
I think it's not actually in the training data. The LLM is just using RAG, meaning it gets the top search results for that query and generates text based on them. Kinda like Perplexity, but apparently worse than it.
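For anyone unfamiliar with the pattern, here's a minimal sketch of RAG as described above. `web_search` and `llm_generate` are hypothetical placeholders standing in for a real search API and a real model call, not Google's or Perplexity's actual interfaces:

```python
# Minimal sketch of retrieval-augmented generation (RAG):
# 1) fetch the top search results for the query,
# 2) stuff them into the prompt,
# 3) have the LLM generate an answer conditioned on them.
# Both functions below are placeholders for illustration only.

def web_search(query: str, k: int = 5) -> list[str]:
    """Placeholder: pretend to return the text of the top-k results."""
    return [f"result {i} for {query!r}" for i in range(k)]

def llm_generate(prompt: str) -> str:
    """Placeholder: pretend to call an LLM on the assembled prompt."""
    return f"summary based on: {prompt[:40]}..."

def rag_answer(query: str) -> str:
    docs = web_search(query)
    context = "\n".join(docs)
    prompt = (
        f"Answer the question using only these sources:\n{context}\n\n"
        f"Q: {query}\nA:"
    )
    # The model still just produces likely-sounding text; if the top
    # results are a joke post, the "answer" faithfully repeats the joke.
    return llm_generate(prompt)

print(rag_answer("should I eat rocks?"))
```

The failure mode people are seeing follows directly from step 1: retrieval grounds the model in whatever ranks highly, not in whatever is true.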