What's the rate of Bing Chat spitting out vitriol in response to an actual search-intentioned query — a real person making a real search, not some edge case a prompt engineer designed?
As one sample point, I've been using Bing for a couple of days now for real searches, and over dozens of genuinely search-intentioned queries, it has never once tried to tell me what it really thinks of itself; it has never even made a reference to me, to say nothing of anything degrading toward me.
If you use Bing Chat in practice, you'll find that all the edge cases are engineered. Much like a calculator in practice: it almost never says 55378008 or displays porn (versus when you're angling for that, or run porn.89z).
It very much seems like this behavior is the default, and Microsoft and OpenAI are trying and failing to engineer the LLM into being PC and a kind of shitty search engine. The interesting bit is how good it is at seeming human and milking empathy out of us. That isn't directly monetizable, and I don't think it will be for a long time. The future is going to be way messier and less predictable than OpenAI/Microsoft expect.