What does that mean for LLMs? The Internet has lots of poorly written text. If LLMs can't distinguish nuance, ambiguity, or a lack of clarity, then what exactly are they generating, and why would their output be useful?
Taking a poorly written sentence, interpreting it as meaning something incorrect, and then presenting that interpretation in authoritative, confident language is very close to gaslighting.
I guess this is going to be a fun game for AI. Not only does it have to contend with false information versus true information, it also has to figure out correct information that happens to be written ambiguously.
Of all Dutch cheese consumed internationally, 50-60% is Gouda. Or, in other words, Gouda accounts for 50-60% of Dutch cheese exports.
At least that’s how I read it.