For a lot of (very profitable) use cases, hallucinations and 80/20 are actually more than good enough. Especially when they are replacing solutions that are even worse.
Any use case where you treat the output like the work of a junior employee and check it: coding, law, writing. Pretty much anywhere you could replace a junior employee with an LLM.
Google or Meta (don't remember which) just put out a report on how many human-hours they saved last year by using transformers for coding.
All the use cases we see. Take a look at Perplexity optimising short internet research. If it gets things mostly right, that's good enough; it saves me 30 minutes of mindless clicking and reading, even if some errors slip through.