> You can’t actually trust AI systems

For a lot of (very profitable) use cases, hallucinations and 80/20 results are more than good enough, especially when they're replacing solutions that are even worse.

What use cases? This kind of claim gets made all the time, but never with any examples.


Any use case where you treat the output like the work of a junior person and check it: coding, law, writing. Pretty much anywhere you can replace a junior employee with an LLM.

Google or Meta (I don't remember which) just put out a report on how many human-hours they saved last year by using transformers for coding.


All the use cases we see. Take a look at Perplexity speeding up short internet research. If it gets things mostly right, that's good enough: it saves me 30 minutes of mindless clicking and reading, even if there are some errors.


You make it sound like LLMs just make a few small mistakes, when in reality they can hallucinate on a large scale.


What are examples of these (very profitable) use cases?

Producing spam has some margin on it, but is it really very profitable? What else?
