
How much additional calculation do individuals actually do for high-stakes decisions? And how much does the quality of high-stakes decisions vary between humans?

I'm guessing an LLM's decision quality is roughly average, but that the LLM has no easy way of spending extra time gathering information around a high-stakes decision the way a human would.




I don't think additional calculation is the difference. It makes more sense to think of individual humans as highly tuned models.

Just like LLMs, some humans are better tuned than others for specific tasks, as well as in general.


The difference is that you can reject a low-stakes answer that's invalid: either you can tell that something is off, or the mistake doesn't matter much.

With high-stakes decisions, you're surrendering the decision-making power to the AI because you don't understand the output well enough to verify it.

Basically, an AI can give ideas but not advice.



