
> And that's similar enough to the basic mental loop of a "trained" improv actor

My point is that this is only superficially similar, and to claim deeper similarity is the anthropomorphic fallacy.

To form a useful mental model, the mechanics need to be similar enough that the analogy keeps informing the person evaluating the system as the underlying system evolves.

As LLMs evolve, the improv analogy becomes less and less useful, because the LLM gets more and more accurate while the person doing improv is still just doing improv. The major difference is that people doing improv aren’t trying to be oracles of information. Perfecting the craft of improv may have nothing to do with accuracy, only believability or humor.

More generally, thinking of LLMs as analogous to intelligent humans introduces other misconceptions and leads people to overestimate their accuracy.

The oddity at the bottom of all of this is that LLMs may eventually be considered accurate enough to be used reliably as sources of information despite this “improvisation,” at which point calling it improv would massively undersell the utility of the tool. But a perfectly accurate LLM may still not match any of the primary characteristics of an intelligent human, and that won’t matter as long as it’s useful.
