Except this is not exactly what LLMs do. This is quickly crossing into an anthropomorphism fallacy.

The thing that matters most is how accurate (or not) the LLM is, regardless of how it arrived at its output.

A human doing improv bears some surface-level resemblance to an LLM processing text, but neither the context nor the mechanics are actually similar, nor are the theoretical limits on computing power.

The broader point is that this mental model doesn’t help us better understand LLMs, and as LLMs continue to improve, any resemblance to “improv” will be less and less relevant (not to mention inaccurate).




I don't mean to anthropomorphize it, but I am trying to drive home the point that current LLMs are text completion models. The current architectures continue a script, one token at a time... that's what they do, even if it's a trillion trillion parameters heavily aligned on all accurate human communication ever.
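
For concreteness, here's roughly what that loop looks like (a minimal sketch using the Hugging Face transformers API with greedy decoding; the model name is just illustrative):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    # "Yes, and": start from the prompt and keep extending the script.
    ids = tokenizer("The improv actor takes the stage and",
                    return_tensors="pt").input_ids

    with torch.no_grad():
        for _ in range(20):                   # one token at a time
            logits = model(ids).logits        # scores over the whole vocabulary
            next_id = logits[0, -1].argmax()  # most likely continuation
            ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

    print(tokenizer.decode(ids[0]))

Real deployments sample from the distribution instead of always taking the argmax, but the loop is the same: look at the script so far, emit one more token, repeat.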

And that's similar enough to the basic mental loop of a "trained" improv actor to serve as a metaphor, I think.


> And that's similar enough to the basic mental loop of a "trained" improv actor

My point is that this is only superficially similar, and to claim deeper similarity is the anthropomorphic fallacy.

To form a useful mental model, the mechanics need to be similar enough to inform the person evaluating the system as the underlying system evolves.

As LLMs evolve, the improv analogy becomes less and less useful, because the LLM gets more and more accurate while the person doing improv is still just doing improv. The major difference is that people doing improv aren’t trying to be oracles of information. Perfecting the craft of improv may have nothing to do with accuracy, only believability or humor.

More generally, thinking of LLMs as analogous to intelligent humans introduces other misconceptions and leads people to overestimate accuracy.

The oddity at the bottom of all of this is that eventually, LLMs may be considered accurate enough to be used reliably as sources of information despite this “improvisation”, at which point it’s not really fair to call it improv, which would massively undersell the utility of the tool. But a perfectly accurate LLM may still not match any of the primary characteristics of an intelligent human, and it won’t matter as long as it’s useful.



