I like the analogy that GPT is improvising/speaking, like you would in a normal conversation. When I talk out loud (quickly), I only have ~1 word of lookahead, just like GPT. But if I want good answers to hard questions, I need to slow down and actually write[1] something down. So the wrapper script that iterates/recurses on responses that OP describes is analogous to the slowing-down process of writing.
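A minimal sketch of what such a wrapper loop might look like. This is purely illustrative: `call_model` is a hypothetical stand-in for whatever GPT-style completion API is being used (here stubbed so the example runs offline), and the prompt format is an assumption, not OP's actual script.

```python
def call_model(prompt: str) -> str:
    """Stub for a GPT-style completion call (replace with a real API)."""
    # Toy behavior: append a marker so each iteration is visible.
    return prompt + " [refined]"

def iterate_on_response(initial_prompt: str, rounds: int = 3) -> str:
    """Feed the model's own output back as input -- a crude analogue of
    'slowing down and writing' instead of improvising in one pass."""
    draft = call_model(initial_prompt)
    for _ in range(rounds - 1):
        # Ask the model to critique and improve its previous draft.
        draft = call_model(f"Improve this draft:\n{draft}")
    return draft

result = iterate_on_response("Outline a movie script about a heist.")
print(result)
```

With a real model behind `call_model`, the interesting question is exactly the one below: does this loop converge to a stable draft, or keep drifting?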
I'm also curious what sort of results the iterative process can lead to. The movie script example in OP is impressive, but does it reach a stable state? Does it work for other types of prompts (coding-related and the others I've seen on Twitter)? All very interesting.
I wrote this post because most people are misinterpreting ChatGPT as a complete application. OpenAI is not doing themselves any favors with their marketing, either.
I discuss why GPT-like systems have the failure modes they do and how to mitigate those failure modes to create useful applications.
While GPT-like systems can be complete applications for a narrow set of use-cases, they are better thought of as a core foundational unit within a larger system, like a CPU is only a core foundational unit of an end-user device. This post describes how to build a rudimentary version of such a system.
The system described is a general purpose AI agent that uses a GPT-like system as a sort of "intuitive mode thinking", and layers on more complex modes of thought required of a useful agent.
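The layering described above can be sketched as an outer loop wrapped around the intuitive core. This is a hypothetical illustration of the architecture, not the post's actual implementation: `intuition` stands in for a single GPT-style completion, and `acceptable` stands in for a slower verification step (another model call, unit tests, a human check, etc.).

```python
def intuition(prompt: str) -> str:
    """Stand-in for one GPT-style completion: fast, ~1 token of lookahead."""
    return f"draft answer to: {prompt}"

def acceptable(answer: str, goal: str) -> bool:
    """Stand-in for a deliberate verification pass over the draft."""
    return goal in answer

def agent(goal: str, max_tries: int = 5) -> str:
    """Outer loop: retry/refine the intuitive answer until it meets the goal.
    This is the 'more complex mode of thought' layered on top of the core."""
    prompt = goal
    answer = ""
    for _ in range(max_tries):
        answer = intuition(prompt)
        if acceptable(answer, goal):
            return answer
        # Feed the rejection back in, so the next draft can improve.
        prompt = f"{goal} (previous attempt rejected: {answer})"
    return answer

print(agent("list three failure modes"))
```

The CPU analogy maps cleanly here: the completion call is the instruction executor, and the loop plus verifier is the surrounding program that makes it useful.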
[1] http://www.paulgraham.com/words.html