In most of the GPT-3 conversations you see online, what people do is take the output from the first prompt and include it in the next prompt. That's how GPT-3 keeps the thread of the conversation going without constantly changing subject. It's also why those conversations are relatively short: as you can imagine, if you keep feeding the output back to the model as input, you run out of input size very quickly.
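If it helps to see it concretely, here's a minimal sketch of that loop in Python. The `complete()` function is a hypothetical stand-in for whatever GPT-3 completion call you're using, and the character budget is just an illustration of the real limit, which is counted in tokens:

```python
# A minimal sketch of the "feed the output back in" loop, assuming a
# hypothetical complete() function in place of a real GPT-3 API call.

MAX_PROMPT_CHARS = 8000  # illustrative; real limits are counted in tokens


def complete(prompt: str) -> str:
    # Hypothetical stand-in: a real version would send `prompt` to the
    # model and return its completion text.
    return "(model reply goes here)"


def chat() -> None:
    transcript = ""
    while True:
        user_line = input("You: ")
        transcript += f"Human: {user_line}\nAI:"
        reply = complete(transcript)  # the model sees the whole history
        transcript += f" {reply}\n"
        print(f"AI: {reply}")
        if len(transcript) > MAX_PROMPT_CHARS:
            # The transcript only ever grows, so it eventually blows past
            # the model's input size; that's why these chats stay short.
            print("(context budget exhausted)")
            break


if __name__ == "__main__":
    chat()
```

Note that the transcript only grows with each turn, which is exactly where the length problem comes from.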
So, yes, that's the done thing already.