Good point — coming up with a novel joke is no joke. There's a genuine problem here: to a first approximation, GPT will have seen everything we can think of to test it with, in some form or other.
Of course, if we can't come up with something sufficiently novel to challenge it with, that also says something about the expected difficulty of its deployment. :-P
I guess once we find a more sample-efficient way to train transformers, it'll become easier to create a dataset where some entire genre of joke will be excluded.