The claim is that they were being hyperbolic in an effort to generate hype for their product. You claimed 'people have bad memories' and that they never made such claims. Now you are saying 'okay, they made such claims, but...' So far as I can tell, your opinion amounts to: if they made such claims, OpenAI wins; if they didn't make such claims, OpenAI wins. Gee, I wonder what your opinion is.
None of the claims in the paper are hyperbolic; they describe things that happened.
An experiment to find something out isn't hyperbolic even when the result is "hahah no". The very concept of a test requires more than one possible answer.
Paying attention to potential risks before you have had a chance to evaluate them is exactly what people demand whenever a group fails to do so and discovers the risk by causing harm.
Or have you never noticed that? "Why didn't the government prevent this attack!" and "Why didn't Facebook realise their software was enabling a genocide!" etc.
Perhaps I was being overly generous by blaming this on memory rather than on reading comprehension worse than that of the very LLM being laughed at.