The study isn't attempting replication; rather, it seems to test how accurately GPT-4 predicts human responses to survey studies. Even after reading the paper, I found the authors unclear about how they fed the studies whose responses they were trying to predict into the LLM. The training data was similarly underspecified, receiving only a few lines of discussion. For an 18-page paper, there is remarkably little methodological detail. I also don't believe the word "replication" is appropriate here.