Hacker News

Accompanying working paper demonstrating 85% accuracy of GPT-4 in replicating the results of 70 social science experiments: https://docsend.com/view/qeeccuggec56k9hd



Do you even get an 85% replication rate with humans in social science? That doesn't seem right.

At least it can give researchers hints about where to look, but going down that path is dangerous: it hands LLM operators the power to shape social science.


The study isn't attempting replication; it appears to test how often GPT-4 predicts human responses to survey studies. Having read it, I found the authors unclear about how they fed the studies into the LLM when eliciting predictions. The training data was equally unclear, with only a few lines devoted to it. For an 18-page paper, there is barely any detail on the methods employed. I also don't think the word "replication" makes sense here.



