
> A question: when GPT-4 contradicted the explanations, how many of them were in fact correct?

It was mostly when a card is good in a vacuum but not as good in a specific set. WOE (which this was trained on) skewed pretty aggressive, so GPT-4 tended to overvalue strong, expensive cards (compared to what good players thought, at least).
