
What impresses me, frankly, is how bad it is.

I can't claim to have tried every model out there, but most models fail very quickly when asked to do something along the lines of "describe the interaction of 3 entities." They can usually handle two, up to the point where they inevitably start talking in circles (many models repeat entire chunks verbatim), but three seems utterly beyond them.

LLMs might have a role in the field of "burn money to generate usually-wrong ideas that are cheap enough to check in case there's actually a good one" though.



