
It says right up front that it recognized the problem formulation from its training set. You need to change the context and formulation enough that it’s no longer able to parrot back the “classic solution” and actually has to do its own logical inference.



I am very skeptical of LLMs in general (check my post history), but look:

https://chat.openai.com/c/7070efe7-3aa1-4ccc-a0fc-8753d34b05...

I doubt this formulation existed before -- I came up with it myself just now.


"Unable to load conversation".




