
Tough question (for me). Assuming the model is producing its own queries, am I wrong to wonder whether it's fundamentally different from human reasoning at all?



It could just be programmed to follow up by querying itself with a prompt like "Come up with arguments that refute what you just wrote; if they seem compelling, try a different line of reasoning, otherwise continue with what you were doing." Different such self-administered prompts along the way could guide it through what seems like reasoning, but would really be just a facsimile thereof.
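A minimal sketch of what that self-administered follow-up loop might look like. query_model() here is a hypothetical stand-in for whatever generation call the real system uses, not an actual API; the control flow is the point.

    # Sketch of a self-critique loop driven by canned follow-up prompts.
    # query_model() is a placeholder for the underlying model call.

    REFUTE_PROMPT = (
        "Come up with arguments that refute what you just wrote; "
        "if they seem compelling, say REVISE and try a different line of "
        "reasoning, otherwise say CONTINUE and keep going."
    )

    def query_model(prompt: str, history: list[str]) -> str:
        # Placeholder: a real system would send the prompt plus prior turns
        # to the model; here it just returns a canned reply so the sketch runs.
        return f"(model reply to: {prompt!r})"

    def answer_with_self_critique(question: str, max_rounds: int = 3) -> str:
        history: list[str] = []
        draft = query_model(question, history)
        history.append(draft)
        for _ in range(max_rounds):
            critique = query_model(REFUTE_PROMPT, history)
            history.append(critique)
            if "REVISE" not in critique:
                break  # the model found its own objections unconvincing
            # Otherwise ask for a fresh attempt that accounts for the objections.
            draft = query_model("Try a different line of reasoning.", history)
            history.append(draft)
        return draft

From the outside this produces something that reads like deliberation, even though each step is just another prompted completion.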


Maybe the model doesn't do multiple queries but just one long query guided by thought tokens.
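In that case the "follow-up" never happens as a separate call; the model emits thought text and the final answer in one generation, and the wrapper just strips the thoughts out. A rough sketch, with generate() standing in for a single hypothetical model call:

    # Sketch of the single-query alternative: one long generation guided by
    # inline "thought" tokens, no separate self-queries. generate() is a
    # placeholder that returns a canned example so the sketch runs.

    def generate(prompt: str) -> str:
        return ("<thought>Consider objections... they don't hold.</thought>\n"
                "<answer>42</answer>")

    def answer_single_pass(question: str) -> str:
        output = generate(
            question + "\nThink step by step inside <thought> tags, "
            "then give the final answer inside <answer> tags."
        )
        # The thought tokens guided the generation but are discarded from
        # what the user actually sees.
        start = output.index("<answer>") + len("<answer>")
        end = output.index("</answer>")
        return output[start:end]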



