https://platform.openai.com/docs/guides/reasoning/advice-on-... this explains the bug: o1 can't see its own past reasoning tokens. That would seem to limit the expressivity of the chain of thought. Within a single step it may be as expressive as a universal Turing machine, but because that memory is lost between steps, extra steps are needed to make sure the right information gets passed forward. The model is likely to forget key ideas it had but didn't write into its output, so it will tend to drift, focusing more on its final statements and less (or not at all) on the reasoning that led to them.
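A toy sketch of the failure mode being described (this is an illustration of the information-loss argument, not the actual o1 mechanism or API): each step produces both hidden reasoning and visible output, but only the visible outputs are carried into the next step's context, so a key idea that lives only in the hidden reasoning is unrecoverable later.

```python
def run_steps(steps, prompt):
    """Run a chain of steps where each step sees only prior *outputs*."""
    visible = [prompt]
    for f in steps:
        hidden, output = f("\n".join(visible))  # hidden reasoning is discarded
        visible.append(output)                  # only the output persists
    return visible

# Step 1 "thinks of" a key idea but omits it from its visible output.
def step1(ctx):
    hidden = "key idea: the answer hinges on X"
    output = "I considered several options."
    return hidden, output

# Step 2 sees only the visible context, so the key idea is gone.
def step2(ctx):
    hidden = "searching the context for the key idea..."
    found = "key idea" in ctx
    output = f"key idea recovered: {found}"
    return hidden, output

trace = run_steps([step1, step2], "Solve the problem.")
print(trace[-1])  # -> "key idea recovered: False"
```

This is why the comment says extra steps are needed: the only way for step 1's key idea to survive is for step 1 to spend output tokens restating it.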
