
> While reasoning tokens are not visible via the API, they still occupy space in the model's context window and are billed as output tokens.

https://platform.openai.com/docs/guides/reasoning

So yeah, it really is bad product design: you pay output-token prices for tokens you never get to see. I hope Llama catches up in a couple of months.
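For what it's worth, the billed reasoning tokens do at least show up in the usage object the API returns, so you can see how much of your output bill is invisible. A minimal sketch with the official Python SDK (model name and the exact usage field paths are my reading of the docs, not verified here):

  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  response = client.chat.completions.create(
      model="o1-mini",  # assumed reasoning-model name for illustration
      messages=[{"role": "user", "content": "How many primes are below 100?"}],
  )

  usage = response.usage
  # Reasoning tokens never appear in the response text, but they are
  # counted (and billed) as part of the completion tokens.
  print("completion tokens:", usage.completion_tokens)
  print("reasoning tokens: ", usage.completion_tokens_details.reasoning_tokens)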




Most likely the model is similar in size to the original GPT-4, which was priced similarly.



