Hacker News

Not really.

Technically, there's no fundamental difference between input and output tokens.

The model's internal state is exactly the same after evaluating a given sequence of tokens, no matter which of them were produced by the prompter and which by the model.
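A toy sketch of this point (not a real LLM; the "state" and next-token functions are stand-ins): an autoregressive model appends its own outputs to the context, so generating some tokens and then continuing is indistinguishable from having been given those tokens as a prompt.

```python
def toy_state(tokens):
    # Stand-in for the model's internal state: any deterministic
    # function of the token sequence works for this illustration.
    state = 0
    for t in tokens:
        state = (state * 31 + t) % 10_000
    return state

def toy_next_token(tokens):
    # Stand-in for next-token prediction, driven only by the state.
    return toy_state(tokens) % 50

def generate(prompt, n):
    # Autoregressive loop: each output token is appended to the
    # context and treated exactly like an input token afterwards.
    tokens = list(prompt)
    for _ in range(n):
        tokens.append(toy_next_token(tokens))
    return tokens

prompt = [3, 1, 4, 1, 5]
# Generate 4 tokens, then resume from that sequence for 2 more...
resumed = generate(generate(prompt, 4), 2)
# ...which matches generating all 6 in one pass: the model cannot
# tell which tokens it produced and which came from the prompter.
assert resumed == generate(prompt, 6)
```

The same reasoning is why "continue" buttons work: the interface just re-submits the partial output as context.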

The 16k output token limit is just an arbitrary limit in the ChatGPT interface.


> The 16k output token limit is just an arbitrary limit in the ChatGPT interface.

It is a hard limit in the API too, although frankly I have never seen an API output go over 700 tokens.
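For reference, that cap surfaces in the API as a per-request parameter. A minimal sketch of the request payload (no network call is made; the model name and limit value here are illustrative, not taken from the thread):

```python
import json

# Hypothetical request body for a chat-completions-style API.
# "max_tokens" caps only the *generated* (output) tokens; prompt
# length is bounded separately by the model's context window.
payload = {
    "model": "gpt-4o",  # illustrative model name
    "messages": [
        {"role": "user", "content": "Summarize this thread."},
    ],
    "max_tokens": 16_000,  # the output-token cap discussed above
}

# Serializes cleanly, as it would be sent on the wire.
body = json.dumps(payload)
```

Requests asking for more than the model's hard output limit are rejected or truncated by the server regardless of what the client passes here.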

