Hacker News

The new GPT-4 model has a context length of 128K tokens. For consumers, a fully packed prompt works out to slightly more than $1 per message for input alone.
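The arithmetic behind that figure can be sketched as follows, assuming the 128K-token window and the $0.01 per 1K input-token rate quoted at the GPT-4 Turbo launch (both numbers are assumptions; adjust if the actual rate differs):

```python
# Back-of-envelope cost for one maxed-out prompt, input side only.
# Assumed figures: 128K context window, $0.01 per 1K input tokens.
CONTEXT_TOKENS = 128_000
INPUT_PRICE_PER_1K = 0.01  # USD, assumed launch rate

cost = CONTEXT_TOKENS / 1000 * INPUT_PRICE_PER_1K
print(f"${cost:.2f} per fully packed prompt (input only)")
```

At those rates a single full-context message costs about $1.28 before any output tokens are billed, which is where "slightly more than $1/message" comes from.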

If ChatGPT is using this model, then it's reasonable to assume they're bleeding money and need to cut costs.

People really need to stop asking ChatGPT to write out complete programs in a single prompt.




Interesting. How does generating less code cut costs for them? Does this tie back to the rumor that the board was mad at Altman for prioritizing ChatGPT over putting money into research and model training?


Code is very token-dense, from what I understand.



