
With more and better models released, getting access to OpenAI's API is becoming less interesting.

What is interesting is having access to GPUs (cloud or real hardware). The world is divided into "has access to enough GPU power" and "doesn't have access to enough GPU power".

My prediction is that this will get worse for a while.




"Better"?

Get out of town.


Would you care to explain what you mean?

I meant that 40B Falcon is better than 65B LLaMA on many benchmarks, and we will see even better models released over time.

What is wrong with that?


That is fair; your post left it a bit ambiguous whether you meant better relative to GPT-4 or not.

Competitors aren't even at GPT 3.5.


When I use ChatGPT, I find anything but GPT-4 unusable; when I'm programming against the API, I actually tend to find myself using GPT-3.5. I still haven't had a chance to experiment with the open-source LLMs, but that's my project for next month.


I have preferred GPT-3.5. GPT-4 is slow and verbose; I only resort to it when none of my prompts to 3.5 gave the expected result.
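The workflow described here (try the faster, cheaper model first and only escalate to GPT-4 when the result is off) can be sketched roughly as below. This is a minimal illustration, not a real client: `call_model` and `is_acceptable` are hypothetical stand-ins the caller supplies, e.g. wrapping an actual API call and a quality check.

```python
def answer_with_fallback(prompt, call_model, is_acceptable, max_retries=2):
    """Try the cheaper/faster model first; escalate to the stronger one.

    call_model(model_name, prompt) -> str   # hypothetical API wrapper
    is_acceptable(reply) -> bool            # caller-defined quality check
    """
    for _ in range(max_retries):
        reply = call_model("gpt-3.5-turbo", prompt)
        if is_acceptable(reply):
            # Cheap model was good enough; no need for GPT-4.
            return reply, "gpt-3.5-turbo"
    # All cheap attempts failed the check: resort to the stronger model.
    return call_model("gpt-4", prompt), "gpt-4"
```

For example, with a stubbed `call_model` that returns a weak answer for `"gpt-3.5-turbo"` and a strong one for `"gpt-4"`, a strict `is_acceptable` drives the function to the GPT-4 branch, while a lenient one stops at 3.5.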





