With more and better models being released, access to OpenAI's API is becoming less interesting. What is interesting is having access to GPUs (cloud or real hardware). The world is divided into those who have access to enough GPU power and those who don't.
My prediction is that this will get worse for a while.
When I use ChatGPT, I find anything but GPT-4 unusable; when I'm programming against the API, I tend to find myself using GPT-3.5. I still haven't had a chance to experiment with the open-source LLMs, but that's my project for next month.
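As a concrete example, programming against the API with GPT-3.5 means issuing chat-completion requests like the sketch below, using the official `openai` Python package (v1.x style); the prompt and parameters are illustrative assumptions, not something from this post:

```python
# Minimal sketch of a GPT-3.5 chat-completion call via the openai SDK (v1.x).
# The prompt and temperature are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain what a B-tree is in two sentences."},
    ],
    temperature=0.7,
)

print(response.choices[0].message.content)
```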