I was mostly talking about access to the trained model weights. The OpenAI API is certainly better than nothing, but it is very restrictive and cost-prohibitive for many purposes. For instance, you have to adhere to the OpenAI usage policies, and while they offer fine-tuning services, those are likely not sufficient to implement techniques like RLHF, which is the basis for ChatGPT.
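To make the point concrete, here is a toy sketch of why RLHF needs weight access (all names here are made up for illustration, and this is nothing like ChatGPT's actual training code): a "policy" picks one of a few canned responses, a stand-in "reward model" scores them, and a REINFORCE-style update nudges the policy toward higher-reward responses. The key observation is that the update writes directly into the weights, which an API that only returns completions cannot do.

```python
# Hypothetical toy illustration of the RLHF loop. Real RLHF fine-tunes
# an LLM with PPO against a learned reward model; here the "policy" is
# just three logits over canned responses, and the "reward model" is a
# hard-coded table standing in for human preference scores.
import math
import random

random.seed(0)

RESPONSES = ["rude reply", "off-topic reply", "helpful reply"]
REWARD = {"rude reply": -1.0, "off-topic reply": 0.0, "helpful reply": 1.0}

# The "model weights" -- RLHF needs direct write access to these.
logits = [0.0, 0.0, 0.0]

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def sample(probs):
    r, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1

lr = 0.1
for _ in range(2000):
    probs = softmax(logits)
    i = sample(probs)              # policy generates a response
    r = REWARD[RESPONSES[i]]       # reward model scores it
    # REINFORCE: d log pi(i) / d logit_j = (1 if j == i else 0) - probs[j]
    for j in range(len(logits)):
        grad = (1.0 if j == i else 0.0) - probs[j]
        logits[j] += lr * r * grad  # gradient step into the weights

best = max(range(len(logits)), key=lambda j: logits[j])
print(RESPONSES[best])
```

After training, the policy concentrates its probability mass on the response the reward model prefers; with only black-box completion access, none of these gradient steps are possible.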
That said, if LLaMA can achieve performance competitive with GPT-3 with just 13B parameters, I imagine it is only a matter of time until open-source pre-trained models based on this architecture become available, which would render GPT-3 obsolete.
What? GPT-3 is available through a public API that anyone can sign up for, pay for, and use commercially. How is that "as available"?