Not really; the LLaMA model is only available on request, and access is granted on a "case by case basis" [1], which for most of us makes it about as available as GPT-3.
I was mostly talking about access to the trained model weights. The OpenAI API is certainly better than nothing, but it is very restrictive and cost-prohibitive for many purposes. For instance, you have to adhere to the OpenAI usage policies, and while they offer fine-tuning services, those are likely not flexible enough to implement techniques like RLHF, which is the basis for ChatGPT.
That said, if LLaMA can achieve performance competitive with GPT-3 with just 13B parameters, I imagine it is only a matter of time until open-source pre-trained models based on this architecture become available, which would render GPT-3 obsolete.
[1] https://ai.facebook.com/blog/large-language-model-llama-meta...