Would love an answer on this too. It would be even better not just to try using this, but also to be able to run it locally, something that has been impossible with GPT-3.
I mean, theoretically if you can get the model weights onto disk then you should be able to do the computation - but it might take days or months on commodity hardware. It would also require building software that can actually run inference this way, and I doubt there is much demand.
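For a rough sense of why it would be so slow: a quick back-of-envelope sketch, assuming GPT-3's published 175B parameter count, fp16 weights, the common ~2 FLOPs-per-parameter-per-token rule of thumb for a forward pass, and an optimistic ~100 GFLOP/s for a desktop CPU (all figures are order-of-magnitude assumptions, not measurements):

```python
# Rough feasibility estimate for running a GPT-3-sized model on
# commodity hardware. All figures are order-of-magnitude assumptions.

PARAMS = 175e9          # GPT-3's published parameter count
BYTES_PER_PARAM = 2     # assuming fp16 weights
CPU_FLOPS = 100e9       # optimistic ~100 GFLOP/s for a desktop CPU

# Weight storage: ~2 bytes per parameter in fp16.
weights_gb = PARAMS * BYTES_PER_PARAM / 1e9

# A forward pass costs roughly 2 FLOPs per parameter per token
# (one multiply and one add in each matrix multiplication).
flops_per_token = 2 * PARAMS
seconds_per_token = flops_per_token / CPU_FLOPS

print(f"weights: ~{weights_gb:.0f} GB")           # ~350 GB
print(f"compute: ~{seconds_per_token:.1f} s/token")  # ~3.5 s per token
```

So even ignoring compute speed, the weights alone would vastly exceed typical RAM, meaning constant streaming from disk - which is where the "days or months" estimate comes from in practice.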