I'm trying to fine-tune a GPT-J model from GPT4All.
Here is what I need:
1) A specialized LLM that can answer specific prompts
Here is what I don't have:
1) A GPU
2) A training dataset
I understand that to fine-tune a model, I will need to generate this specific dataset. I have ideas about how to generate it, but I don't know what form it should take: column names, file extensions, etc. I also know that I might need to rent GPUs from vast.ai, but I don't know where the dataset should be stored so that I can train the model remotely.
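To make the format question concrete, here is a minimal sketch of what I imagine the dataset could look like: a JSONL file of prompt/response pairs, pushed to the Hugging Face Hub so a rented vast.ai box can pull it down. The field names and the repo name are just my guesses, not something I've confirmed against GPT4All's actual training scripts:

    # A guess at the dataset format: JSONL, one prompt/response pair per line.
    # The "prompt"/"response" field names are my assumption, not confirmed.
    import json

    examples = [
        {"prompt": "What does our internal /status endpoint return?",
         "response": "A JSON object with 'uptime' and 'version' fields."},
        {"prompt": "How do I reset a user's password?",
         "response": "Call POST /users/{id}/reset with an admin token."},
    ]

    with open("train.jsonl", "w") as f:
        for ex in examples:
            f.write(json.dumps(ex) + "\n")

    # Storage idea: push the file to the Hugging Face Hub so the rented GPU
    # can fetch it. Requires `huggingface-cli login` first; the repo name
    # below is hypothetical.
    from datasets import load_dataset

    ds = load_dataset("json", data_files="train.jsonl")
    ds.push_to_hub("my-username/my-gptj-finetune-data")

On the vast.ai machine I'd then call load_dataset("my-username/my-gptj-finetune-data") to fetch it, which I think would answer the storage question. If that's the wrong shape for the data or the wrong place to keep it, corrections welcome.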
Is there a beginner's guide that breaks all of this down? I'll be very grateful for an answer.
How to fine-tune GPT-J with small dataset https://ai.stackexchange.com/questions/32436/how-to-fine-tun...
Fine-tuning GPT-J, the GPT-3 open-source alternative https://nlpcloud.com/fine-tuning-gpt-j-gpt-3-alternative.htm...