Hey man, thanks for the article. I like that it's concise and simple. One thing that's not clear to me is the inference stage: where does this inference process run? Do you need a GPU-powered instance, or could it be run on a consumer laptop?
Has anyone run this on an M1 Mac with GPU acceleration enabled?
I did my first hello-world fine-tuning of Llama 2 today, using Google Colab and various pieces of code. So no resuming from a checkpoint, I had to specify the number of epochs manually, there was no tracking of the loss, and I didn't extract the LoRA weights. With Llama 2 7B I ended up with a 13 GB GGUF file, which I successfully tried out locally with llama.cpp and its web UI.
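For anyone curious what the local inference step looks like, here's a rough sketch using llama-cpp-python (the Python bindings for llama.cpp); the model path and parameter values are just placeholders for whatever your fine-tuning run produced:

```python
# Minimal sketch: running a fine-tuned GGUF model locally with
# llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="./llama-2-7b-finetuned.gguf",  # hypothetical path to your GGUF file
    n_ctx=2048,       # context window size
    n_gpu_layers=0,   # 0 = CPU only; raise this to offload layers to the GPU
)

# Simple completion call; returns an OpenAI-style response dict.
out = llm("Q: What is fine-tuning? A:", max_tokens=64)
print(out["choices"][0]["text"])
```

On a CPU-only laptop this should work as-is, just slowly; on an M1 Mac, I believe installing llama-cpp-python with Metal support and setting n_gpu_layers above 0 gets you GPU acceleration.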
Axolotl seems like a nice project that will hopefully make fine-tuning easier.