
Is it possible to fine-tune Llama 2 locally on an M1 Ultra with 64 GB? I'd like to know; any pointers would be appreciated. Most guides I've found are for the cloud or use Nvidia CUDA on Linux.



I don't think so. I have an M1 Max with 64 GB and it works okay for some inference, but for training I'm buying credits from RunPod instead. It will be a few tens of dollars to get the model trained.
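For reference, here's a rough sketch of one common approach, LoRA fine-tuning with Hugging Face transformers + peft, that could run on a rented CUDA instance like RunPod. This isn't the commenter's exact setup; the model name, dataset, and hyperparameters are placeholders, not a tested recipe (Llama 2 weights are gated and need Hugging Face access approval):

    # Rough LoRA fine-tuning sketch (transformers + peft + datasets installed).
    # Assumes a CUDA GPU; model, dataset, and hyperparameters are placeholders.
    import torch
    from datasets import load_dataset
    from peft import LoraConfig, get_peft_model
    from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                              TrainingArguments, DataCollatorForLanguageModeling)

    model_name = "meta-llama/Llama-2-7b-hf"  # gated; requires approved access
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    tokenizer.pad_token = tokenizer.eos_token

    model = AutoModelForCausalLM.from_pretrained(
        model_name, torch_dtype=torch.float16, device_map="auto")

    # Wrap the base model with low-rank adapters; only the adapters are trained.
    lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                      target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
    model = get_peft_model(model, lora)

    # Small public dataset used purely as a placeholder corpus.
    ds = load_dataset("Abirate/english_quotes", split="train[:1000]")
    ds = ds.map(lambda x: tokenizer(x["quote"], truncation=True, max_length=256),
                remove_columns=ds.column_names)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="llama2-lora",
                               per_device_train_batch_size=1,
                               gradient_accumulation_steps=8,
                               num_train_epochs=1, learning_rate=2e-4,
                               logging_steps=10),
        train_dataset=ds,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()
    model.save_pretrained("llama2-lora")  # saves only the adapter weights

Because only the LoRA adapter weights are updated, the memory and dollar cost stays in the range the comment above describes, rather than what full fine-tuning of a 7B model would need.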



