Hacker News

> I was under the assumption that finetuning LLMs was useful only when you need to change the model's tone (speak like a pirate, Voldemort, etc.).

A lot of why I tried this out was to test the limits of this belief; you see a lot of talk like this out there, and it sounded like nonsense to me.

Finetuning is fundamentally not much different from continued pretraining: if you feed the model high-quality, high-volume data, I think it's reasonable to expect it to acquire new skills.
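
To make the point concrete: both finetuning and continued pretraining typically optimize the same next-token cross-entropy loss; what differs is the data, and (for instruction finetuning) often a loss mask over the prompt tokens. A minimal sketch with toy logits, assuming NumPy; the function name and the 0/1 masking convention are illustrative, not from any particular library:

```python
import numpy as np

def next_token_loss(logits, targets, mask):
    """Standard next-token cross-entropy, averaged over unmasked positions.

    logits: (T, V) scores over a vocab of size V
    targets: (T,) integer token ids
    mask: (T,) 0/1 weights selecting which positions contribute to the loss
    """
    logits = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    tok_loss = -log_probs[np.arange(len(targets)), targets]
    return (tok_loss * mask).sum() / mask.sum()

rng = np.random.default_rng(0)
T, V = 6, 10
logits = rng.normal(size=(T, V))
targets = rng.integers(0, V, size=T)

# Continued pretraining: every position contributes to the loss.
pretrain_loss = next_token_loss(logits, targets, np.ones(T))

# Instruction finetuning: identical objective, but prompt tokens are
# often masked out so only the response tokens are trained on.
sft_mask = np.array([0, 0, 0, 1, 1, 1])
sft_loss = next_token_loss(logits, targets, sft_mask)
```

The objective is the same in both cases; only the data distribution and the mask change, which is why high-quality, high-volume finetuning data can teach genuinely new behavior rather than just tone.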



