
That's how we all want it to work, but the reality today is that GPT-4 is better at almost any task than a fine-tuned version of any other model.

It's somewhat rare to have a task and a good enough dataset that you can fine-tune another model to get close enough to GPT-4 in quality for that task.
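For anyone wondering what fine-tuning "another model" looks like in practice, here's a rough sketch using the Hugging Face transformers Trainer. The model id, the task_dataset.jsonl file with a "text" field, and the hyperparameters are placeholder assumptions, not a recipe from this thread.

    from datasets import load_dataset
    from transformers import (
        AutoModelForCausalLM,
        AutoTokenizer,
        DataCollatorForLanguageModeling,
        Trainer,
        TrainingArguments,
    )

    # Placeholder: any open causal LM id works (use a small one to test cheaply).
    MODEL_NAME = "mistralai/Mistral-7B-v0.1"

    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

    # Hypothetical task-specific dataset: a JSONL file with a "text" field per example.
    dataset = load_dataset("json", data_files="task_dataset.jsonl")["train"]

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=512)

    tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(
            output_dir="finetuned-model",
            per_device_train_batch_size=4,
            num_train_epochs=3,
            learning_rate=2e-5,
            fp16=True,
        ),
        train_dataset=tokenized,
        # Causal LM objective: labels are the input ids, shifted inside the model.
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()

Whether the result gets anywhere near GPT-4 depends almost entirely on the size and quality of that dataset, which is the point above.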




GPT-4 is still heavily censored and will simply refuse to talk about many "problematic" things. How is that better than a completely uncensored model?


Depends on what you're using it for. For many use cases, the censorship is irrelevant.



