Hacker News

It's even easier than that. There's no need to even fine-tune an LLM to do it. Here's a screenshot[1] of a 4-bit quantised version of an off-the-shelf open LLM (WizardLM 13B v1.2) doing it on my Mac.

[1]: https://imgur.com/a/S9jnHWJ
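For readers unfamiliar with what "4-bit quantised" means here: weights are stored as small integers plus a shared per-block scale, trading a little precision for a ~4x memory saving. Below is a minimal illustrative sketch of one simple scheme (absmax block quantization, similar in spirit to, but not the same as, the Q4 formats used by llama.cpp); it is not the actual format the model in the screenshot uses.

```python
def quantize_4bit(block):
    """Absmax 4-bit quantization: map floats onto signed integers in
    [-8, 7] using one shared scale for the whole block. This is an
    illustrative toy, not llama.cpp's actual Q4 packing."""
    scale = max(abs(x) for x in block) / 7.0
    if scale == 0.0:
        return [0] * len(block), 0.0
    q = [max(-8, min(7, round(x / scale))) for x in block]
    return q, scale

def dequantize_4bit(q, scale):
    """Recover approximate float weights from the 4-bit codes."""
    return [v * scale for v in q]

# Example block of weights (made-up values for illustration).
weights = [0.12, -0.98, 0.55, 0.01, -0.33, 0.76, -0.64, 0.20]
q, s = quantize_4bit(weights)
restored = dequantize_4bit(q, s)
# Reconstruction error is bounded by half the scale step.
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

Each value now fits in 4 bits (plus the shared scale), which is why a 13B model shrinks enough to run comfortably on a laptop.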




Yep, I use Llama2 70b for larger tasks on my MacBook and 13b for more “single use” type tasks. It’s a game changer.


That may be true, and for some tasks the accuracy may be high enough. I have gotten much more consistency in my tasks by fine-tuning, though.

For example, getting consistently good results for one shape of input doesn't guarantee the same performance on another shape of input.


The system confabulated the www subdomain of the “URL provided in the text”, right?



