
They said 7B LLaMA, which I read as the base LLaMA model, not this one specifically. All of these LLMs are trained on Stack Overflow, so it makes sense that they'd be good out of the box.



The top-level comment is specifically citing the performance of Code Llama against Codex.





