
I think simply making the vocab more code-friendly (e.g. Codex) would make the biggest difference. Whitespace is the biggest one (afaik every space is a token), but also consider how many languages share constructs like `for(int i=0;`, `) {\n`, `} else {`, `import `, etc.
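
A minimal sketch of how you could measure that, assuming the transformers library and a Llama 2 checkpoint (the model name and example snippets here are just illustrative):

  # Count how many tokens some common code constructs cost under the
  # Llama 2 tokenizer; any Llama 2 tokenizer checkpoint would do.
  from transformers import AutoTokenizer

  tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

  for snippet in ["for (int i = 0;", "} else {", "import numpy as np"]:
      ids = tok.encode(snippet, add_special_tokens=False)
      print(snippet, "->", len(ids), "tokens:", tok.convert_ids_to_tokens(ids))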

My understanding is that a model properly trained on multiple languages will beat an expert-based system. Programming languages overlap and interoperate with each other enough that I wouldn't want to specialize in just one language.

There are also far more tokens to train on if you go multi-language. I'd guess only the most popular languages would even have enough training data for a specialized version, but it could still be an interesting trade-off for certain use cases. Being able to run a local code assistant with a 32k context window on a TypeScript-only project, for example, would come in handy for a lot of people. I don't know enough to understand the impact of vocab size vs. context size.


It's worth noting that, from what I can tell, a model well trained on most languages would be able to learn the niche ones much more easily.

The vocab size of Llama 2 is 32,000. I personally don't think there's enough difference between programming languages to save any meaningful number of tokens, considering the magnitude of the current vocab.


I wonder if you could train a model generally across a lot of languages, then specialize it for a specific one with a different tokenizer / limited vocabulary? Here's the reference I've been using for Llama 2 tokens:

https://huggingface.co/mlc-ai/mlc-chat-Llama-2-7b-chat-hf-q4...

It looks like limiting it to just English would cut the count almost in half, and further limiting the vocab to a specific programming language could cut it down even more. Pure armchair theory-crafting on my part; I have no idea whether limiting the vocab is even a reasonable way to improve context handling. But it's an interesting idea: build on a base, then specialize as needed, and let the user swap out the LLM on an as-needed basis (or the front-end tool could simply detect the language of the project). 3B or smaller models with very long context that excel at one specific thing could be really useful (e.g. a local code completer for English TypeScript projects).
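
For what it's worth, here's a rough way to sanity-check that "almost half" figure, assuming the transformers library (Llama 2 token strings use the SentencePiece '▁' marker for leading spaces, which this strips first; ASCII is just a crude proxy for "English + code"):

  # Estimate how much of the Llama 2 vocab is plain ASCII.
  from transformers import AutoTokenizer

  tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
  vocab = tok.get_vocab()  # maps token string -> id

  ascii_only = [t for t in vocab if t.replace("\u2581", " ").isascii()]
  print(len(ascii_only), "/", len(vocab), "tokens are pure ASCII")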
