
That doesn't seem straightforward - although it's blind to letters because all it sees are tokens, it doesn't have much training data ABOUT tokens.



What parent is saying is that instead of asking the LLM to play a game of Wordle with tokens like TIME, LIME, we ask it to play with tokens like T, I, M, E, L. This is easy to do.
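For instance, one way to set that up is to hand the model its guesses and feedback spelled out letter by letter, which tends to force a one-token-per-letter representation. A minimal sketch of the idea, assuming the tiktoken library and the cl100k_base encoding (exact splits vary by model):

    import tiktoken  # OpenAI's tokenizer library

    enc = tiktoken.get_encoding("cl100k_base")

    # A whole word is typically just one or two tokens...
    print([enc.decode([t]) for t in enc.encode("TIME")])
    # ...but spaced-out letters usually become one token per letter,
    # so the model "sees" each letter as its own unit:
    print([enc.decode([t]) for t in enc.encode("T I M E")])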


And if you tell it to think up a word that has an E in position 3 and an L that's somewhere in the word but not in position 2, it's not going to be any better at that if you tell it to answer one letter at a time.


The idea is, instead of five-letter-words, play the game with five-token-words.


That was my original interpretation, and while all it sees are tokens, roughly none of its training data is metadata about tokenizing. It knows far less about the positions of tokens in words than it does about the positions of letters in words.


I’m not sure that training data about that would be required. Shouldn’t the model be able to recognize that `["re", "cogn", "ize"]` represents the same sequence of tokens as `recognize`, assuming those are tokens in the model?

More generally, would you say that LLMs are unable to reason about sequences of items (not necessarily tokens) and compare them to some definition of “valid” sequences that would arise from the training corpus?


No. In the model, tokens are essentially arbitrary IDs. But if you consider a sentence to be a sequence of words, you can say that LLMs are quite competent at reasoning about those sequences.


ChatGPT is able to spell the word "recognize" when asked.

So it is able to take a sequence of tokens ["recogn", "ize"] and transform it into a sequence of tokens [" R", " E", " C", " O", " G", " N", " I", " Z", " E"].
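That mapping is easy to see from outside the model with a tokenizer. A minimal sketch, assuming tiktoken and the cl100k_base encoding (the actual splits of "recognize" and of the spelled-out answer may differ by model):

    import tiktoken  # assumes the cl100k_base encoding used by GPT-4-class models

    enc = tiktoken.get_encoding("cl100k_base")

    # How the input word is split (e.g. something like ['recogn', 'ize']):
    word_pieces = [enc.decode([t]) for t in enc.encode("recognize")]
    print(word_pieces)

    # How the spelled-out answer is split: typically one token per letter:
    spelled_pieces = [enc.decode([t]) for t in enc.encode("R E C O G N I Z E")]
    print(spelled_pieces)

    # The two token sequences denote the same word at the text level,
    # even though the model never sees that equivalence stated explicitly.
    print("".join(word_pieces) == "".join(spelled_pieces).replace(" ", "").lower())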



