
I would personally expect the extra tokens you give it to carry meaning: a more detailed description should help.

But the fact that simply adding extra time to think improves the quality of the answer is interesting on its own.

I might test later whether asking it to count to 100 before giving an answer also improves the quality.




You have not demonstrated that adding extra time to think improves the quality of its answers. That is your pet hypothesis, and you think you've proved it with an n=3 trial.

I think you're trying to apply a simple rule of thumb - the idea that longer context is effective because it lets the LLM think more - to situations where we'll see the opposite effect.

For example, if you ask it to count to 100 and then solve a math benchmark, my intuitive sense is that it'll be much worse, because you're occupying the context with noise irrelevant to the final task. In cheaper models, it might even fail to finish the count.

But I would love to be proven wrong!

If you're really interested in this, let's design a real experiment with 50 or so problems and measure the effect of context padding across a thousand or so answer attempts.
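
A rough sketch of what that harness could look like in Python, under some assumptions: the padding string, the answer-matching check, and the toy stand-in model are all placeholders I made up, not anyone's actual setup. With 50 problems x 10 attempts x 2 conditions, it covers the thousand or so attempts mentioned above.

    import random
    from typing import Callable

    # Any callable mapping a prompt string to a completion string.
    # Swap in a real LLM API call here for the actual experiment.
    Model = Callable[[str], str]

    PADDING = "Before answering, count from 1 to 100.\n\n"

    def accuracy(model: Model, problems: list[tuple[str, str]],
                 pad: bool, attempts_per_problem: int = 10) -> float:
        """Fraction of attempts whose completion contains the expected answer."""
        correct = total = 0
        for question, answer in problems:
            prompt = (PADDING if pad else "") + question
            for _ in range(attempts_per_problem):
                correct += answer in model(prompt)
                total += 1
        return correct / total

    if __name__ == "__main__":
        # Toy stand-in so the harness runs end to end without an API key.
        def toy_model(prompt: str) -> str:
            return random.choice(["the answer is 4", "the answer is 5"])

        problems = [("What is 2 + 2?", "4")] * 50  # ~50 problems, as proposed
        print("padded:  ", accuracy(toy_model, problems, pad=True))
        print("unpadded:", accuracy(toy_model, problems, pad=False))

The substring check for grading is crude; a real run would want stricter answer extraction, and a significance test on the two proportions before claiming any effect.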



