I disagree with this comment, and anyone reading it should take it with a big grain of salt. Let's go back to 2016 and replace "LLM" with "reinforcement learning". Everyone thought every problem could be solved by RL because it imposes looser restrictions on the problem space. But then RL failed to deliver real-world benefits beyond some very specific circumstances (well-defined games), and supervised learning is/was still king for 99% of problems.

Yes, LLMs are amazing, but they won't be winning every single Kaggle competition or displacing every other ML algorithm in every setting.




Sure, LLMs are not going to win every Kaggle competition. But I am fairly certain that transformers may. Embed all categorical values, scale continuous features by learned embeddings, and run the resulting tokens through a transformer (in effect, a graph neural network over a fully connected feature graph); with high probability it will beat nearly everything.
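
For the curious, here is a minimal sketch of that recipe (my own illustration in PyTorch, roughly the FT-Transformer idea; the class and parameter names are hypothetical, not from any particular winning solution):

  import torch
  import torch.nn as nn

  class TabularTransformer(nn.Module):
      def __init__(self, cat_cardinalities, n_continuous, d_model=64,
                   n_heads=4, n_layers=2, n_classes=2):
          super().__init__()
          # One embedding table per categorical feature.
          self.cat_embeds = nn.ModuleList(
              nn.Embedding(card, d_model) for card in cat_cardinalities)
          # One learned vector per continuous feature; the raw scalar value
          # scales it ("scale continuous features by embeddings").
          self.cont_embeds = nn.Parameter(torch.randn(n_continuous, d_model))
          self.cls = nn.Parameter(torch.randn(1, 1, d_model))
          layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
          self.encoder = nn.TransformerEncoder(layer, n_layers)
          self.head = nn.Linear(d_model, n_classes)

      def forward(self, x_cat, x_cont):
          # x_cat: (batch, n_cat) integer codes; x_cont: (batch, n_cont) floats.
          tokens = [emb(x_cat[:, i]) for i, emb in enumerate(self.cat_embeds)]
          tokens += [x_cont[:, j:j + 1] * self.cont_embeds[j]
                     for j in range(self.cont_embeds.shape[0])]
          seq = torch.stack(tokens, dim=1)            # (batch, n_features, d_model)
          seq = torch.cat([self.cls.expand(len(seq), -1, -1), seq], dim=1)
          return self.head(self.encoder(seq)[:, 0])   # predict from the CLS token

  model = TabularTransformer(cat_cardinalities=[10, 5], n_continuous=3)
  logits = model(torch.randint(0, 5, (8, 2)), torch.randn(8, 3))

Every feature becomes a token, so self-attention plays the role of the fully connected graph over features.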


Transformers require a lot of data to converge. There's a reason gradient-boosted tree models are still the kings of Kaggle even though transformers have been around for five years now.
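
For a sense of what that baseline looks like in practice, a minimal sketch (my own illustration, assuming scikit-learn >= 1.0; the data here is synthetic):

  from sklearn.datasets import make_classification
  from sklearn.ensemble import HistGradientBoostingClassifier
  from sklearn.model_selection import train_test_split

  # A small tabular dataset -- the regime where boosted trees shine
  # and data-hungry transformers struggle.
  X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
  X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

  clf = HistGradientBoostingClassifier().fit(X_tr, y_tr)
  print(clf.score(X_te, y_te))  # strong accuracy with no tuning at all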


To me, the vast majority of people in the field seem to hold the view that different technologies suit different problems.

Also, are you aware that one of the most prominent AI tools of the moment (ChatGPT) was trained with RL? (Specifically RLHF, reinforcement learning from human feedback.)



