
> that is AI above and beyond what many humans can do, which is "awesome" no matter how you put it.

That's not the point being made. The point OP is making is that you can't judge how impressive a model's "generalization" to unseen situations is unless you know how different the training set is from the test set. If the two are extremely similar, then the model only generalizes weakly (this is also why the world's strongest chess bot needs to play millions of games to beat the average grandmaster, who has played fewer than 10,000 games in her lifetime). Weak generalization vs. strong generalization.

Perhaps all such published results should contain info about this "difference" so it becomes easier to judge the model's true learning capabilities.
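As a rough illustration of what reporting that "difference" could look like: measure how far each test example sits from its nearest training example in some feature space. This is just a hypothetical sketch with placeholder data and a made-up feature dimension, not anything from a published result:

    # Hypothetical sketch: quantify train/test "difference" as the distance from
    # each test example to its nearest training example in a shared feature space.
    # The random vectors below stand in for real embeddings.
    import numpy as np

    def nearest_train_distances(train_feats: np.ndarray, test_feats: np.ndarray) -> np.ndarray:
        """For each test vector, return the Euclidean distance to the closest train vector."""
        # Pairwise squared distances via ||a - b||^2 = ||a||^2 - 2 a.b + ||b||^2
        sq = (
            (test_feats ** 2).sum(axis=1, keepdims=True)
            - 2.0 * test_feats @ train_feats.T
            + (train_feats ** 2).sum(axis=1)
        )
        return np.sqrt(np.maximum(sq, 0.0).min(axis=1))

    rng = np.random.default_rng(0)
    train = rng.normal(size=(10_000, 64))
    test_near = rng.normal(size=(1_000, 64))          # same distribution as training
    test_far = rng.normal(loc=3.0, size=(1_000, 64))  # shifted: genuinely "different" from training

    print("near-distribution test, median NN distance:", np.median(nearest_train_distances(train, test_near)))
    print("shifted test, median NN distance:          ", np.median(nearest_train_distances(train, test_far)))

A small median distance would suggest the test set is mostly "more of the same" (weak generalization suffices); a large one would make strong generalization claims more credible.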




I guess weaker generalisation is also why it's better in the end, though. It converges more slowly, but its knowledge ends up more subtle. So my bet is that with more compute, programming and math get "solved" - not in the research sense, but as a very helpful "copilot".

The real fun will begin once someone discovers how to make any problem differentiable, so the trial-and-error approach isn't needed. I suggest watching the recent Yann LeCun interview. That would solve research itself as well.
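For what it's worth, a toy example of what "making a problem differentiable" can mean: replace a hard, non-differentiable argmax choice with a temperature-controlled softmax, so the objective is smooth and an optimizer can tune it directly instead of searching by trial and error. This is just a generic relaxation for illustration, not whatever LeCun describes in the interview:

    # Hedged sketch: a differentiable stand-in for a discrete argmax choice.
    # The softmax is smooth, so an autodiff framework could push gradients through it;
    # lowering the temperature recovers something close to the hard choice.
    import numpy as np

    def soft_choice(scores: np.ndarray, temperature: float = 1.0) -> np.ndarray:
        """Softmax weighting over options as a smooth relaxation of argmax."""
        z = (scores - scores.max()) / temperature
        w = np.exp(z)
        return w / w.sum()

    scores = np.array([1.0, 2.5, 0.3])
    print(soft_choice(scores, temperature=1.0))   # soft weights over the three options
    print(soft_choice(scores, temperature=0.05))  # low temperature approaches the hard argmax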




