
It would be interesting to see a comparison between Prolog and an LLM built for a similar purpose.



> Searchformer [...] solves previously unseen Sokoban puzzles 93.7% of the time, while using up to 26.8% fewer search steps than standard A∗ search ... [and] significantly outperforms baselines that predict the optimal plan directly with a 5-10× smaller model size and a 10× smaller training dataset.

So this can be interpreted as a generic statement that there might be progress across learning iterations (compared to the model's own, arbitrary first-iteration baseline). It's important not to get carried away and assume that the recent immense progress in generative AI will be easily repeated for every other AI task. Quantum computing, too, has been advertised for over a decade now as a coming breakthrough in planning and optimization efficiency.

> We also demonstrate how Searchformer scales to larger and more complex decision making tasks like Sokoban with improved percentage of solved tasks and shortened search dynamics.

Worth mentioning that Sokoban is nowhere near a complex decision-making task, let alone state of the art in an academic or commercial planning/optimization context.

> A∗'s search dynamics are expressed as a token sequence outlining when task states are added and removed [...] during symbolic planning.

Go ahead and compare against, e.g., Quantum Prolog's readily available container-planning problem and solutions [1], which generate exactly that in the browser (a plan of symbolic actions to perform), or alternatively against a classic Graphplan or CLIPS planning-competition problem.

[1]: https://quantumprolog.sgml.io/
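
For context, the "search dynamics" the quoted paper serializes can be illustrated with a minimal sketch (my own, not the paper's implementation): an A* search over a small grid that records "add" (node pushed onto the open list) and "remove" (node popped for expansion) events, i.e. the kind of execution trace that would be tokenized:

```python
import heapq

def astar_trace(start, goal, walls, width, height):
    """A* on a grid, returning the add/remove event trace instead of a path."""
    def h(p):  # Manhattan-distance heuristic
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    trace = [("add", start)]            # ("add"/"remove", node) events
    open_heap = [(h(start), 0, start)]  # (f, g, node)
    best_g = {start: 0}
    while open_heap:
        f, g, node = heapq.heappop(open_heap)
        if g > best_g.get(node, float("inf")):
            continue                    # stale heap entry, skip
        trace.append(("remove", node))
        if node == goal:
            return trace
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if not (0 <= nxt[0] < width and 0 <= nxt[1] < height):
                continue
            if nxt in walls:
                continue
            ng = g + 1
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(open_heap, (ng + h(nxt), ng, nxt))
                trace.append(("add", nxt))
    return trace

events = astar_trace((0, 0), (2, 2), {(1, 1)}, 3, 3)
print(events[0], events[-1])  # first add is the start, final remove is the goal
```

Searchformer trains on sequences like this trace; a classical planner emits only the plan, which is why comparing trace lengths across paradigms is not apples to apples.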



