
This paper introduces the Bayesian program learning (BPL) framework, capable of learning a large class of visual concepts from just a single example and generalizing in ways that are mostly indistinguishable from people.
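For intuition, the core of BPL is a hierarchical generative model: sample a character "type" (a program of stroke primitives), then sample noisy "tokens" of that type with motor variability. Below is a toy sketch of that hierarchy, not the paper's actual model; the primitive names and noise model are hypothetical stand-ins for the learned spline primitives and motor program in the real system.

```python
import random

# Hypothetical stroke primitives (the real model learns spline-based ones).
PRIMITIVES = ["line", "arc", "curve"]

def sample_character_type(rng):
    """Sample a character 'type': a short program of stroke primitives."""
    n_strokes = rng.randint(1, 3)
    return [rng.choice(PRIMITIVES) for _ in range(n_strokes)]

def sample_token(char_type, rng):
    """Sample a 'token': the same strokes plus per-stroke motor noise."""
    return [(stroke, rng.gauss(0.0, 0.1)) for stroke in char_type]

rng = random.Random(0)
char = sample_character_type(rng)          # one concept ("type")
tokens = [sample_token(char, rng) for _ in range(3)]  # varied renditions
```

The point of the two levels is that all tokens share the type's stroke identities while differing in the noise, which is what lets one example constrain an entire class of renditions.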

This is one of the most exciting and readable papers I've come across.

Does anyone know if the code is available anywhere? Can we reproduce their results? I can think of a dozen applications for such an ability.




The link to the code is in the paper as well. After reading the media hype around the paper, I decided to read the paper itself and, like you, was surprised to find it very readable. To really understand the mechanism, though, I think I'll have to read the code; without it, this just looks like a good parlor trick. (The permutation in stroke output is a fun but minor piece of the code, yet a big part of the media breathlessness.)

There should be a flurry of activity as practitioners take these concepts and start applying them to other fields, such as static code analysis. Much of the magic seems to be in the choice of atoms that you feed into the algorithm.


The take-home here is actually that by modelling the physical process of writing you get a more accurate model. It requires fewer examples partly because of pre-training, and partly because of physics hard-coded into the model structure. It's not just the atoms that you feed in; the entire algorithm is designed around drawing glyphs.
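One-shot classification in this setting works by fitting the generative model to each training example and asking which fitted "program" best explains a new item. Here is a deliberately toy sketch of that idea, assuming stroke sequences as inputs; the scoring function is a hypothetical stand-in for the paper's actual image likelihood and MCMC-based parsing.

```python
def fit_program(example):
    """Toy 'parse': treat the example's stroke list as its program."""
    return list(example)

def score(program, observation):
    """Toy fit score: matched strokes minus a length-mismatch penalty
    (higher = better fit). The real model computes an image likelihood."""
    matches = sum(a == b for a, b in zip(program, observation))
    return matches - abs(len(program) - len(observation))

def one_shot_classify(train_examples, test_item):
    """Pick the class whose single example's program best explains the item."""
    programs = {label: fit_program(ex) for label, ex in train_examples.items()}
    return max(programs, key=lambda label: score(programs[label], test_item))

train = {"alpha": ["arc", "line"], "beta": ["line", "line", "arc"]}
label = one_shot_classify(train, ["arc", "line"])  # best fit: "alpha"
```

The design choice this illustrates is the one made above: the inductive bias lives in how programs are built from stroke atoms, so classification reduces to comparing explanations rather than comparing raw pixels.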

It's not entirely clear how you'd apply these concepts to new problems. Certainly in many cases you could come up with more detailed models of the processes involved. But in others, like text understanding, it's not at all clear how you'd make models more sophisticated.


Sorry, I should have included the link in my post, since I had already searched for and found it.

https://github.com/brendenlake/BPL.git




