
The difference is that DreamCoder uses a hand-crafted PCFG [1] to generate programs, rather than a large language model. So the two approaches differ in how candidate programs are generated.

________

[1] The structure of the PCFG is hand-crafted, but the weights are trained during learning, in a cycle that alternates with neural net training. It's pretty cool actually, though a bit over-engineered if you ask me.
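
To make that concrete, here's roughly what sampling from a weighted grammar like that looks like. This is a toy sketch with a made-up grammar, nothing like DreamCoder's actual DSL: the rule structure is fixed by hand, and the weights are the part that would be learned.

    import random

    # Toy PCFG: each nonterminal maps to (weight, expansion) pairs.
    # The rules are hand-crafted; only the weights would be trained.
    GRAMMAR = {
        "E": [(0.4, ["N"]),
              (0.3, ["E", "+", "E"]),
              (0.3, ["E", "*", "E"])],
        "N": [(0.5, ["0"]), (0.5, ["1"])],
    }

    def sample(symbol="E", depth=0, max_depth=5):
        if symbol not in GRAMMAR:          # terminal symbol
            return symbol
        rules = GRAMMAR[symbol]
        if depth >= max_depth:             # force termination at depth limit
            rules = [r for r in rules if len(r[1]) == 1] or rules
        weights, expansions = zip(*rules)
        expansion = random.choices(expansions, weights=weights)[0]
        return " ".join(sample(s, depth + 1, max_depth) for s in expansion)

    print(sample())   # e.g. "1 + 0 * 1"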




Right, I think it’s a bit crazy not to use a grammar as part of the generation process when you have one. My guess is that constraining LLM generation with a grammar would make it way more efficient. But that’s more complicated than just throwing GPT-3 at all of GitHub.
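
The masking idea is simple enough to sketch. Here's a toy version in Python: the lm_logits function is a made-up stand-in for a real model's next-token scores, and the "grammar" is just an alternation rule for arithmetic expressions.

    import math, random

    VOCAB = ["0", "1", "+", "*", "<eos>"]

    def lm_logits(prefix):
        # Stand-in for a real LM's next-token scores (hypothetical).
        return [random.gauss(0, 1) for _ in VOCAB]

    def legal_next(prefix):
        # Toy grammar: operands and operators must alternate.
        if not prefix or prefix[-1] in ("+", "*"):
            return {"0", "1"}
        return {"+", "*", "<eos>"}

    def constrained_sample(max_len=9):
        prefix = []
        while len(prefix) < max_len:
            legal = legal_next(prefix)
            # Zero out the probability of grammar-violating tokens.
            weights = [math.exp(s) if t in legal else 0.0
                       for t, s in zip(VOCAB, lm_logits(prefix))]
            tok = random.choices(VOCAB, weights=weights)[0]
            if tok == "<eos>":
                break
            prefix.append(tok)
        if prefix and prefix[-1] in ("+", "*"):
            prefix.pop()   # don't end mid-expression if we hit max_len
        return " ".join(prefix)

    print(constrained_sample())   # always a well-formed expression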

Also, my understanding is that DreamCoder does some fancy PL theory stuff to factorize blocks of code with identical behavior into functions. Honestly I think that’s the key advance in the paper, more than the wake-sleep algorithm they focus on.
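
(For anyone wondering what that factoring looks like: DreamCoder's actual refactoring machinery is based on version spaces, which I won't attempt here, but the basic move — spotting a repeated subexpression and lifting it out as a reusable function — can be sketched in a few lines. The tuple representation below is my own toy, not theirs.)

    from collections import Counter

    # Expressions as nested tuples: ("+", x, y), ("*", x, y), or a leaf.
    def subtrees(expr):
        yield expr
        if isinstance(expr, tuple):
            for child in expr[1:]:
                yield from subtrees(child)

    def extract_common(expr):
        counts = Counter(t for t in subtrees(expr) if isinstance(t, tuple))
        repeated = [t for t, n in counts.items() if n > 1]
        if not repeated:
            return expr, None
        # Lift the largest repeated subexpression into a named function f.
        target = max(repeated, key=lambda t: len(str(t)))
        def replace(e):
            if e == target:
                return ("call", "f")
            if isinstance(e, tuple):
                return (e[0],) + tuple(replace(c) for c in e[1:])
            return e
        return replace(expr), ("def", "f", target)

    e = ("+", ("*", "x", "x"), ("*", "x", "x"))
    body, definition = extract_common(e)
    print(body)        # ('+', ('call', 'f'), ('call', 'f'))
    print(definition)  # ('def', 'f', ('*', 'x', 'x'))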

Anyways the point was more that self-supervised learning is quite applicable to learning to program. I think the downside is that the model learns its own weird, non-idiomatic conventions, rather than copying GitHub.


I guess you're right. The sleep-wake cycle is like a kind of roundabout and overcomplicated EM process. I've read the paper carefully but theirs is a complicated approach and I'm not sure what its contributions are exactly. I guess I should read it again.

Yes, it's possible to apply self-supervised learning to program synthesis, because it's possible to generate programs. It's possible to generate _infinite_ sets of programs. The problem is that if you make a generator with Universal Turing Machine expressivity, you're left with an intractable search over an infinite search space. And if you don't generate an infinite set of programs, then you're left with an incomplete search over a space that may not include your target program. In the latter case you need to make sure that your generator can generate the programs you're looking for, which is possible, but it limits the approach to only generating certain kinds of programs. In the end, the easiest thing is to create a generator for programs that you already know how to write, and no others. How useful that is is an open question. So far, to my knowledge, no artificial system has ever made an algorithmic contribution, in the sense of coming up with a new algorithm for a problem for which we don't have good algorithms, or coming up with an algorithm for a problem we can't solve at all.
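
To make the tractability point concrete, this is the naive generate-and-test loop in miniature (my own toy DSL over one integer input, not any published system's): it enumerates programs in order of size and tests each against the I/O examples, and the number of candidates blows up combinatorially with size.

    from itertools import count

    # Tiny DSL over one integer input x: leaves and binary operators.
    LEAVES = ["x", 1, 2]
    OPS = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}

    def programs(size):
        # All expression trees with exactly `size` nodes.
        if size == 1:
            yield from LEAVES
        else:
            for lsize in range(1, size - 1):
                for op in OPS:
                    for left in programs(lsize):
                        for right in programs(size - 1 - lsize):
                            yield (op, left, right)

    def run(prog, x):
        if prog == "x":
            return x
        if isinstance(prog, int):
            return prog
        op, left, right = prog
        return OPS[op](run(left, x), run(right, x))

    def synthesise(examples):
        for size in count(1):       # unbounded search: may never halt
            for prog in programs(size):
                if all(run(prog, x) == y for x, y in examples):
                    return prog

    # Target behaviour: f(x) = 2*x + 1
    print(synthesise([(0, 1), (3, 7), (5, 11)]))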

My perception is influenced by my studies, of course, but to me a more promising approach than the generate-and-test approach exemplified by DreamCoder, AlphaCode etc. is Inductive Programming, which is to say, program synthesis from input-output examples only, without examples of _programs_ (the AlphaCode paper says that is an easier setting, but I strongly disagree). Instead of generating a set of candidate programs and trying to find a program that agrees with the I/O examples, you have an inference procedure that generates _only_ the programs that agree with the I/O examples. In that case you don't need to hand-craft or learn a generator. But you do need to impose an inductive bias on the inference procedure that restricts the hypothesis language, i.e. the form of the programs that can be learned. And then you're back to worrying about infinite vs. incomplete search spaces. But there may be ways around that, ways not available to purely search-based systems.

Anyway, program synthesis is a tough nut to crack and I don't think that language models can do the job just like that. The work described in the article above, despite all the fanfare about "reasoning" and "critical thinking", is only preliminary and its results are not all that impressive. At least not yet. We shall see. After all, DeepMind has deep resources and they may yet surprise me.


Could you provide some papers/links related to Inductive Logic Programming? I'd like to look at other techniques in this space.


My pleasure!

- First, some more recent work, mostly overviews.

1. The following is the most recent overview of the field I'm aware of:

Inductive logic programming at 30 (Cropper et al, 2020)

https://www.doc.ic.ac.uk/~shm/Papers/ilp30.pdf

2. And a slightly shorter version of the same paper that summarises new trends:

Turning 30: New Ideas in Inductive Logic Programming (Cropper et al, 2020)

https://www.ijcai.org/Proceedings/2020/0673.pdf

3. Here's a short introduction to the relatively new ILP direction of learning Answer Set Programming:

Inductive Logic Programming in Answer Set Programming (Corapi et al, 2011)

https://link.springer.com/chapter/10.1007/978-3-642-31951-8_...

4. This is an overview of Meta-Interpretive Learning (MIL), a new approach to ILP that overcomes many difficulties of earlier approaches (Full disclosure: my own work is on MIL, though not the article linked):

Meta-Interpretive Learning: achievements and challenges (Stephen Muggleton, 2017)

https://www.doc.ic.ac.uk/~shm/Papers/rulemlabs.pdf

5. And this is a short version of a paper on δILP, a neural-net-based ILP system:

Learning Explanatory Rules from Noisy Data (Evans and Grefenstette, 2018)

https://www.ijcai.org/Proceedings/2018/0792.pdf

- Next, some earlier work that is still relevant:

6. This is the inaugural paper of the field, the one that first named it (a little heavy reading, though):

Inductive Logic Programming (Stephen Muggleton, 1990)

https://www.doc.ic.ac.uk/~shm/Papers/ilp.pdf

7. Here's an early paper on predicate invention, an important technique in ILP (only recently fully realised via MIL):

Predicate Invention in ILP - an Overview (Irene Stahl, 1993)

https://link.springer.com/chapter/10.1007%2F3-540-56602-3_14...

8. And an early overview of learning recursion (and performing predicate invention) that also lists several early ILP systems:

Inductive synthesis of recursive logic programs: achievements and prospects (Flener and Yilmaz, 1999)

https://core.ac.uk/download/pdf/82810434.pdf

That should be enough to get you started. I recommend reading the articles in the order I've listed them. I tried to give links to documents that I know can be read for free.

Unfortunately most of the material on ILP is either in scholarly articles, or, where there are textbooks, they tend to be older. That sounds bad, but there has been much new work recently with several new approaches.

Let me know if you're looking for more specific information. See my signature for contact details; I'm happy to answer emails about ILP :)


It should be emphasised that inductive programming is not tied to logic programming, and works for every other programming paradigm as well, e.g. functional programming [1, 2]. We could also do IP for imperative programming, although, as far as I am aware, nobody has done this.

[1] Feser et al's Lambda-Learner https://www.cs.utexas.edu/~swarat/pubs/pldi15.pdf

[2] S. Katayama's MagicHaskeller http://nautilus.cs.miyazaki-u.ac.jp/~skata/MagicHaskeller.ht...


That's absolutely true! But the OP asked about ILP in particular.

To be fair, logic and functional programming languages do have some advantages as target languages for Inductive Programming compared to imperative languages, in that they have very simple syntax. For example, Prolog doesn't even have variable declarations. That's very convenient, because the learning system only needs to learn the logic of the program, not the syntax of the language as well. It's also much simpler in such languages to define language bias, program schemata and other constraints on the form of hypotheses, or even to order programs by generality. For instance, Prolog has unification built in, and unification is used in ILP to order programs by generality (by testing for subsumption). All this machinery would have to be implemented from scratch for an imperative language.
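
To illustrate that last point: here's what a brute-force theta-subsumption test looks like when you don't get unification for free. This is a toy sketch of my own; I'm treating a clause as a flat set of literals and ignoring the head/body distinction for brevity, which a real system can't do.

    from itertools import product

    def is_var(t):
        # Prolog convention: names starting with uppercase are variables.
        return isinstance(t, str) and t[0].isupper()

    def substitute(lit, theta):
        pred, *args = lit
        return (pred, *[theta.get(a, a) for a in args])

    def subsumes(c, d):
        # Clause c theta-subsumes clause d if some substitution theta
        # maps every literal of c onto a literal of d.
        vars_c = sorted({a for lit in c for a in lit[1:] if is_var(a)})
        terms_d = {a for lit in d for a in lit[1:]}
        for binding in product(terms_d, repeat=len(vars_c)):
            theta = dict(zip(vars_c, binding))
            if all(substitute(lit, theta) in d for lit in c):
                return True
        return False

    # p(X) :- q(X)  subsumes  p(a) :- q(a), r(a)
    c = {("p", "X"), ("q", "X")}
    d = {("p", "a"), ("q", "a"), ("r", "a")}
    print(subsumes(c, d))   # True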

Although logic and functional programming languages are probably given more weight in IP for historical reasons: Lisp and Prolog were, for a long time, "the languages of AI".

I'm trying to remember... I think there's been some IP work on imperative languages, maybe even Python. I'll need to check my notes.


> ILP to order programs by generality

Sorry, naive question: does ILP test candidate programs by increasing or decreasing generality?


Not naive at all! One common categorisation of ILP approaches is by whether they search for programs from the most to the least general (least general is more specific), or from the least to the most general. Some approaches do a little bit of both. Approaches that search from general to specific are known as "top-down" and approaches that search from specific to general are known as "bottom-up".

The "top" and "bottom" terms refer to a lattice of generality between programs, where generality is typically measured by subsumption or entailment etc. Subsumption in particular is a syntactic relation (that implies a semantic one, entailment) so "searching" a space of logic programs ordered by subsumption means in practice that the space of programs is constructed by generalising or specialising some starting program by means of syntactic transformation according to subsumption (e.g. a first order clause can be specialised by adding literals to it: P(x):- Q(x) subsumes P(x):- Q(x), R(x). The simplest intuition is to remember that by adding more conditions to a rule we make it harder to satisfy).

A more general program entails more logical atoms, and ILP algorithms are typically trained on both positive and negative example atoms of a target program. So top-down approaches begin with an over-general program that entails all the positive examples and some or all of the negative examples, and specialise that program until it entails only the positive examples. Bottom-up approaches start with an over-specific program (one that entails only a few of the positive examples) and generalise it until it entails all the positive examples and none of the negative ones.
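
A bare-bones toy of the top-down case (the background knowledge and candidate literals are made up, and there's no backtracking, so this is nothing like a real refinement operator): start from the most general clause, flies(X), and add body literals while any negative example is still covered.

    # Background knowledge: which atoms are true.
    BK = {("bird", "tweety"), ("bird", "polly"), ("penguin", "polly")}

    # Positive/negative examples of the target predicate flies/1.
    POS = {("flies", "tweety")}
    NEG = {("flies", "polly")}

    CANDIDATE_LITERALS = ["bird", "penguin", "not_penguin"]

    def covers(body, example):
        # Does the clause "flies(X) :- body" entail this example?
        x = example[1]
        def holds(pred):
            if pred.startswith("not_"):
                return (pred[4:], x) not in BK
            return (pred, x) in BK
        return all(holds(p) for p in body)

    def top_down():
        body = []                   # most general clause: flies(X).
        while any(covers(body, e) for e in NEG):
            # Specialise: add the literal that rejects the most negatives
            # while keeping every positive example covered.
            best = max(CANDIDATE_LITERALS,
                       key=lambda p: sum(not covers(body + [p], e) for e in NEG)
                                     if all(covers(body + [p], e) for e in POS)
                                     else -1)
            body.append(best)
        return body

    print(top_down())   # ['not_penguin'], i.e. flies(X) :- not(penguin(X)).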

The mathematics of generalisation are at the core of ILP theory and practice. That's what sets ILP apart from statistical machine learning, which is based on the mathematics of optimisation.


Thank you so much!



