
There are several datasets of programming puzzles, of increasing difficulty.

Basically, you write a test and let the algorithm find a program that passes it. Hopefully, at some point you reach GPT-3-level performance, where it is able to imagine programs for tests it never saw.
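The "write a test, search for a program" loop can be sketched in a few lines. This is a minimal illustrative sketch, not from any of the actual puzzle datasets: it brute-force enumerates tiny arithmetic expressions over `x` and returns the first one that satisfies the given input/output tests.

```python
# Toy sketch of test-driven program search (the DSL and names here are
# illustrative, not from any specific puzzle dataset): enumerate small
# arithmetic expressions over x until one passes all the given tests.

def search(tests, max_depth=3):
    """Return the first enumerated expression that passes every test."""
    ops = ["x + 1", "x * 2", "x * x", "x - 1"]
    candidates = list(ops)  # depth-1 candidates are the primitives
    for _ in range(max_depth - 1):
        # compose deeper expressions by substituting candidates into primitives
        candidates += [c.replace("x", f"({o})") for c in ops for o in candidates]
    for expr in candidates:
        f = eval(f"lambda x: {expr}")
        if all(f(inp) == out for inp, out in tests):
            return expr
    return None

# "Test" specifying f(x) == 2*x + 1; the search recovers a matching program.
print(search([(0, 1), (3, 7), (10, 21)]))
```

Real systems replace the blind enumeration with a learned prior over programs, which is exactly where the hope of GPT-3-style generalization comes in.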




We can discuss that scenario when it happens. Currently we aren't there and it isn't obvious to me that the current algorithm can get there.

> Hopefully, at one point you reach GPT-3 level performances where it is able to imagine programs for tests it never saw.

You mean nonsense programs just like GPT-3 generates nonsense articles? GPT-3 doesn't remember the logic in its sentences, and in order to solve programming competition problems you need to translate logic from human text into code.

I agree that it might be possible to get something useful this way, but until it actually works I'll remain doubtful that it will. There is just too much coherence required that doesn't seem to be there yet, and from what I've seen the coherence problem gets exponentially worse as the problems get larger.


GPT-3 stays incredibly on-topic. You can give it logic problems it won't be able to solve, but GPT-3 is capable of things that, as a GOFAI supporter, I never thought a brute-force connectionist approach could do. I am now very cautious before saying a task can't be done by DL because it requires "understanding".

These approaches discover concepts and the relationship between them, and use that in their tasks. It is not far-fetched to say that there is some kind of understanding there.

For now we have trained it to generate fake text and basically made a master bullshitter, but I have no doubt that it can easily extract meaning and intent from text.


>I have no doubt that it can easily extract meaning and intent from text.

Have you met humans? People do not supply complete or self-consistent information about what their goals are, nor do they form objections to a program's output based on an accurate and complete model of it.

Also, they hate to communicate via text - how many times have you heard "ugh, let's discuss it on a call"?

But that does not mean you can BS them endlessly. The fact that people have no idea about the technical details doesn't mean they are going to accept failure.

I'd like to see an AI that can dominate https://en.wikipedia.org/wiki/Nomic



