
I think this is generally a good answer, but keep in mind I said AGI "in text". My forecast is that within 3 years you will be able to give arbitrary text commands and get textual output equivalent to solving "clean my house, take care of my kids, ..." style problems.

I would also contend that reasoning is happening, and that zero-shot performance demonstrates this: specifically, reasoning about the intent of the prompt. The fact that you get this simply by building a general-purpose text model is a surprise to me.

Something I haven't seen yet is a model simulating the mind of the questioner, the way humans do, over time (minutes, days, years).

In 3 years, I'll ping you :) I've already made a calendar reminder.




Pattern recognition and matching isn't the same thing as reasoning. Zero-shot demonstrates reasoning about as much as solving the quadratic equation for a new set of coefficients does; it's simply the ability to create new decision boundaries using the same classifying power and methodology. True AGI isn't bound to a medium; no one would say Helen Keller wasn't intelligent, for example.
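The quadratic analogy can be made concrete: the formula is a fixed procedure, and feeding it new coefficients exercises no new reasoning, just the same method on new inputs. A minimal sketch (the function name and structure are my own illustration):

```python
import math

def solve_quadratic(a, b, c):
    """Apply the quadratic formula to arbitrary coefficients.

    The procedure never changes; only the inputs do. That is the
    sense in which handling "new variables" is not new reasoning.
    """
    disc = b * b - 4 * a * c
    if disc < 0:
        return ()  # no real roots
    root = math.sqrt(disc)
    return ((-b + root) / (2 * a), (-b - root) / (2 * a))

# Different "new" inputs, identical method:
print(solve_quadratic(1, -3, 2))   # roots of x^2 - 3x + 2 -> (2.0, 1.0)
print(solve_quadratic(1, 0, -4))   # roots of x^2 - 4      -> (2.0, -2.0)
```

No matter which coefficients you pass in, the decision procedure is the same; by analogy, a zero-shot model may be reusing one learned procedure rather than deriving a new one.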

I look forward to this ping :)


What exactly is the difference between pattern matching and reasoning?


I think pattern matching can be interpreted as a form of reasoning, but it is distinct from logical reasoning, where you draw implications from assumptions. GPT seems really bad at this kind of thing: it often outputs text with inconsistencies, and in the GPT-3 paper it performed poorly on tasks like Recognizing Textual Entailment, which mainly involve this kind of reasoning.



