
Indeed, this is often forgotten: just like the fashion of necktie widths and the length of hemlines, the AI pendulum swings between symbolic representation and ML. I believe it will return, eventually, and at its foundation rests LISP, or at least its core tenets.

Prof. Rodney Brooks (MIT, Robot Lab), famous for his subsumption architecture (SA) and for arguing against Minsky's central model of representation (favoring instead radically separate, distributed systems), wrote SA and nearly all of his research code in LISP. In fact, Brooks wrote a book on LISP programming and developed his own efficient LISP engine. Many, many of his grad students have gone on to become leaders of the AI (not ML) world.

Curiously, I am now reading "The Elements of Artificial Intelligence - An Introduction Using LISP", which depicts a "Knowledge Engineer" stirring a cauldron labelled "Expert System". The copyright is 1987. It's a joy to see how far we've come in some respects, and how little progress we've made in others. Perhaps that gap is a measure of the maturity of certain subdomains?

I suspect in another decade "we" will rediscover the wisdom of those who have developed symbolic knowledge representation.




It is hard for me to see symbolic systems making a comeback in AI; instead, I think they could become more popular as direct programming abstractions. Advancement in ML comes from more data and faster hardware, and I think it is obvious by now that no natural intelligence works symbolically.


The pendulum-swing phrasing in relation to symbolic AI was also used by Michael Jordan here (search for "pendulum"): https://medium.com/@mijordan3/artificial-intelligence-the-re...

It's clear that AIs based solely on first-order logic (FOL) are unlikely, but it's also likely that any system that needs to solve problems whose exact solutions can't be found in time polynomial in the input size will require ideas similar to the core of the older approaches. There are problems where wide-ranging search can't be avoided.
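To make the "search can't be avoided" point concrete, here's a toy backtracking sketch for exact subset-sum, an NP-hard problem with no known polynomial-time algorithm. The function name and the example numbers are made up for illustration; the point is only that the solver must systematically branch over include/exclude choices.

```python
# Hypothetical sketch: exhaustive backtracking search for exact subset-sum.
# With no known polynomial-time algorithm, some form of systematic search
# over the include/skip branches is hard to avoid in the worst case.

def subset_sum(nums, target, chosen=()):
    """Return a list of numbers from `nums` summing to `target`, or None."""
    if target == 0:
        return list(chosen)
    if not nums or target < 0:
        return None  # dead branch: backtrack
    head, rest = nums[0], nums[1:]
    # branch: either include the first number, or skip it
    return (subset_sum(rest, target - head, chosen + (head,))
            or subset_sum(rest, target, chosen))

print(subset_sum([3, 34, 4, 12, 5, 2], 9))  # → [3, 4, 2]
```

Learned heuristics can prune or order these branches, but they don't remove the underlying search tree, which is one reason the older, search-centric ideas keep resurfacing.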

Other nice perks of the hybrid approach are data efficiency, compact specification, easier composition, and the ability to grow or alter your representation and generate new inferences on the fly (by inference I mean the result of learning or conditioning on new information, not what people mean when they say prediction). If you learn a new fact, you can go back and explicitly work out its consequences for all the other facts and the inferences derived from them, as well as generate new inferences that might not have existed before. These things are more easily done with "symbolic" representations.
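The "learn a new fact, regenerate its consequences" idea is essentially forward chaining. A minimal sketch, where the predicate strings and rule encoding are entirely made up for illustration (this is not any particular library's API):

```python
# Toy forward-chaining sketch: facts are strings, rules pair a frozenset of
# premises with a single conclusion. Adding one new fact and re-running the
# loop explicitly propagates all of its downstream consequences.

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    (frozenset({"bird(tweety)"}), "has_feathers(tweety)"),
    (frozenset({"bird(tweety)", "not_penguin(tweety)"}), "flies(tweety)"),
]

# Once "not_penguin(tweety)" is learned, "flies(tweety)" follows explicitly.
derived = forward_chain({"bird(tweety)", "not_penguin(tweety)"}, rules)
```

Because the facts and rules are explicit data, you can inspect exactly which conclusions a new fact enabled, something a learned weight update doesn't give you directly.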

Combine that with the strengths of DL-based approaches (more robust, learns complex mappings, can learn non-trivial computations, can exploit indirectly specified structure, can approximate probability densities if trained correctly) and you get the best of all worlds.

Here's a short presentation on the topic from a recent workshop (and if you can, it's worth checking out the other presentations too): https://www.youtube.com/watch?v=_9dsx4tyzJ8


> and I think it is obvious now that no natural intelligence works symbolically

Not even at the level of language? I don't think it's obvious yet that ML can scale to do everything as well as natural intelligences do. It's had its successes, yes, but those are still in limited domains. There's no general-purpose ML AI yet.

The difference between AlphaGo and human Go players is that while AlphaGo is superior at the game, you can change the game in arbitrary ways that human players can easily learn and adapt their play to, but that would require programmers modifying AlphaGo's code. It can't just learn to perform an arbitrary new task.

ML performs extremely well in very well-defined settings, but computers have always been better than humans in those kinds of domains. That's why we invented them.


Why would humans' transferable intelligence have anything to do with symbols or language? Computer languages are already far more effective at problem solving than human languages, and hardly anyone even thinks of that as AI.



