Artificial Intelligence with Erlang: The Domain of Relatives (2007) (erlangcentral.org)
96 points by jxub on April 27, 2018 | 31 comments



Just to point out, this is discussing a 3rd-party Erlang library:

https://github.com/afiniate/seresye

This is not some built-in language feature or a library bundled with the distribution. It's still pretty cool; I remember playing around with it a few years back.


It's somewhat interesting to see Erlang go back to its roots, at least the syntactical ones. Erlang borrowed most of its syntax from Prolog, after all. [1]

[1] http://erlang.org/faq/academic.html


The first Erlang implementation was even written in Prolog.


Isn't this logic programming (a la Prolog), not machine learning? I don't think this article reflects what people usually mean when they say AI, or am I missing something? I haven't really done any machine learning or logic programming, so could be totally off, but glancing at this was confusing based on the title.


> Isn't this logic programming (a la Prolog)

Kind of, though it's not exactly the Prolog flavor. In Prolog you also define such facts and rules, but then you only derive new facts "on demand". That is, if you ask whether Bob is the father of Jane, the system goes off and tries to find out whether that's the case. This is called "backward chaining" (https://en.wikipedia.org/wiki/Backward_chaining).

In contrast, the system as presented takes a set of facts and rules and automatically computes all of their consequences, before you ever get to specify what questions you want to ask. This is "forward chaining" (https://en.wikipedia.org/wiki/Forward_chaining). One similar system to the one presented here that implements forward chaining is CHR (https://en.wikipedia.org/wiki/Constraint_Handling_Rules), for which several implementations exist... including in Prolog.

Both approaches have certain advantages and disadvantages depending on the application in question.
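
To make the distinction concrete, here is a minimal hand-rolled forward chainer in Erlang over the article's "relatives" domain. This is only an illustrative sketch with made-up module and function names, not the SERESYE API: it keeps applying a single grandparent rule until no new facts appear.

    %% chaining.erl - hypothetical toy example, not the SERESYE API.
    %% Facts are tuples; the single rule derives grandparent facts
    %% from pairs of parent facts.
    -module(chaining).
    -export([forward/1, demo/0]).

    %% Rule: parent(X,Y) and parent(Y,Z) => grandparent(X,Z).
    derive(Facts) ->
        [{grandparent, X, Z} || {parent, X, Y} <- Facts,
                                {parent, Y, Z} <- Facts].

    %% Forward chaining: keep adding consequences until a fixed point.
    forward(Facts) ->
        case [F || F <- derive(Facts), not lists:member(F, Facts)] of
            []  -> Facts;
            New -> forward(Facts ++ New)
        end.

    demo() ->
        forward([{parent, bob, jane}, {parent, jane, tim}]).
        %% => [{parent,bob,jane},{parent,jane,tim},{grandparent,bob,tim}]

A backward chainer (Prolog style) would instead start from a query such as "is bob a grandparent of tim?" and search for parent facts that prove it, deriving nothing until asked.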


So more like CLIPS?


Datalog is the canonical forward-chained logic programming language.


OK, but there are also ones with fewer limitations.


AI and ML were separate fields for a long time, where AI essentially meant rule-driven behavior like logic programming (e.g. during the Fifth Generation computing project). Then, in the last 10 years, ML became really successful, and AI came to mean just that.


> where AI essentially meant rule-driven behavior like logic programming

I think the full history is a little more complex.

In the early days, there were two camps: the "connectionists" who worked on neural-net type stuff, and the symbolic reasoners who worked on hand-authored rule-based systems. Both fell under the "AI" umbrella, believed their approach was the one true one, and squabbled over funding and public perception. (Because public perception affects funding.) Remember that at the time, much AI research was government or defense funded, so politics was heavily involved.

The connectionists invented neural networks. The symbolic folks gave us Lisp, Prolog, and a lot of compiler and parser theory stuff.

The connectionists hit a wall in the sixties, and shortly after, "Perceptrons" was published. That book deliberately pointed out the limitations of neural networks at the time and effectively shut down research into them for decades. It was one of the causes of the "AI winter" of the 80s.

After that, "AI" became roughly synonymous with symbolic reasoning and rule-based expert systems because that camp had won.

Then, in the 80s, backpropagation and other learning techniques for neural nets were finally figured out, and those researchers started making progress again. Two AI winters had happened by then, so "AI" didn't have all of the positive connotations it used to (at least when it came to funding), and by that point the term referred almost solely to symbolic reasoning, so they started using "machine learning" to refer to neural-network-based AI.

In the early 2000s, big tech companies found themselves with lots of cheap computational power and tons of data on their hands, the two key ingredients to make machine learning useful. Meanwhile, symbolic reasoning and expert systems had petered out.

So "machine learning" got bigger and bigger until eventually it became the main computer intelligence approach in town. On top of that, it's gotten smarter and smarter until the public has started associating it with the old image of what "AI" means. So now you see "AI" coming back to refer to what is, essentially, the same connectionist approach it used to include in the 60s.


Connectionism didn't falter because people couldn't figure it out; it faltered because people figured out they needed 1000x more powerful computers, which took a few decades to build.


It is also worth noting that the actor model, which Erlang is based upon, was first presented at a major Artificial Intelligence conference! [1] Really, the ML=AI thing is a very recent trend.

[1] C. Hewitt, P. Bishop, and R. Steiger, “A Universal Modular ACTOR Formalism for Artificial Intelligence,” in 3rd International Joint Conference on Artificial Intelligence, San Francisco, 1973.


LISP was originally a language for doing AI; McCarthy even coined the term. It is safe to say, though, that all the hype around AI today is of the ML variety, while the symbolic variety has become a pariah.


Indeed - this is often forgotten, but just like the fashion of necktie widths and the length of hemlines, the AI pendulum swings between symbolic representation and ML. I believe it will return, eventually, and at its foundation rests LISP, or at least its core tenets.

Prof. Rodney Brooks (MIT, Robot Lab), who is famous for his subsumption architecture (SA) and for arguing against Minsky's central model of representation (in favor of radically separate distributed systems), wrote SA and nearly all of his research in LISP. In fact, Brooks wrote a book on LISP programming and developed his own efficient LISP engine. Many, many of his grad students have gone on to become leaders of the AI (not ML) world.

Curiously, I am now reading "The Elements of Artificial Intelligence - An Introduction Using LISP", which depicts a "Knowledge Engineer" stirring a cauldron labelled "Expert System". Copyright is 1987. It's a joy to see how far we've come in some respects, and how little progress we've made in others. Perhaps this represents a measure of the maturity of certain subdomains?

I suspect in another decade "we" will rediscover the wisdom of those who have developed symbolic knowledge representation.


It is hard for me to see symbolic systems making a comeback in the area of AI; instead, I think they could become more popular as direct programming abstractions. Advancement in ML comes with more data and faster hardware, and I think it is obvious now that no natural intelligence works symbolically.


The phrasing of a pendulum swinging in relation to symbolic AI was also used by Michael Jordan here (search for "pendulum"): https://medium.com/@mijordan3/artificial-intelligence-the-re...

It's clear that AIs based solely on FOL are unlikely, but it's also likely that any system needing to solve problems whose exact solutions can't be found in a number of steps polynomial in the input will require ideas similar to the core of the older approaches. There are problems where wide-ranging search can't be avoided.

Other nice perks of the hybrid approach are data efficiency, compact specification, easier composition, and the ability to grow or alter your representation and generate new inferences on the fly (inferences in the sense of learning or conditioning on new information, not what people mean by prediction). If you learn a new fact, you can go back and explicitly work out its consequences for all the other facts and the inferences generated from them, as well as generate new inferences that might not have existed before. These things are more easily done with "symbolic" representations.
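
As a rough sketch of that "learn a fact, recompute its consequences" idea, here is a toy Erlang example (hypothetical names, and the same naive grandparent rule as the forward-chaining sketch earlier in the thread): assert one new fact and re-run the chainer over the whole fact base to surface every inference that now follows.

    %% relearn.erl - hypothetical illustration of re-deriving consequences
    %% after a new fact is learned. Toy rule: parent-of-parent => grandparent.
    -module(relearn).
    -export([demo/0]).

    step(Facts) ->
        Derived = [{grandparent, X, Z} || {parent, X, Y} <- Facts,
                                          {parent, Y, Z} <- Facts],
        lists:usort(Facts ++ Derived).

    %% Iterate until no new facts are produced.
    fixpoint(Facts) ->
        case step(Facts) of
            Facts -> Facts;
            More  -> fixpoint(More)
        end.

    demo() ->
        Known = fixpoint([{parent, bob, jane}, {parent, jane, tim}]),
        %% "Learn" one new fact, then explicitly work out what follows from it.
        fixpoint([{parent, tim, ann} | Known]).
        %% The new fact also yields {grandparent, jane, ann}.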

Combine that with the strengths of DL-based approaches (more robust, learns complex mappings, can learn non-trivial computations, can exploit indirectly specified structure, can approximate probability densities if trained correctly) and you get the best of all worlds.

Here's a short presentation on the topic from a recent workshop (and if you can, it's worth checking out the other presentations too): https://www.youtube.com/watch?v=_9dsx4tyzJ8


> and I think it is obvious now that no natural intelligence works symbolically

Not even at the level of language? I don't think it's obvious yet that ML can scale to doing everything natural intelligences do, as well as they do it. It's had its successes, yes, but those are still in limited domains. There's no general-purpose ML AI yet.

The difference between AlphaGo and human Go players is that while AlphaGo is superior at the game, you can change the game in arbitrary ways that human players can easily learn and adapt their play to, but that would require programmers modifying AlphaGo's code. It can't just learn to perform any arbitrary task.

ML performs extremely well in very well defined settings, but computers have always been better than humans in those kinds of domains. That's why we invented them.


Why would human transferable intelligence have anything to do with symbolics or language? Computer languages are already far more effective at problem solving than human languages, yet hardly anyone thinks of that as AI.


"AI" still means "Artificial Intelligence", a broad field that includes machine learning. If you think otherwise, I would like to hear why, please.


All terms have mutable associations over time. For example, sure, HCI used to be a subset of PL, but very few would make that association today.


Meaning of terms is mutable and mootable :-)


Logic programming is part of what people now call GOFAI (Good Old-Fashioned AI). I think it still has a place in building intelligent systems, but these days most people equate AI with Machine Learning, unfortunately ...


I'm curious, why do you say it's unfortunate? (Don't get me wrong, I am also of that opinion, more or less, so I'm just curious about yours.)


Check out this paper by Turing Award winner Judea Pearl for fundamental limitations of current approaches to ML: https://arxiv.org/abs/1801.04016


I think that argument ignores that neural nets can learn the models that Pearl believes are important. And it's quite plausible that human reasoning works in a similar way.

Pearl's career is all about causal networks, so he's slightly biased when it comes to a survey like this.


So you are saying that Pearl's argument that current ML systems cannot reason about interventions and retrospection (as he defines them) is wrong?


>> I don't think this article reflects what people usually mean when they say AI, or am I missing something?

If by "people" you mean the lay press and people who are not AI scientists, then maybe. But AI scientists know their history, usually and can place rule-based systems firmly within AI.

>> I haven't really done any machine learning or logic programming, so could be totally off, but glancing at this was confusing based on the title.

I understand your confusion, but my intuition is that you are confused because you are only aware of very recent reports on AI, which focus entirely on machine learning.

It might help to clear up the confusion if you pick up an AI textbook, e.g. Russell and Norvig's "Artificial Intelligence: A Modern Approach".


At 9-23 years old, it's not a very "Modern" approach anymore.


Logic programming has no learning component; it's an AI technique. Learning is based on datasets or dynamic environments.


Should be marked (2007). The bulk of the content is from then, with inconsequential formatting changes in 2013: https://erlangcentral.org/w/index.php?title=Artificial_Intel...


Ok. Thanks!



