
I would like to know what Prolog implementation the author used. In particular, the article claims:

1. > An initial analysis found that we would need to implement a complex depth/breadth search algorithm either in the client application or in SQL.

2. The Prolog runtime would efficiently solve this problem given rules that naively described it.

I am skeptical, as this is emphatically not my experience with Prolog. In my experience, for any problem requiring a complex search strategy, Prolog is not your friend. Most Prologs do something approximating a naive depth-first search with unification, and on complex search problems this approach rapidly blows up. This can _sometimes_ be fixed by:

1. Making careful use of cut (`!`) or other non-declarative operators to change the search order.

2. Using tabling, essentially an advanced form of memoization. But only some Prologs support tabling, and it's useful only for some sorts of problem.
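To illustrate (1), here is a toy sketch (predicate names are made up for the example) of how cut commits to a clause and prunes the alternatives Prolog would otherwise try on backtracking:

```prolog
% Without the cuts, classify(-3, C) would yield C = negative and
% then, on backtracking, also C = positive via the catch-all clause.
classify(X, negative) :- X < 0, !.
classify(X, zero)     :- X =:= 0, !.
classify(_, positive).

% ?- classify(-3, C).
% C = negative.    % cut commits; no spurious second answer
```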

However, the author mentions none of these. Are they glossing over something, or is my (probably more limited) Prolog experience uncharacteristic? Are there "smart" Prologs out there I'm unaware of?




(I am the author.)

The implementation I used was SWI-Prolog. I had tinkered with a couple of others prior to that in school (sorry, I don't remember more details), but this one was free/open source and up to the job at hand, without requiring an acquisition process on the customer's side.

About the search strategy: yes, I had to use a few cuts to get acceptable efficiency. Still, this was much easier than re-creating the same thing in SQL or imperative application code. I don't remember the particular details well; sadly, I've had little occasion since 2010 to use Prolog.


Hi Kyle, Peter Sherman here. (I posted this HN article.) You and I had an email dialogue around 2012, and of course I remember you from your Delphi days (the BDE Alternatives Guide, etc.) and the old Joel on Software forum. Anyway, the other day I was reading about Coq and its use for formal verification, which turns out to solve problems in diverse domains, and I said to myself, "Hey, this sounds a whole heck of a lot like Prolog, plus a type system." So I was attempting to compare and contrast the two systems when I thought back to your great video detailing the customer problem you had and how you used Prolog to solve it. That's why the link appears on HN now, and that's the reason for the lag time -- to answer your other question. Hey, if you've got a spare minute or two, you should shoot me an email: peter.d.sherman AT gmail DOT com. I might have an interesting business proposition for you...


On rereading, it seems like what may be happening is simply that the problem isn't very large (10-100,000 rows) and doesn't need to be solved very fast (a few minutes), and so the standard Prolog search strategy may have worked just fine. A good example of avoiding unnecessary over-engineering.

Without knowing the details of the problem it's hard to be sure, of course. Combinatorial explosion can make even searches involving a rather small number of choices take impractically long to finish. But that depends on the structure of the search problem.

Edit: The author says in the comments on the original post that he used SWI-Prolog, which I believe doesn't do anything special in its search strategy, although it does have support for CLP.


Part of searching is knowing how to reduce your search space. That's not just a Prolog problem but a general CS problem. It comes down to knowing your data and using cut; and while SWI-Prolog might not have tabling support, it's easy to memoize with assertz.
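As a sketch of the assertz trick (the classic Fibonacci example; memo_fib is a made-up cache predicate):

```prolog
:- dynamic memo_fib/2.

fib(N, F) :- memo_fib(N, F), !.    % reuse a cached result if one exists
fib(0, 0).
fib(1, 1).
fib(N, F) :-
    N > 1,
    N1 is N - 1, N2 is N - 2,
    fib(N1, F1), fib(N2, F2),
    F is F1 + F2,
    assertz(memo_fib(N, F)).       % cache the computed result
```

Without the cached clause the naive definition is exponential; with it, each value is computed once.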

I've used Prolog on production systems; there are ODBC drivers, so he could have connected directly to the DB and run his rules. Check out the SWI-Prolog libraries; they are quite extensive.
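For example, with SWI-Prolog's library(odbc) you can pull rows straight into the fact base and then query them with ordinary rules. A rough sketch — the DSN, table, and column names here are placeholders, not from the article:

```prolog
:- use_module(library(odbc)).
:- dynamic edge/2.

% Load a parent/child hierarchy from the database as edge/2 facts.
% 'my_dsn' and the hierarchy table are hypothetical.
load_edges :-
    odbc_connect('my_dsn', Conn, []),
    forall(odbc_query(Conn,
                      'SELECT parent, child FROM hierarchy',
                      row(P, C)),
           assertz(edge(P, C))),
    odbc_disconnect(Conn).
```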


>> Most Prologs do something approximating a naive depth-first search with unification, and on complex search problems this approach rapidly blows up.

Most modern Prologs use indexing on predicates' symbols and arguments to speed up queries, and the days when Prolog was a "slow" language are long gone.

As to the search strategy, it's a bog-standard depth-first search (no approximation). Unification is part of the theorem-proving algorithm, SLD-resolution. Depth-first search is used to find "resolvents", i.e. candidates for resolution (ish).

The parent post describes a backward-chaining problem, where rules must be selected depending on their arguments, most likely to perform some complex reasoning procedure. In that context, search is required to select relevant rules at each step of the reasoning process- not, say, to search a large database for all records of persons with a name starting with "m".
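A minimal example of the kind of backward chaining meant here (the predicates are illustrative, not from the article): to prove a goal, Prolog selects whichever rule head unifies with it, depth-first.

```prolog
parent(ann, bob).
parent(bob, cat).

% ancestor/2 is proved by backward chaining through the rules.
ancestor(X, Y) :- parent(X, Y).
ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y).

% ?- ancestor(ann, Who).
% Who = bob ;
% Who = cat.
```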

For this kind of use-case, Prolog's search is not only perfectly adequate, but also nearly perfectly optimised, due to long years of development as a language.

That said, "searching a database for all records starting with m" is very much like searching for resolvents, and thanks to modern practices like indexing, Prolog can do that kind of search just fine also.



