Learn X in Y minutes Where X=Prolog (learnxinyminutes.com)
130 points by rbanffy on Feb 4, 2020 | 53 comments



This is horrible! Prolog is nothing like the language described in this "tutorial". The author has completely made up a bunch of programming language semantics that have nothing to do with Prolog.

Just for the most blatant example of made-up semantics: unification is not "essentially a combination of assignment and equality".

Unification is not assignment: Prolog predicates are immutable. You can't assign to an immutable data structure.

Unification is not equality, either. For example, in Prolog 2 = 1+1 fails because "2" does not unify with "1+1" (and not because "unification can't do arithmetic"; it's because unification is not equality).
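
A quick sketch at a Prolog top level makes the distinction concrete (the behaviour is standard, though exact output formatting varies between systems):

  ?- X = 1+1.                 % unification binds X to the term 1+1, not to 2
  X = 1+1.

  ?- 2 = 1+1.                 % the term 2 does not unify with the term 1+1
  false.

  ?- foo(X, b) = foo(a, Y).   % unification works structurally on arbitrary terms
  X = a,
  Y = b.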

Unification is an algorithm to decide whether two Horn clauses can be resolved with each other. Resolution is the inference rule used in Prolog's automated theorem proving. This is it:

   ¬p ∨ q, p
  ----------
      q
The horizontal line is read as "entails". p and q are predicates. The rule says that if we can show that p is false or q is true and that p is true, then we can show that q is true. If ¬p and p unify, they can be resolved and q can be shown to be true.

Resolution is used in Prolog to prove a query by refutation. Queries are the goals that the author calls "commands". These are not commands: they are predicates, just like "facts" and "rules". Prolog tries to disprove a goal by finding a clause with a head literal that unifies with the query (only "facts" and "rules" have heads, whereas goals only have bodies). If it can, it then recursively proves each literal in the body of the clause, by finding a clause with a head that unifies with the literal.
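
To make the terminology concrete, here is a made-up three-clause program and a query; to answer the query, Prolog resolves the goal against the head of the grandparent/2 rule, then recursively proves the two parent/2 literals in its body against the facts:

  parent(tom, bob).            % facts: clauses with a head and an empty body
  parent(bob, ann).

  grandparent(X, Y) :-         % a rule: a head and a body
      parent(X, Z),
      parent(Z, Y).

  ?- grandparent(tom, Who).    % a goal ("query"): a clause with only a body
  Who = ann.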

Unfortunately, none of this will make sense if you try to understand Prolog in terms of assignment and equality tests, and so on. Reading this article and trying to "learn Prolog in Y minutes" will only serve to confuse any further attempts to understand and learn the language.


As a long-time user of Prolog I didn't find the explanation so horrible. Unification is indeed like variable assignment in the constraint-satisfaction sense: bindings are not stateful. As for equality, I believe the author meant == rather than =, or more accurately something like Python's 'is' operator. But maybe regular expression matching would be a better analog for Prolog's theorem proving.

Your example 2 = 1+1 is (I'd claim) a weird case in Prolog. Unless you want to code up the Peano postulates you need some way to do numerical calculations, and "2 is 1+1" does indeed work in the body of a clause.
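
(For reference, a minimal sketch of is/2 doing the arithmetic:)

  ?- X is 1+1.     % is/2 evaluates its right-hand side and unifies the result
  X = 2.

  ?- 2 is 1+1.
  true.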

Your mathematical explanation of Prolog's theorem proving is precise and correct but almost impenetrable to the average software engineer trying to "get" Prolog. Such explanations only make sense once someone already mostly understands what's going on, and hence aren't useful for beginners. As a first approximation I thought the OP's work wasn't bad.


The simplest way to test equality between two arithmetic expressions in Prolog in a syntax that anyone familiar with imperative programming languages will be comfortable with is to use =:=/2, like this:

  ?- 2 =:= 1+1.
  true.
Prolog has a few different predicates to test (dis-) equality, (non-) equivalence, (non-) unification, subsumption and so forth, and most of them (but not all) use the "=" symbol in some combination, so it's easy for a novice (or an advanced user) to get twisted into knots trying to figure out what exactly each should be used for. My strong intuition is that stating that unification is "like testing equality" will only help to confuse matters further.
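
A rough sketch of a few of them (again, the behaviour is standard but top-level output differs between systems):

  ?- 1+1 == 1+1.    % ==/2: structural identity of terms, no unification
  true.

  ?- 2 == 1+1.      % 2 and 1+1 are different terms
  false.

  ?- 2 \= 1+1.      % \=/2: "does not unify"
  true.

  ?- 2 =:= 1+1.     % =:=/2: arithmetic equality, both sides evaluated
  true.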

The same goes for "simple" explanations that completely miss the point. Logic programming and Prolog are not simple subjects. That's not by design: theorem proving is bloody hard by nature, and making a first-order logic language run on a digital computer is harder still. First-order logic is undecidable in the general case, and we must jump through hoops constantly to stick to the special cases where a theorem can be proved automatically and the proof procedure is sound and complete, or at least sound. Without an understanding of this fundamental difficulty in automated theorem proving, Prolog does not make sense in the long run. For example: why those weird Horn clauses? Why not arbitrary clauses? Why depth-first search? Why unification? And so on.

It is certainly possible to learn Prolog incrementally, without first having to digest, I don't know, the proof of the Subsumption Theorem. But every small step along the way must be taken on firm ground, else the student loses her footing and falls down bottomless pits of despair. I know this from first-hand experience, and there is even some scholarship on the trouble that university students get into with Prolog because they form the wrong mental models for it. Let me see if I can locate it online.

Yep, here it is:

What's wrong? Understanding beginners' problems with Prolog

https://link.springer.com/article/10.1007/BF00116441

I'm quoting a bit of the abstract:

  The problems that underlie these and other errors seem to be (a) the
  complexity of the Prolog primitives (unification and backtracking) and (b)
  the misfit between students' naive solutions to a problem and the constructs
  that are available in Prolog (e.g. iterative solutions do not map easily to
  recursive programs). This suggests that learning Prolog could be helped by
  (1) coherent and detailed instruction about how Prolog works, (2) emphasis
  on finding recursive solutions that do not rely on primitives such as
  assignment and (3) instruction in programming techniques that allow students
  to implement procedural solutions.


> This is horrible!

Normally moderators would downweight a submission when this is the consensus from knowledgeable commenters. But this thread is so good that I want to leave it up.

Edit: Should we change the link to https://www.j-paine.org/dobbs/prolog_lightbulb.html (via the inimitable DonHopkins at https://news.ycombinator.com/item?id=22242854)? It hasn't been discussed on HN before.


Well, that's certainly a better, beginner-friendly explanation that doesn't try to make things up. It does what I say in my second comment above: take small steps on firm ground. That's the way to do it, for sure.

I have no idea if you should change the link to point to it. What's best for HN?

Edit: I just realised who the author is of the lightbulb joke page: Jocelyn Ireson-Paine. He's the spreadsheet guy! Check out his stuff on making spreadsheets safer... with category theory (and Prolog):

http://www.j-paine.org/old_index.html#making_spreadsheets_sa...


Why Horn clauses, specifically? By the way, are you aware of resources that explain Prolog the way you did (in terms of logic)? The ones that I do know either explain how to _implement_ Prolog (e.g. the WAM), or provide a "high-level" explanation of unification, missing the details that you've just pointed out. I'd love an authoritative reference!

Another question: do you know of an easy way to _implement_ Prolog? The problem with the WAM is that it is incredibly hard to debug: I had begun implementing it (https://github.com/bollu/warren) but then gave up after getting bogged down in large heap data structures that weren't doing what I wanted; debugging them manually got far too painful :(


>> Why Horn clauses, specifically?

That's a very good question. My (sloppy) answer is that satisfiability of sets of clauses is undecidable for arbitrary clausal languages, but a variant of the resolution rule, SLDNF-Resolution, is refutation-complete for sets of Horn clauses and sound (and complete) for sets of definite clauses. Definite clauses are a subset of Horn clauses having exactly one positive literal (i.e. facts and rules in Prolog). Horn clauses are clauses with at most one positive literal.

Given a set of definite clauses and a Horn goal clause, it is possible to prove by refutation that the goal is true in the context of the program, in a finite number of steps. This is how the Prolog interactive mode works: the user enters a goal clause as a "query" and Prolog tries to refute the query by SLDNF-resolution.

SLDNF-resolution is "SLD resolution with Negation as (finite) Failure". "SLD resolution" stands for Selective, Linear Definite resolution (it's a long story). Negation as (finite) failure (NAF) accepts that a statement is true if it cannot be proven false under a closed-world assumption (that all we know to be true is all that we can assume is true). NAF means that Prolog programs can include clauses with negated literals, which is very handy because one often wants to say things like "X is an Evil Demon Bear from Mars if X has green fur and purple-glowing eyes _unless_ X plays the hurdy-gurdy".
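
A minimal sketch of that last rule using negation as failure via \+/1 (the predicates are obviously made up):

  has_green_fur(grum).              has_green_fur(zig).
  has_purple_glowing_eyes(grum).    has_purple_glowing_eyes(zig).
  plays_hurdy_gurdy(zig).

  evil_demon_bear_from_mars(X) :-
      has_green_fur(X),
      has_purple_glowing_eyes(X),
      \+ plays_hurdy_gurdy(X).      % negation as failure: true if the goal cannot be proven

  ?- evil_demon_bear_from_mars(grum).
  true.

  ?- evil_demon_bear_from_mars(zig).
  false.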

These are some of the basic assumptions of Prolog: SLDNF-resolution, definite clauses and negation-as-failure under a closed-world assumption. The goal is to have a package that, under some conditions, can prove a goal given a logic program. Given that, it's possible to write a logic theorem and have it execute like a program, by proving a goal.

In terms of learning resources: if you have a bit of a logic background, I think that "Programming in Prolog" by Clocksin and Mellish does a good job of going over both the declarative and the imperative reading of Prolog (but it's been more than 10 years since I first read it, so I might be wrong).

I haven't even tried implementing Prolog. Too much hard work and it's already been done by others :) So no idea how to do it. But it's probably the easiest way to get a good set of deep insights in the language and I should probably plonk my arse down and do it, myself.


> do you know of an easy way to _implement_ prolog?

There are some examples on the Web; I'm not sure about quality, but here are a couple:

1) https://github.com/norvig/paip-lisp/tree/master/lisp

Norvig implements (some pieces of?) Prolog in Lisp in his book, Paradigms of Artificial Intelligence Programming (the book text is freely available)

2) https://github.com/prolog8/awkprolog

An implementation of Prolog in AWK.


As a hobby prolog user, I understand what you're saying, but I also think the tutorial does a good job giving a novice a decent taste of how prolog "feels", pragmatically using some terminology of the average programmer.

Sure, one may not be able to write full prolog programs after these Y minutes. But much more importantly, hopefully they are sparked with some interest in learning the language further!


Maybe he's a Moonie. They have a much different definition of Unification. Moon makes all men equal in 'divinity' to Jesus, then assigns them wives to marry.

https://en.wikipedia.org/wiki/Unification_movement


It would be great if someone with a better understanding of Prolog could submit a PR: https://github.com/adambard/learnxinyminutes-docs/edit/maste...


Does anyone know if there's a Datalog-like embeddable language that is suitable for simple integer constraints (e.g. the sum of a, b, c, and d must be e; e = 10, b > 2)?

For prolog, there's apparently https://github.com/triska/clpz/

But I really just want some simple constraint-oriented form validation (you have 100 points; feel free to buy x's for 1 point each and y's for 5 points each; you don't have to spend all your points), with feedback on how many points are left as more x's and y's are bought. And for more variables, where there are clear rules that can be modeled naturally as constraints rather than with state and on-change callbacks.

Ed: specifically for forms, there's https://developer.mozilla.org/en-US/docs/Web/Guide/HTML/HTML... - but it feels like it gets messy with interdependent fields.


You might like a Satisfiability Modulo Theories (SMT) solver such as Z3.

https://github.com/Z3Prover/z3

It has bindings for other languages and is able to make some rather sophisticated arithmetical and logical deductions in an often pretty efficient way. I've used it successfully for parts of puzzle competitions and CTFs (and I'm really only beginning to scratch the surface of what it can do).

You can read more about the options in that area at

https://en.wikipedia.org/wiki/Satisfiability_modulo_theories

which also links to the related tools in

https://en.wikipedia.org/wiki/Answer_set_programming

https://en.wikipedia.org/wiki/Constraint_programming

These systems can feel pretty magical and are getting more powerful all the time. (Of course there are cases where there is a better algorithm that you could find, perhaps a much more efficient one, and using these solvers will be asymptotically or worst-case slower than it had to be given more human insight.)

Some of these could be overkill for your application, and Z3 doesn't have a Javascript binding, so I guess it's not as helpful for web forms.


Do you have any suggestions on how to get started? I remember reading "The unreasonable effectiveness of SAT solvers" or whatever it was called, which showed that SAT solvers are not always the most efficient performers, but they can get you within a factor of 10 in much less time. I remember thinking, "Oh, but what about SMT solvers, then?" but not getting anywhere with that.


SAT, SMT and constraint programming lie on a continuum: SMT is SAT plus machinery from constraint programming. I'm not an expert on this, just relaying info from this video. According to the speaker, constraint programming will win in the long run because it has larger structures which can be exploited with global constraints (SAT sees only a flat sequence).

https://www.youtube.com/watch?v=YVbbNeM74lc


If your main use case is browser form validation, have a look at Tau Prolog. It doesn't have a constraint library, but standard Prolog should be enough for something like this.

http://tau-prolog.org/
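
For the points-budget example upthread, plain Prolog is indeed enough once the form values are concrete; a minimal sketch with made-up predicate and argument names:

  % valid_order(+Xs, +Ys, -Remaining): x's cost 1 point each, y's cost 5.
  valid_order(Xs, Ys, Remaining) :-
      integer(Xs), Xs >= 0,
      integer(Ys), Ys >= 0,
      Cost is Xs + 5*Ys,
      Cost =< 100,
      Remaining is 100 - Cost.

  ?- valid_order(20, 10, R).   % 20 + 50 = 70 points spent
  R = 30.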

The newly released Yarn 2 uses Tau Prolog to implement constraints on package versions and usage:

https://github.com/yarnpkg/berry/tree/master/packages/plugin...


AFAIR, lpsolve has both a C API (you may find it horrible) and a DSL. http://lpsolve.sourceforge.net/5.5/

There are a few web/Node-based ones, but I assume you've googled them already.


lp_solve and the COIN-OR project's CBC are free/open source and can bind to C or C++.

They also are released as binary executables that can be called from the command line or Excel.


Not anything embeddable that I know of, but the functionality you describe is present in SWI-Prolog, which is free software.

If it's a one-off job, I'd just use Mathematica to do it though.


Yes, I should've mentioned I found plus/3 for prolog. Not sure if it's available as something that can be coaxed into running in the browser, though.

https://www.swi-prolog.org/pldoc/man?predicate=plus%2f3

I've not been able to find something similar for Datalog. To be honest, I don't think I've managed to find solid documentation for Datalog at all, at least not enough to build an intuition for what an idiomatic approach looks like.


No, plus/3 is just a simple addition predicate. You should take a look at the CLP(FD) functionality: https://www.swi-prolog.org/man/clpfd.html

You also mentioned CLP(Z) in the parent comment. Not sure how familiar you are with these tools, but here's a nice intro: https://www.metalevel.at/prolog/clpz
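
For the original constraints upthread (the sum of a, b, c and d must be e, with e = 10 and b > 2), a CLP(FD) sketch in SWI-Prolog looks roughly like this; the 0..10 domains are an assumption, and the answer shown is just the first one labeling happens to produce:

  :- use_module(library(clpfd)).

  ?- [A,B,C,D] ins 0..10,
     A + B + C + D #= E,
     E #= 10,
     B #> 2,
     label([A,B,C,D]).
  A = 0, B = 3, C = 0, D = 7, E = 10.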

Avoid GNU Prolog. It sucks.


Thanks for the tip.

Ed: the introduction looks like exactly what I need on the Prolog side. Last time I looked at this stuff in Prolog, I only found some proof-of-concept/instructive texts using Peano numbers, which works after a fashion but is slow and painful.

There's nothing analogous for datalog or some other minimal, embeddable logic/predicate system?


Hm, maybe this could be combined with reasonml/ocaml js target...

https://github.com/Emmanuel-PLF/facile


Tau Prolog[0] runs in the browser, but it's still a bit heavy for forms.

What someone needs to do is take an implementation of Hitchhiker Prolog[1] like this one[2] – because it's so lightweight – and make it JavaScript developer friendly so one can use it on a web page without a second thought.

[0] http://tau-prolog.org

[1] http://www.cse.unt.edu/~tarau/research/2017/eng.pdf

[2] https://github.com/CapelliC/hitchhicker-prolog


I have been thinking of Prolog as SQL, where the "goals" are your data generator/table and the constraints are everything in the WHERE clause. That made it simpler to reason about.
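
A small sketch of that mental model with made-up facts: the facts play the role of rows in a table, and the body goals of a rule act like a WHERE clause.

  % "table": employee(Name, Dept, Salary)
  employee(alice, sales,  50000).
  employee(bob,   sales,  42000).
  employee(carol, devops, 61000).

  % roughly: SELECT name FROM employee WHERE dept = 'sales' AND salary > 45000
  well_paid_sales(Name) :-
      employee(Name, sales, Salary),
      Salary > 45000.

  ?- well_paid_sales(Name).
  Name = alice.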

Learning Erlang was also easier when I figured out that it is (was?) built on Prolog, with its analogy to SQL.


There is a subset of Prolog called Datalog, explicitly designed for use in databases. Datalog queries are guaranteed to terminate, and unlike Prolog it is fully declarative (order of declarations doesn't matter).


Datalog is implemented in many Clojure(Script) databases if you'd like to try it out; Datahike is my favourite for dev/learning use, but Crux, Datomic, DataScript and others exist.


I think it's very useful to think of Prolog as a simple database with a very powerful query language, for people who are already familiar with some programming languages and SQL databases. It's a better angle from which to approach it and understand the unique properties of Logic Programming.


Regarding Erlang, yes early versions were implemented on top of Prolog and the heritage is there for sure. Joe Armstrong talked fondly about Prolog in later years as well. For example, he described Elixir’s beloved pipe operator like this:

> This is the recessive monadic gene of Prolog. The gene was dominant in Prolog, recessive in Erlang (son-of-prolog) but re-emerged in Elixir (son-of-son-of-prolog).

(He was referring to so-called definite clause grammars (DCG) in Prolog.)
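
For context, a DCG rule implicitly threads an extra pair of list arguments through every goal in its body, much like a pipeline threads a value through each stage; a minimal made-up sketch:

  greeting --> [hello], name.    % roughly expands to greeting(S0, S) :- S0 = [hello|S1], name(S1, S).
  name     --> [world].
  name     --> [prolog].

  ?- phrase(greeting, [hello, prolog]).
  true.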


In terms of Erlang, my understanding is it began life as a modified Prolog, implemented in Prolog, around '86. But by the end of the decade Joe had created JAM, Joe's Abstract Machine, with others writing a runtime in C and the libraries. Nowadays Erlang runs on BEAM, which has since been improved and heavily optimized.


Most of the Prolog books are thick academic textbooks; for a more modern take, with YouTube videos, take a look at https://www.metalevel.at/prolog


For textbooks, The Art of Prolog is highly accessible (https://mitpress.mit.edu/books/art-prolog), and The Craft of Prolog (https://mitpress.mit.edu/books/craft-prolog) is not really appropriate for self instruction but is simply a delight to read (and you'll learn a lot of really, perhaps sometimes excessively, clever Prolog by reading it).


Another good resource is this Prolog text adventure game. Helped a lot of things click for me. https://gist.github.com/JosephLenton/3191695


Or Adventures in Prolog.


I'm told Prolog is the old way.

miniKanren (by William Byrd et al.) has been trending for many years, though I'd like to know what's fundamentally new about it.

Does the fact that Byrd later worked on a constraint-extended miniKanren (cKanren) mean that miniKanren is not as powerful?

His recent talk about using it with Barliman frontend was pretty cool [1]

[1] https://www.youtube.com/watch?v=RVDCRlW1f1Y


I always thought minikanren was just a toy Prolog implemented in Lisp and then other languages.


Definitely not the intent: http://minikanren.org/minikanren-and-prolog.html.

(I also doubt it's the result, though I've not done enough with either language to comment too much).


I read the article, and small/minimal/etc. isn't that much different from a toy in some respects, but I see your point about intent. I've seen a few implementations and didn't get the feeling of "production worthiness".


Constraint programming is the new way, with Minizinc as the language. OR-tools from google is the best current solver.

https://www.minizinc.org/


I think the Adventures in Prolog book, where you code up a text adventure game, is the best way to learn. It made things click much better than any tutorial did for me.

In general though I really like "learn x in y" as a resource.

I like the concept of Prolog too, but to me it makes very hard things easier and easy things hard, so in many situations it doesn't make sense to use.


I always wanted to write a compiler whose front-end consumes Prolog and back-end emits PostScript, and call it "PrologueToPostscript".

prologue: a separate introductory section of a literary, dramatic, or musical work.

postscript: an additional remark at the end of a letter, after the signature and introduced by ‘PS’.


What are the differences from Prolog for a constraint programming language such as MiniZinc?


MiniZinc is a purely declarative problem-description language. It's a way to describe a problem once and feed it to different solvers easily. You can provide hints for the search, but there is no further control over the solving process.

Prolog is a "full" language. You can program any search strategy you want with it.

I'd recommend starting with MiniZinc first, as it's easier. If it's not enough and you need to customize the search strategy, one can then move to Prolog, particularly a modern one with constraint support.
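
For example, with a CLP(FD) library (here SWI-Prolog's library(clpfd); the toy constraints are made up) you can choose the variable- and value-ordering heuristics yourself instead of leaving the search entirely to the solver:

  :- use_module(library(clpfd)).

  solve(Vs) :-
      Vs = [A,B,C],
      Vs ins 1..9,
      all_different(Vs),
      A + B #= C,
      % explicit search strategy: first-fail variable order, larger values first
      labeling([ff, down], Vs).

  ?- solve(Vs).
  Vs = [8, 1, 9].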


Any suggestion on what Prolog to use?


ECLiPSe CLP (no relation to the Eclipse IDE ;) has good built-in constraint support and can also interface to many external solvers. It's open source.

https://eclipseclp.org/


MiniZinc is a generic language and IDE for optimization problems. You can hook into a variety of solvers (LP, MILP, nonlinear, and some that are kind of like Prolog). The solvers have to be able to read the FlatZinc file format that MiniZinc exports if they don't support native hooks.


What is y?


No.


How many Prolog programmers does it take to change a lightbulb?

false.

https://www.j-paine.org/dobbs/prolog_lightbulb.html


Stop joking or someone's going to cut you.

(That pun wasn't very satisfying, was it?)


I think you have to learn prolog to find out.


It's a seriously good series of articles for summarising languages.


What is a good web framework or library for Prolog?



