The Art of Prolog (1994) (mitpress.mit.edu)
139 points by tosh on June 21, 2020 | 52 comments



It should be noted that the book can be downloaded under the "Open Access" tab.

This is one of my favorite programming books ever, on par with SICP and PAIP I think, and the font they used is really beautiful [1]. The PDF perhaps does not do it justice. On paper it's really crisp and pleasant to read.

I hope MIT Press opens up The Craft of Prolog [2] too, a fantastic sequel to The Art of Prolog.

[1] https://docs.microsoft.com/en-us/typography/font-list/lucida...

[2] https://mitpress.mit.edu/books/craft-prolog


It looks like the PDF is a 1-bit scan, so the text is a bit wobbly. Though I concur that Lucida Bright is lovely.

I wonder if the MIT Press — or the author — has the source files (presumably LaTeX) somewhere. It would be a shame to have a low-quality scan as the "definitive" archived version.


The flyleaf (first ed, 3rd printing, 1986) says

    This book was set in TeX by Sarah Fliegelmann
    at the Weizmann Institute of Science
    and printed and bound by MIT Press
That might be another lead.


I couldn't find it under Open Access. It's unfortunate. At $104 I can't afford to buy it. But I would love to read it -- I would gladly pay $20 or so for an ebook.


Don't try to locate it via the "Open Access" link at the top.

Load the page in a desktop browser. Directly below the book's cover there is a "Resources" heading. To the right of "Resources" are "Overview", "Author(s)", and "Open Access"; that "Open Access" tab is the one you want.


Skip to page 42 for the start, unless you want to read the forewords and prefaces.


thanks mate! I was already heading over to thepiratebay!


If anyone is interested in learning Prolog, I can recommend two very good Prolog books: The Power of Prolog (from the author of the video) [1] and Simply Logical: Intelligent Reasoning by Example [2]. I also recommend visiting the Awesome Prolog list [3]. Also worth checking out are an interesting extension of Prolog, probabilistic Prolog aka ProbLog [4], and a modern ISO-compatible implementation written in Rust, Scryer Prolog [5].

[1] https://www.metalevel.at/prolog

[2] https://book.simply-logical.space/

[3] https://github.com/klaussinani/awesome-prolog

[4] https://github.com/ML-KULeuven/problog

[5] https://github.com/mthom/scryer-prolog


I gave away most of my technical books a few years ago, but I kept The Art of Prolog and about 50 other classics.

It has been a while since I worked with Prolog professionally, but the memories are fond.

Good to see that the PDF is available for download.

For people who craft their code in languages like Python, Java, JavaScript, etc., I recommend just playing with an alternative language, whether it be Scheme, Common Lisp, Prolog, Haskell, ML, etc. Sometimes just a few evenings working through tutorials on the web is enough to expand the brain.


How did you decide that it was time to give away your technical books?


(I'm not Mark) but I would expect it was an issue of space. About 6 years ago my wife and I moved from a house with a full basement to a condo. The house had 1200 sq feet, and the basement added another 1200 sq feet. The condo had 1400 sq feet. I had to get rid of about 1000 books and my wife had to get rid of more than 2000 books.

I'm OK with ebooks (I actually prefer them), but my wife loves the feel of a real, physical book in her hands. Because of Covid-19 she has been reading more ebooks, but as soon as the library opened back up (with limitations) she went back to requesting as many books as she could.


As inetsee said, it is because of space. I got rid of a whole wall of shelves in my home office to make room for a meditation area and room to have my instruments out and handy.


Instruments?


I take it Mark is a musician. Or does he like to have a room to probe things with?

Let's go with musician.


Exactly right. I play didgeridoo, Native American flute, and guitar [1].

[1] https://markwatson.com/fun/


Thanks Mark. I didn't know if it was music or electronics or what lol.


I've been trying to push myself to learn Prolog, using this book, because Prolog seems like a very elegant way of modeling the interactions between business rules, compared to the bespoke functions I typically write during web app jobs.
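
For example (a made-up sketch; all predicate and fact names here are hypothetical), business rules read quite naturally as clauses:

    % Hypothetical discount rules
    eligible_for_discount(C) :- total_spend(C, S), S > 1000.
    eligible_for_discount(C) :- loyalty_member(C), signup_year(C, Y), Y < 2015.

    % Hypothetical facts
    total_spend(alice, 1500).
    loyalty_member(bob).
    signup_year(bob, 2012).

    % ?- eligible_for_discount(alice).   succeeds via the first rule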

I have no complaints about the book, but I've been struggling to figure out an easy setup for "Prolog as a web app backend." Typical Prolog implementations are not designed to be used as concurrently-accessed databases. Datalog seems to be a nearby technology that is more pointed towards that use case, but even then, there seem to be scant few implementations of Datalog databases with transaction support, and near zero that are open source.

RDF/OWL databases seem to be another conceptually-near possibility, but they also seem to have few implementations that are prepared for high-scale concurrent access.

Overall, it seems that there is a decent amount of interest in "logic-type" query languages as a curiosity, but little attention to implementations that support use in OLTP business applications, which makes adoption difficult.

Does anybody have any advice?


I recently participated in an online Prolog class by Anne Ogborn [1]. It used SWI-Prolog [2] and SWISH [3], and mentioned Pengines [4] for the web; see also her tutorials [5] and this video [6]. I haven't used it personally, but that is the pointer I have for Prolog + web.

[1] https://twitter.com/AnnieTheObscure

[2] https://www.swi-prolog.org/

[3] https://swish.swi-prolog.org/

[4] https://pengines.swi-prolog.org/docs/index.html

[5] http://pathwayslms.com/swipltuts/

[6] https://www.youtube.com/watch?v=JmOHV5IlPyU


I'm also curious about this, and what "real world" Prolog actually looks like.

On one hand, I think the paradigm really simplifies some things, but then on the other hand, everything else (aka routine programming tasks) seems much more convoluted and awkward.

I think PAIP may have been on to something, embedding Prolog in Lisp, but that's less elegant and more difficult in most other languages.


SWI-Prolog, a popular, free and open source implementation of Prolog, has good support for web development. The SWI-Prolog website itself is written using SWI-Prolog's http libraries.

There are a couple of tutorials listed under Tutorials > Web applications on the website. Probably the place to start is:

Tutorial — Creating Web Applications in SWI-Prolog

http://www.pathwayslms.com/swipltuts/html/index.html
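
To give a flavor, here is a minimal server along the lines of the hello-world in the SWI-Prolog http library documentation (the handler path and port are arbitrary choices):

    :- use_module(library(http/thread_httpd)).
    :- use_module(library(http/http_dispatch)).

    % Map /hello to the say_hello handler.
    :- http_handler(root(hello), say_hello, []).

    say_hello(_Request) :-
        format('Content-type: text/plain~n~n'),
        format('Hello from Prolog!~n').

    % Start with ?- server(8080). then visit http://localhost:8080/hello
    server(Port) :-
        http_server(http_dispatch, [port(Port)]).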



Agree - rules engines are legion in commercial environments, and a decent language or accessible non-Java Rete engine is something on my shopping list too.


I've got a few books like this now on my to-read list (SICP, etc).

But in a profession that revolves around getting Jira tickets into the Done column while my boss hovers over me with a stopwatch (so to speak), and then cramming leetcode for interviews when it's time to hop from one shitshow to the next ... I've come to the sad conclusion that, in this season of my life, I just can't justify spending the additional time on such books.

These books might make me "a better programmer" for some amorphous, platonic definition thereof, but in the year 2020 will these actually help me to materially progress career-wise?


I can sympathize with this perspective. One strategy that I try to employ is to find books which are both small and good. In the case of Prolog, Clocksin and Mellish's "Programming in Prolog" falls in that category. The Art of Prolog is a great book but it's a big book and you could probably spin your wheels on it for years, whereas a smaller book can conceivably be digested even if you have a crazy day job and kids and non-programming hobbies, etc.

Regarding just how it might make one a better programmer (while intentionally not answering anything about how it can "materially progress" one's career), I think that learning Prolog can make one a better problem solver because it forces you to specify the solutions to problems in terms of their truth conditions. This, in turn, makes you more likely to tackle problems by constructing mini proofs. I find this to be valuable no matter the language or language family I happen to be working in and it's a habit I definitely learned from writing Prolog.


I doubt that reading a few books will help you materially progress career-wise. From my personal experience, my interest in Prolog has absolutely shaped my career, but not because I read a few books (or more). Rather, because it got me interested in computer science as a subject in its own right.

So here's my personal experience. I first learned about Prolog in a second year CS course at uni. I got hooked, much like other folks get hooked on Lisp and Haskell etc. I stopped saying things like "nobody uses Prolog in the gaming industry", like my classmates, and started asking questions like "wait, if facts and rules are predicates, then what are queries? [1]". So I started reading every textbook on the language that I could find in the university library. Most of them were also AI textbooks, from the time when AI was basically hand-crafted programs written in Lisp and Prolog. From that I got interested in AI in general, and after working for a few years in the industry I started an MSc course on AI.

My MSc course was almost exclusively about statistical machine learning (I was told they were planning to rename it to "data science"). I started the course (part-time, while working) in 2014, right around the time when machine learning started becoming a hot topic; a rather fortunate state of affairs. I didn't go back to the industry with my hot new skillz though. Instead I started a PhD on Inductive Logic Programming, which is basically machine learning of logic programs, a subject that perfectly fits my background (in logic programming and machine learning).

And that's where I am now- I'm in my final year and waiting to hear from the editors of the Machine Learning Journal about my submitted paper that describes a polynomial time ILP algorithm (exponentially growing hypothesis spaces are a bitch, but you can avoid them with One Simple Trick). We'll see how that goes- one does not simply publish to the MLJ. But if I get that publication it will be a major step forward to getting my thesis accepted.

Er. Once I've actually written it.

Anyway, moral of the story: don't just read a bunch of books. Find a deep interest in computer science instead. For me, my undergraduate wonderment at Prolog led me down the path of becoming a computer scientist, growing my knowledge and honing my skills. There's no way you can go wrong with that.

(I forgot to mention I'm doing my PhD at one of the top rated universities for engineering research; not bad for someone who started as an unskilled immigrant working at a warehouse :)

_________________

[1] Also predicates. Mind blown.


Can't speak about the Prolog books, but SICP helped me to learn (and to some degree understand) LISP. That in turn made it easier for me to learn other languages or certain concepts of other languages.

Sure, you can learn French (and grammar in general) without knowing Latin, and most people do, but it sure is easier if you know it.


My favorite quote from the book:

In Prolog programming (in contrast, perhaps, to life in general) our goal is to fail as quickly as possible.


And for people who may not have known: the initial version of Erlang was implemented in Prolog, and the let-it-crash philosophy is central to Erlang.


While both of the statements you made are true, the "let-it-crash" philosophy of Erlang seems quite orthogonal to the "fail-fast" of Prolog.

"Let-it-crash" had to do with the idea that a wobbling process/service is harder to design around than one that's just definitively dead and also that most non-systematic errors in running programs are "Heisenbugs" (the problem is transient) and restarting the process/service will make it go away.

In Prolog the point of "fail-fast" is to determine as quickly as possible if some search branch can't possibly provide a True result, so that you don't spend unbounded time searching for every possible version of false.
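
A toy illustration of that (using between/3, as in SWI-Prolog): both clauses below define the same relation, but the second tests X > 990 immediately after generating X, so the vast majority of branches fail before Y is ever generated.

    pairs_slow(X, Y) :- between(1, 1000, X), between(1, 1000, Y), X > 990, Y > 990.
    pairs_fast(X, Y) :- between(1, 1000, X), X > 990, between(1, 1000, Y), Y > 990.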


I think these philosophies are complementary, and I could easily see one being the inspiration for the other, just as there are analogs between guard expressions and pattern matching with logical predicates.

Ideas are contagious.


Just to add a bit to that, since not many people know this part...

Joe Armstrong and S.R. Virding wrote a case study chapter for the book, "Strand: New Concepts in Parallel Programming," and it has a section explaining the history of Erlang up to 1990.

The first Erlang prototype, written in Prolog, ran into performance problems with backtracking and emulated concurrency, so it was ported to a Parlog system (a parallel Prolog) written by S.R. Virding.

The Parlog implementation was still too slow, so a third implementation was created that compiled Erlang into Strand88, a commercial product from Strand Software Technologies, Inc., which claimed to be the first commercial parallel logic programming language. That implementation was finally fast enough that Erlang could be used outside of the Computer Science Lab at Ericsson.

Presumably at some point after that it was re-written at least one more time...


I will have to read this, should I find the time.

I learned Prolog in high school. It was a challenge at first -- I didn't get it right away. But when I did, I was amazed.

Yet I have an ongoing problem with Prolog that I have never quite put my finger on -- it's something about how side effects intermix with pure logic. I find it makes the flow of activity hard to follow.

Can anyone else relate to this and perhaps explain it better?


Do you mean side effects in the sense of I/O, or updating the database, or do you mean the binding of logic variables?


Somewhat related is Curry https://www-ps.informatik.uni-kiel.de/currywiki/

It builds off the concepts of Prolog in a purely functional, strongly typed way. The main compiler, PAKCS, actually targets Prolog. It's hard to wrap your head around without knowledge of Haskell, but worth taking a look at.



    DEBORDEMENT DE PILE


No, stack overflow is in the other tab.


I don't know Prolog at all. Does anybody have a TLDR of what its pros and cons are, and why it isn't mainstream?


I think of Prolog programs as collections of constraints.

Now, a lot of people, when they hear the term "constraints", think of arithmetic inequalities. And indeed, there are "Constraint Logic Programming" extensions to Prolog that support such constraints. But that's not what I'm referring to. Even in the absence of such extensions, Prolog is a constraint language, but the constraints are structural rather than numerical.

Let's take a simple example, the Prolog code that defines list appending:

  append([], X, X).
  append([X|Y], Z, [X|W]) :- append(Y, Z, W).
This says, first, that 'append' is satisfied if its first argument is empty and the second and third are equal; and secondly, that it's also satisfied if the first and third argument have the same head, and 'append' is satisfied recursively on the tail of the first argument, the second argument, and the tail of the third.

Prolog allows any of those arguments to be inputs, and any to be outputs. The way this works is that variables in structures function as named holes to be filled in. So while you can use 'append' as a function to append two lists, like this:

  ?- append([3], [7], X).
  X = [3, 7]
you can also use it as a predicate:

  ?- append([3], [7], [4]).
  no
Here "no" means Prolog couldn't find any way to satisfy the constraint. Or, you can use it "backwards":

  ?- append(X, [7], [4, 7]).
  X = [4]
Prolog doesn't even know what's normally an "input" and what's an "output"; it just searches for a solution to the constraints. It can even construct multiple solutions:

  ?- append(X, Y, [4, 7]).
  X = [], Y = [4, 7];
  X = [4], Y = [7];
  X = [4, 7], Y = []
Here's the really powerful part. These solutions are produced one at a time. Some larger goal can be consuming them, such that if it fails, Prolog will try the next one automatically. Thus it's very easy to put together a large, complex constraint and have Prolog search for a solution.

The catch, as others have mentioned, is that it does that using a rather naive depth-first search. Even though, at the lowest level, it tries each alternative very quickly, there can easily be a combinatorial explosion of alternatives. To use the language effectively requires the programmer to learn strategies to avoid these explosions. So while in small examples the language seems very declarative, and indeed a lot of code can be written without too much attention to its procedural behavior, eventually this breaks down and you have to think about it procedurally as well.


This is something that unfortunately doesn't have a straightforward answer. Prolog ended up being very influential, but its very nature meant it was never going to become mainstream.

Prolog was intended to be a way to program using logic. It did this by encoding facts in a manner not dissimilar to a relational database; logical statements about the world expressed as Horn clauses (which you can think of as mathematical functions, but applied to logic); and unification, which is what links all this together. Unification is computation via the idea that you have a set of data and a bunch of logical statements, and you want to find everything in the data that satisfies those logical statements.

If you know SQL, Horn clauses should sound a lot like materialised views.

As to why it never really went mainstream, it's a matter of wrong place, wrong time. It didn't help that the language needed a checkpointing mechanism (referred to as 'cuts', and represented by '!') because the unification mechanism could get confused and do much more work than it needed to, so developers needed to put cuts in place to tell the interpreter where it could avoid backtracking to try more solutions within a Horn clause.
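
(For the curious, the classic textbook cut example: once X >= Y succeeds, the cut commits to the first clause, so Prolog won't later backtrack into the second.)

    max(X, Y, X) :- X >= Y, !.
    max(_, Y, Y).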

If you want to get a taste for it, maybe give this a try: https://github.com/norswap/prolog-dry

Prolog is one of a small number of languages that people should learn even if they never use it in a practical way.


>> As to why it never really went mainstream, it's a matter of wrong place, wrong time. It didn't help that the language needed a checkpointing mechanism (referred to as 'cuts', and represented by '!') because the unification mechanism could get confused and do much more work than it needed to, so developers needed to put cuts in place to tell the interpreter where it could avoid backtracking to try more solutions within a Horn clause.

Prolog backtracks when there are more solutions to a goal (think of a goal as a query). Not because "unification gets confused". Unification doesn't have anything to do with backtracking, rather backtracking is a mechanism to represent the natural nondeterminism of logic theories, where multiple variable instantiations ("bindings") might satisfy a formula.

So, when you enter a query at the Prolog top-level (the console), Prolog will try to find a set of variable bindings that satisfy your query in the context of your loaded program. If it can't, it will report "false", i.e. the query cannot be satisfied in the context of the program. If it can, it will report "true", i.e. the query can be satisfied in the context of the program, and it will also report the variable bindings that make the query true. If there is more than one set of bindings that makes the query true, you can ask for more solutions. At that point Prolog will backtrack and give you more solutions.
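
A minimal example (made-up facts); asking for more solutions with ; at the top level is what triggers the backtracking:

    parent(tom, bob).
    parent(tom, liz).

    ?- parent(tom, X).
    X = bob ;
    X = liz.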

However, there is nothing forcing your program to be written so as to produce multiple solutions to a query. It's entirely up to you and how you write your program. You can write a program so that it's deterministic (succeeds or fails exactly once) or not. The cut (!/0) is one way to force a program to become deterministic, but again that's not because anything gets confused. It's because sometimes it's hard to write a program so that it is deterministic. And sometimes you just have no idea why your program backtracks and you pepper your code with cuts, hoping it will stop.

Which of course happens less and less as you get to understand how the language works (and why it's a bad idea to put cuts everywhere in the first place).

(It's a bad idea because it means you don't understand how your own program works. Nine times out of ten you don't need to use it when you think you do).

(But I still use the cut all the bloody time because it's convenient).


> Not because "unification gets confused".

I know that, and I know the difference between a red and a green cut. However, I was targeting my explanation towards somebody who is completely unfamiliar with the language.


But, saying that "the unification mechanism could get confused" and that developers "needed" to use the cuts to control it makes it sound like backtracking is somehow wrong or unwanted, or a defect in the language. This risks confusing someone who is unfamiliar with the language.


Here's a 60s attempt:

Prolog was (theoretically) a practical declarative programming language. You describe what you want done, and it "figures out" what to do, as opposed to imperative, where you simply tell the computer each step to do.

That's something of an exaggeration in practice, and yet, there was something to it. A typical program was actually doing depth-first search behind the scenes, so "try this, or else try this, or else try this" algorithms were very easy to express in the language.

As an aid to that, it had unification, which you can think of as pattern matching on steroids. You could write a rule like

    something([A, [1, X]]) :- yada(A, X).

which would only match if the argument to 'something' was a two-element list whose second element was another list starting with 1, and so on. It's far more powerful than this, but it's been many years, and I've forgotten most of it.

As for the cons, implementations of the day were relatively inefficient, although Quintus was fairly good.

The killer problem, in my mind, though is that it didn't really deal well with backtracking through huge data structures. If you were trying to write a bit-blitting algorithm on large arrays, for example, and expecting to backtrack a lot, it just plain wasn't going to work well.

Not sure that helps much. I miss Prolog a lot, but it's hard to see a path forward for it. Even somewhat similar languages like Haskell will probably never get any real traction. Or even Ocaml/F#, which are far more practical.

I suppose the closest we have today are really good Makefiles. Done well, this captures just a bit of Prolog, in a very limited domain.


Why would you backtrack when doing bit-blitting?


In Prolog, backtracking is a fundamental feature of the language, and idiomatic code can do it in places you might not think of.

Part of writing efficient Prolog is doing things to avoid such backtracking (by adding cuts or rearranging code).

Bit-blitting is just an example I made up for "large-scale manipulation of array data". There are such cases where backtracking would likely make more sense, though I'm not thinking of an obvious one right now.


The reason you have trouble thinking of an example is that there isn't one. If you know enough Prolog to write idiomatic Prolog code, you also know enough Prolog to think of the cases where your code will backtrack and where it won't. Because idiomatic Prolog code doesn't do unnecessary backtracking.


That. Prolog will backtrack to get all solutions to a goal, but there is no reason why a goal must have multiple solutions. Unless you program it that way yourself.


You may well be right. As far as I can recall, though, as of the last time I was regularly using Prolog, trying to do serious work on a (say) 1000x1000 array was a recipe for disaster.

The exception would be Turbo Prolog, but it was a rather restrictive subset of the language. Fast as hell, though.


While Turing-complete, it is essentially a DSL for doing logical deductions in a system of predicate logic. This makes it quite natural for certain classes of search and constraint satisfaction problems, but less natural for day-to-day procedural programming.

Furthermore, it is missing many things that you come to expect in a modern programming language - arrays, good numerical performance, etc.

The way it works, though, allows you to do some things quite beautifully. For example, since every function you write can be run either forwards or backwards (solving for any one unknown amongst the arguments or result), you can often write one single piece of logic for parsing that can just be run in reverse to do serialization (or vice versa).
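
A tiny sketch of that reversibility, using a definite clause grammar and the standard phrase/2 (the grammar here is made up):

    % greeting//1 relates a name to a token list.
    greeting(Name) --> [hello, Name].

    % Parse:    ?- phrase(greeting(N), [hello, world]).   gives N = world
    % Generate: ?- phrase(greeting(world), Ts).           gives Ts = [hello, world]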

The main reason, though, that it (or its progeny like Mercury) is not mainstream is really the same as the reason that Haskell and its derivatives are not. It's just too different from the more widely known Algol/Simula languages, and the library ecosystem is comparatively poor.


>> Furthermore, it is missing many things that you come to expect in a modern programming language - arrays, good numerical performance, etc.

Those depend on the interpreter. For example, GNU Prolog has full support for arrays. SWI-Prolog has good support for rational numbers. Constraint libraries like library(clpfd) in SWI-Prolog simplify numerical computations, etc.

About numerical support in particular, the problem is that representing arithmetic in first order logic is not straightforward, because FOL doesn't have a concept of a "number" - so what a "number" is must be defined before arithmetic can be done (and in fact, to some extent, this was the purpose of FOL in the first place). Additionally, Prolog slightly departs from first order logic semantics in that it doesn't have a clearly defined concept of a "function" (by contrast, in FOL the "arguments" of predicates are variables, constants or functions). Instead, Prolog "cheats" and allows predicates to be passed as arguments to predicates. However, for performing arithmetic operations, Prolog defines a predicate is/2 that takes as arguments two "mathematical expressions", where an expression is either a number ...or a function (a constant being a function with 0 arguments). For example, to add two numbers you would write:

  ?- A is 1 + 1.
  A = 2.
This is clunky and messy and ends up hurting the eyes and the brain, even though there's no real problem with performance in particular (in most Prolog systems there's a low-level back-end that handles that anyway). The constraint library clpfd that I mention above exists to smooth out this impedance mismatch between the declarative paradigm and the wonky "functions in a language without functions" of mainline Prolog. It's billed as "declarative integer arithmetic", and it does the job well, once you get used to it.

Anyway it's not that there's no good support for numerical computations. It's just ...weird. Even in the context of, you know, Prolog.


Say you want to calculate the area of a triangle. In a Python-style language you would write:

    def area(base, height):
        return (base * height) / 2
This is a function which takes the base and height as inputs and returns the area as an output, and that is all it can do. You'd never think of that as "all", because what else would you expect it to do but what you wrote down in code? In Prolog, instead, you can write this:

    :- use_module(library(clpfd)).

    triangle_area_sides(Area, Base, Height) :-
        Area #= (Base * Height)//2.
This is not a function; it doesn't take any parameters or return anything. It's a rule which says how the Area, Base and Height values are related, plus an import of a module which handles numeric constraints. Now you can ask the Prolog rules engine to solve for any missing value(s):

- Given a base and a height, what is the missing Area value? (Same as the Python version)

- Given an Area and a Base, what is the missing Height?

- Given an Area and a Height, what is the missing Base length?

- Given an area, a base and a height, is that a valid triangle?

- Given an Area, generate some / many Base and Height combinations which make a triangle of that area.

- Given a Base, generate some / many Area and Height combinations which make a triangle of that base length.

- Given a Height, generate some / many Area and Base combinations which make a triangle of that height.

- Given nothing, generate some valid triangle sizes.

- Given some values but not others, can it be solved at all (true/false)?
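
For instance, at the SWI-Prolog top level it might look like this (a sketch; exact answer formatting varies by system, and unbounded queries need labeling to enumerate concrete values):

    ?- triangle_area_sides(A, 10, 4).
    A = 20.

    ?- triangle_area_sides(6, B, H), [B, H] ins 1..12, label([B, H]).
    B = 1, H = 12 ;
    B = 2, H = 6 ;
    B = 3, H = 4 ;
    ...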

Pros: that's much more capable than the Python function of roughly the same code length. Multiple logic statements like this can make a lot happen in a little code.

Cons: it's very different from imperative programming; it makes IO and state interactions a bit more difficult; it's easy to get the Prolog engine stuck generating infinite combinations or searching enormous search spaces and never finishing; and debugging what it's actually doing and what is going wrong is a very different skill from imperative languages.

And years of college courses have used Prolog as a "force them to deal with recursion" exercise, leaving people with the impression that Prolog is incapable of anything good, and that what it can do is slow and needlessly difficult.



