Why Learn Prolog in 2021? (dstrohmaier.com)
260 points by triska on Jan 5, 2021 | hide | past | favorite | 163 comments



I'm not convinced there's great utility in smart contracts, but if there is, I think there's a huge utility in contracts being declarative and statically typed, to avoid many of the problems we've seen with existing contracts. In that case, a statically typed Prolog dialect would be a good starting point. The contract would be a set of declarative rules describing acceptable next states of the contract. To make the contracts verifiable in linear time, the submitter would submit the next state of the contract, plus a compact binary representation of the path taken through the rules set, so no backtracking would occur in the verifier. You could allow recursion, as verification time would still be linear in the size of the submitted compact path representation, just not linear in the size of the contract. If you disallow recursion, then verification would also be linear in the size of the contract.
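
As a very rough sketch of what such a rule set could look like (the state layout and the valid_next/2 relation here are hypothetical, purely for illustration), the contract becomes a handful of Horn clauses over states:

  % Sketch: valid_next(S0, S) holds iff S is an acceptable
  % next state of the contract when the current state is S0.
  valid_next(state(escrow, Buyer, Seller, Amount),
             state(paid_out, Buyer, Seller, Amount)) :-
      Amount > 0.
  valid_next(state(escrow, Buyer, Seller, Amount),
             state(refunded, Buyer, Seller, Amount)) :-
      Amount > 0.

The submitter would present a concrete next state together with the clause (and disjunction branch) indices used to derive it, so the verifier checks a single derivation rather than searching.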

Granted, many of the problems with Ethereum Solidity contracts have more to do with its pervasive use of implicit behavior (a misguided attempt to hide the complexity of contracts) than with direct consequences of Solidity being imperative.

Here's a quick plug for Mercury[0], a statically typed dialect of Prolog with an optimizing native code compiler. Supposedly it's 5 to 10 times faster than commercial Prolog compilers or available interpreters.

[0] https://github.com/Mercury-Language/mercury


Lira[0] and its readable paper[1] are a good example of abstracting smart contracts into a statically typed, domain-specific language that describes the contract precisely at a high level. It's not Turing-complete, which works for a large class of contracts (for instance, see the American and Asian options examples in [1]).

One concern with logic programming is the cost of computation: on Ethereum, every transaction has a gas cost associated with it, so you can't run computations that exceed the gas available in a block.

Turner's ideas on Total Functional Programming[2] might have application in the smart contract space as well: since you disallow general recursion but allow structural recursion, you can likely precalculate or accurately bound gas costs ahead of time.

As for static typing, I completely agree: Solidity's poor design choices contributed to millions of USD in losses (e.g. the DAO hack) because the developers were not able to easily reason about the implicit behavior or concurrency model.

[0] https://github.com/etoroxlabs/lira

[1] https://bahr.io/pubs/files/bahr15icfp-paper.pdf

[2] http://www.jucs.org/jucs_10_7/total_functional_programming/j...


> One concern with logic programming is the cost of computation

Right, but the client executes the contract, keeping a trace of what needs to be computed by the verifier. The verifier doesn't actually execute the full contract, just verifies that the trace was faithfully executed. If we have

  let R = (A() || B() || C() || D()) && ! E();
  R().
If A is costly but true 99% of the time, and this transaction is one of the 0.001% of cases where D() is true, the contract verification trace says to execute D(), and the verifier never checks A, B, or C. See my nearby comment for a worked-out example of a compact trace representation for a deeper disjunction decision tree.

Effectively, because declarative languages don't dictate order, the client is free to re-order the contract's execution to be optimal for this particular transaction, without altering semantics. Declarative semantics are, by definition, independent of execution order. This makes efficient compilation and execution more difficult, but makes verification faster (if the verifier is provided with an execution trace).

Now, you could potentially do similar optimizations with Solidity contracts, with a suitably modified EVM definition, but if the execution order is up to the runtime/compiler instead of dictated by the source code, then you've by definition changed the language to be declarative.


> DAO hack) because the developers were not able to easily reason about the implicit behavior or concurrency model.

I believe the DAO bug was a reentrancy bug, not a concurrency bug. The code was not written to be reentrant because the developer didn't realize recursion could be triggered via implicit behavior. Many reentrancy bugs are concurrency bugs, but I really think that's not the case with the DAO bug. I saw one proposed fix (not sure if it's the one that was finally accepted) that used a reentrancy flag to prevent the problem and called the flag a "mutex", but it wasn't actually a mutex, adding to the confusion about the root cause.

I really don't think execution of a single contract transaction is concurrent, and due to (eventual) serializability of blockchain transactions, the blockchain acts as if concurrent execution within a contract across miners doesn't exist. If you have concurrent calls to a single contract, at most one of them will succeed, and those that fail will not affect the blockchain state.

In general, the Ethereum community seems to refer to serialized execution of contract methods in an order unexpected by the contract author as "concurrency", but I have seen no evidence that the effects on the contract state as reflected in the blockchain are not always serializable. In other words, it acts much like concurrent SQL queries under a serializable isolation level: if two concurrent executions modify the same data, one of them will fail instead of you getting an interleaving of the two write sequences.

It's possible that I misunderstand the EVM, but it seems insane to design a system to allow multiple threads to execute within a single contract at a time in the presence of shared mutable state, at least without an RDBMS-like isolation system in place.

Edit: https://forum.ethereum.org/discussion/1317/reentrant-contrac... supports my understanding.


You should check out the Digital Asset Modeling Language, DAML. [0]

They've built a language for distributed ledger platforms based on Haskell with defined state transitions based loosely on traditional contract law. For exactly the reasons you've mentioned, this makes modeling the participants, rights, and obligations of a smart contract use case incredibly efficient.

Whether smart contracts are useful or not remains to be seen. There seems to be a lot of potential in the finance and supply chain worlds.

Thinking about DAML makes me wonder how impactful something like COBOL really was. It definitely found use, and even long-term value, but was it transformational? I don't know.

I'm not even sure what the technology comparison should be for DLT without DAML. There are only so many use cases or niche areas where it's valuable.


I presume DAML is based loosely on SPJ's "Composing Contracts" paper. A few years ago at work, I was involved with a compiler from a DSL loosely based on SPJ's Composing Contracts. The compiler is a transpiler from contract descriptions to Cuda/OpenCL/ISPC code that runs a Monte Carlo simulation to calculate theoretical present value of the contract. The use case is complex derivative pricing and risk management.


Interesting point. The biggest counter-argument I see is the need for fine-grained optimization of gas usage. Especially given the crazy high fees as of late.

Hand-tuned imperative C enjoys a performance advantage over functional and declarative alternatives. So I’d imagine that imperative smart contracts are inherently easier to optimize than Prolog-style contracts.


While I prefer declarative languages, it is also interesting to note that very simple imperative languages are often the easiest to verify formally (as transition systems).


The thing is that declarative contracts separate execution from verification. You don't actually execute the contract in the miner/verifier. The client executes the contract and records a trace of the subset of the program necessary to verify execution. The miner/verifier just uses the trace to verify that the presented state is correct (top-level contract rule evaluates to true).

In the simple and common cases, there ends up being no difference between execution and verification. However, if you have a contract that has

  let F = (A() || B() || C() || D()) && ! E();
  let R = F() && (G() || H());
  R();
then with an imperative contract that gets executed on the verifier, you need to optimally order the clauses A-D, taking into account the cost of each and the percentage of transactions in which each one is true. The client isn't allowed to re-order the clauses in the contract.

With a declarative contract that is executed client-side, the client tells the verifier exactly which one of the terms A-D and which of the terms G-H needs to be evaluated. Let's say G and D are the lowest-cost functions that need to be evaluated to verify our transaction. If the execution trace is the child index taken in a depth-first traversal of the tree of logical disjunctions in the contract expression, then the trace indicating that D and G need to be evaluated is [3, 0].

This can be compactly represented as a single (potentially large) integer in a mixed-base number system. In our case, the bases are 4 and then 2, so the single integer representing the trace is 3 + 4*0 = 3. The verifier first hits a 4-way disjunction, and 3 mod 4 is 3, so it only evaluates the 4th branch. 3 div 4 is 0. It next hits a 2-way disjunction, and 0 mod 2 is 0, so it only evaluates the 1st branch.

If you order the logical disjunctions in your contract so that in the common case the leftmost alternative is always taken, then in the common case your execution trace as a mixed-base integer is 0. With estimates of the probabilities of the branches, you could use an asymmetric numeral system (as in Facebook's zstd compression) to represent your verification traces near-optimally.
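
A minimal sketch of a decoder for such a trace (the predicate name and argument order here are mine, just for illustration):

  % decode(N, Bases, Choices): recover the branch taken at each
  % disjunction from the mixed-base integer N, where Bases lists
  % the arity of each disjunction in the order encountered.
  decode(_, [], []).
  decode(N, [B|Bs], [C|Cs]) :-
      C is N mod B,
      N1 is N // B,
      decode(N1, Bs, Cs).

Here ?- decode(3, [4, 2], Cs). yields Cs = [3, 0]: evaluate only the 4th branch (D) of the first disjunction, then only the 1st branch (G) of the second.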

With a declarative contract, the cost of suboptimal ordering is borne by the client and not by the verifier.

With EVM, there's no separation of contract execution and verification. The verifier/miner needs to execute the contract at the request of the client. If you modify the EVM and contracts to keep track of what's provably side-effect free and allow the client to specify reordering of those terms, then you've by definition created a non-imperative language or sublanguage. In that case, it's much safer and easier to design the system from the ground up to have semantics that are invariant under evaluation order (that is, declarative semantics).

Most of the cost of declarative program optimization vs. imperative program optimization (deciding an optimal order) is borne on the client side. Due to the structure of the contracts and the traces, it's trivial to prove that portions of the contract don't need to be executed in order to verify the transaction.


Going down a similar path, I found Factor. It takes a different approach but supports backtracking. https://factorcode.org/


I'm familiar with Factor. The thing about Factor is that its execution semantics depend on execution order (i.e. it's imperative), which makes it impractical/unsafe to allow clients to provide verifiers with an optimal execution order for verifying a contract.


How do you guys think about the opportunity cost of learning dead/dying/new/unpopular languages? Even with newer languages that are gaining popularity and are likely to be used in the future, I struggle to justify the time investment. I could be wrong but some of my concerns are:

- my time is better spent getting deeper into some more popular language that I already know to some extent

- unused skills deteriorate with time so by adding a new language that you don't need professionally right now, you also need to add the future time and effort of practicing and maintaining that skill

- I could be wrong but I feel that it's a red flag to have too many of the more esoteric languages on your resume.

I do enjoy tinkering with a new language but very often it just feels like a distraction.

Thoughts?


Learning another language with basically the same mental model behind it is not as worthwhile. But learning a language with an entirely different paradigm is a real eye-opener and will make you a better programmer even if you'll never write a line of code in it. For example, learning Haskell is a really good way to learn about FP, and then you can easily apply it in a not-strictly-FP language.

Prolog is interesting because it is really niche, and "learning it", as in solving a few easy problems, will definitely not take much time, but will add another way of thinking about problems to your toolkit.


See my post in this discussion "You can write <strike>FORTRAN</strike>Prolog in any language". One subset of Prolog is a nice concise declarative notation for rule matching engines.

Back when I worked at Google, one of my colleagues (Jeremy) did some Bayesian inference on "1% experiment" results, where he needed to see which observations matched a complex hypothesis and complex rules for the context.

So, Jeremy found it easiest to think about the problem in terms of Prolog, and wrote some Python objects whose constructors defined these rules. He also whipped up a memoizing backtracking engine that effectively lazily generated a DFA over experiment observations.

He talked and thought in terms of Prolog, because that's the mental tool that he used, but the programming tool was Python.

Interesting guy, that Jeremy. He used to do cacao price forecasting for Mars (Mars Bar/M&Ms). He also enjoyed craft beer and carving pumpkins with chainsaws.

We also worked in the same (NYC) office with Fergus Henderson, who in grad school wrote Mercury, a statically typed dialect of Prolog with an optimizing native code compiler.


I agree.

For me it also came with a huge downside. I spent the last six to seven years using Scala, learning FP concepts, and getting familiar with Haskell and its concepts as well as other functional languages. This is after 7+ years prior of doing lots of typical Java, Javascript, and PHP, which I was quite happy to use.

Now I'm in the situation where, after learning languages which provide far more useful day-to-day principles, abstractions, and safety, I struggle, or rather get frustrated very quickly, when I go back to C-style languages, i.e. Java, Javascript, PHP, Python.

My huge problem now is I'm in Sydney, Australia. 4-5 years ago there was a reasonably strong enthusiasm for Scala and functional programming in general. Now, I've been looking to change jobs for the past year and absolutely nothing has popped up.

It seems the only roles available are traditional Java with Spring Boot, Javascript, Python, and C#.

Even with DevOps stuff, which I have a bunch of experience in, everyone wants Ansible, when discovering NixOS/Nix completely removed the need for Ansible for me, giving declarative, deterministic infrastructure/deployments without fragile line upon line of YAML files.

Of course, all the functional programming I've learnt is usable and can be applied every day in other languages. Functional programming isn't new, predating all the shiny new popular languages. The problem is that, having used better languages which treat these principles as first-class, it isn't the same as trying to hack together something in a language never designed to support them.

It seems I've now learnt myself out of a job. If I want a job I need to take a step backwards and use tools that now frustrate me and seem clunky / broken. It's at the point where I'm starting to look at a career change outside of software to start enjoying work again.


> 4-5 years ago there was a reasonably strong enthusiasm for Scala and functional programming in general. Now, I've been looking to change jobs for the past year and absolutely nothing has popped up.

I think businesses have weighed the power/capability of Scala against the relative lack of easily hireable programmers.

If a business makes itself dependent on one (or a few) key Scala programmers, it is more prone to being held hostage (i.e., the power dynamic shifts towards the Scala programmer). It will have to pay more salary or face the risk of the programmer leaving (which affects business continuity). The Scala stack doesn't automatically give a competitive edge against the competition, however (though I would say it does, but only if the entire org buys into it).

Therefore, it makes more business sense, esp. for a middle manager responsible for hiring, to hire a Java programmer (who are basically a dime a dozen at this point). The business will face no risk of the programmer holding it hostage (because they are easily replaceable). So that's why you see businesses starting to not adopt Scala.


Why did they adopt Scala in the first place? I remember around 2012-2014 it was the hawtness. My guess is that there are now languages that give you much of the power of Scala without the complexity, with Kotlin and modern Java having caught up.


I worked at a Scala shop in 2011. It was pure hell. Slow compiles, bad tooling (the Scala plugin for Eclipse was as slow as molasses; IntelliJ's was a bit better though), inexperienced developers. I asked why they chose Scala... It was obviously immature at the time. "Joe thought it was good." (Joe was the architect.)

I'd never touch it again.


It's changed beyond recognition since 2015 let alone 2011. Seems odd to base your judgment of it on an experience 10 years ago.


My other experiences at that company definitely put things in a bad light. I should take another look at it, you're right...


> Why did they adopt Scala in the first place?

I think in the case of Scala many companies became invested in Spark and Scala came along for the ride. Then as time went on, it turned out you could do advanced FP in Scala, despite its warts, with principled libraries like Cats and the Typelevel stack (see https://typelevel.org/cats/). Companies could then use Scala as a "better Java" or squint-and-it's-almost-Haskell. Or like you said they could just ditch Scala and use something less powerful but easier to learn and work with.


Kotlin is going down the same path, with JetBrains trying to create an eco-system of its own.

I bet in a couple of years, as the Java VM platform keeps improving, it will only matter to those stuck in Android.

This is a movie with a script I have seen multiple times.


Modern Java is a language I'd be happy to program in, but it doesn't have the language constructs you need if what you love is FP in Scala (easily chainable monad-things). Kotlin is a nice improvement on modern Java, but feels closer to Java than Scala. Arrow is interesting in this space: https://arrow-kt.io/


Scala got huge because of Spark.

I'm a Scala programmer due to Spark. I love it.


Indeed. The only recent Scala jobs I have seen in Australia have been Spark jobs, because Spark uses Scala. Even these are disappearing in favor of Python Spark roles.

I've worked on a few big data projects for large companies. I have a huge dislike for them, as they mostly seem to be about how we can fuck over a customer to better the business, or do shady stuff with the data, rather than anything meaningful.

That, and it's mostly just ETL: aggregating various data sources, cleaning, putting the data in some central location, then running queries on it. Plus the "if the only tool you have is a hammer, you will start treating all your problems like a nail" problem: tiny data sets going through NiFi / Hadoop / etc. when `cat ... | awk ...` would be more than sufficient.


Yes, for us Spark is being used for just ETL. It's internal data though, at a scale where no single machine could ever handle it. It's replacing large proprietary systems like mainframes and Teradata.

We wrote a framework to heavily simplify writing these ETL flows. No developer of the ETL flows is using Scala unless something custom is required. I wish we could opensource it.


>I've worked on a few big data projects for large companies. I have a huge dislike for them, as they mostly seem to be about how we can fuck over a customer to better the business, or do shady stuff with the data, rather than anything meaningful.

Sounds similar to the AI hype these days.


My theory is that Scala was adopted because Java stagnated. Java isn't stagnant anymore (even if it's behind other languages), and a lot of people I know that advocated Scala are just fine with Java now.


I strongly identify with this. I lucked into learning pure, typed FP at work and I can't go back.

I think there are professional opportunities doing FP in Haskell/functional Scala, though they are few and far between and in certain industries such as fintech or other areas where correctness matters more. Your best bet is to look for a remote position and keep your eyes peeled (at least, that's my strategy!).


As someone who works for one of the companies in Sydney that happened to use Clojure (one of the reasons I joined them), I now find myself in a strange situation where, as a hiring manager and the leader of a team of engineers who have little or no interest in FP languages, I can't justify advocating for FP languages.

a) hiring is way too difficult at the best of times, and it's compounded even more when you filter the pool down so small (admittedly the filter is not bad by some measures).

b) so few at the company have any interest in using languages outside of the more mainstream; for example it was no effort at all to get interest in Golang.


Haskell is the gold standard for languages that get you to think differently. Forth is another highly unconventional language.

Other less exotic languages can still be worth taking a look at to see what they do differently. Ada, for instance, is a plain old imperative/OOP language at heart, but it's different enough in the details that it could be worth looking at if you already know C/C++.


Back in university I had an assignment in Prolog that I'd completely forgotten about, so the night before the deadline I sat down around 9-10pm with a huge cup of coffee, and by 4am I'd learned enough Prolog to finish it (it was a simple path finder for a given graph). I even had some time left for a nap that night. And trust me, I was no more than an average programmer at that moment - I had a lot of enthusiasm and energy back then, but not that much prior experience, and I'd literally never seen Prolog before. So if you're interested in playing with the quite unique syntax and mental model of Prolog, just go for it; it won't be a huge investment of your time. You can easily cover the basics in a couple of weekends in your leisure time, instead of Netflix or something.
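
For a taste of what that kind of assignment involves (the details are my guess; the graph and predicate names here are made up), a naive path finder over a small acyclic graph fits in a few clauses:

  edge(a, b).  edge(b, c).  edge(a, c).  edge(c, d).

  path(X, X, [X]).
  path(X, Y, [X|Path]) :-
      edge(X, Z),
      path(Z, Y, Path).

?- path(a, d, P). then enumerates P = [a, b, c, d] and P = [a, c, d] on backtracking. (A cyclic graph would need a visited list to guarantee termination.)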


Prolog isn't dead, it's just resting. Actually, people still find practical uses for it, though they tend to be domain-specific.

Look at the alternatives. If there's a practical alternative to the language/material that has reasonably widespread use and can be counted on to be present in 5-20 years, go for the alternative. If you have a need for something that's been put through its paces for 40 years and is battle tested with well-supported compilers, consider the "dead language".

And you can always leave languages off your resume.


There are living languages based on Prolog syntax, like Mercury. I think they're very useful if you want to build something like an expert system or a game rule engine.


There are a great many languages often called "dead" that see rather active use in very specific domains; they are called dead merely for not being general purpose.

By the standards of "dead" frequently employed to point out that, say, Haskell is dead, R, Wolfram Language, and Fortran would be considered "dead" as well.


I have a feeling that Python has stepped into R's territory, right?


I am learning Prolog because I have an idea that I believe might actually be solved much more easily in Prolog than by hacking it together in some other language. I personally believe that learning paradigms of programming is far more important than learning languages. Prolog presents the very unique logic programming paradigm. Learning new paradigms always feels a bit mind-bending to me.

Edit: Actually, my last statement may be true only in hindsight, given that any experienced programmer will know, or know of, languages like C and Python that aren't mind-bending, so one's viewpoint gets slowly skewed. I think that's why beginners sometimes find things like Prolog or other "strange" languages interesting or even easy: they present a paradigm that is more like how humans think about certain tasks. Whether it's Prolog and logic, Elixir/Erlang and messaging, F#/Ocaml/SML with pattern matching, etc., these paradigms are actually quite natural to humans who haven't been tainted by paradigms like that of C, which takes on the paradigm of "let's think like a machine".


If you're interested in learning many paradigms, the Oz[0] language, Mozart system, and accompanying textbook[1] were designed as a teaching tool to teach most of the major paradigms in a single language. I have not sat down with them, but they have been on a far back burner for years.

[0] https://en.wikipedia.org/wiki/Oz_%28programming_language%29

[1] https://en.wikipedia.org/wiki/Concepts,_Techniques,_and_Mode...


Definitely! It's been on my backburner for years as well. I have the book and started to take the edX course a couple of years ago, but got sidetracked with work. I e-mailed Peter Van Roy back in May, asking if the edX course would be resumed. He stated that the local university course got removed from the engineering curriculum, so that's why the edX course isn't running anymore. Hopefully it comes back. He mentioned that he had recorded some lectures on his YouTube channel.

https://www.edx.org/course/paradigms-of-computer-programming...

https://www.edx.org/course/paradigms-of-computer-programming...

https://www.youtube.com/user/PeterVanRoy


Super glad to see this!

Programming Paradigms for Dummies (https://info.ucl.ac.be/~pvr/VanRoyChapter.pdf) is something I try to get everyone I know who's interested to read and really think hard about.

The textbook is the only real textbook I've bought after university because it's so... foundational.

I like to refer to programming paradigms as the building blocks of design patterns -- how do you derive design patterns and best practices? By trying to bring programming paradigms into your design! Our practice of immutability can be viewed as a means to make data flow more deterministically.


I see it's missing array programming (APL, J, K... etc) and concatenative (Forth, Factor, 8th) programming.


Prolog syntax will always be with us, due to its close relation with first-order predicate logic. This syntax will still be there after many now-popular languages have long been forgotten and replaced with other languages. In this respect, learning it is a safe investment.
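
To make the correspondence concrete, the classic syllogism transliterates almost symbol for symbol into a Horn clause:

  % "All humans are mortal": ∀X (human(X) → mortal(X))
  mortal(X) :- human(X).
  human(socrates).

  % ?- mortal(socrates).
  % true.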

Regarding opportunity cost of time investment, also take into account that, even though learning a lower-level language will first save you time in comparison to Prolog, most if not all tasks you solve with lower-level languages will take longer than solving them with Prolog would take.

So, investment in Prolog is like an investment with compound interest: It first takes time, and then you save more and more time, freeing you for other activities.


Learning Prolog is less about learning the particulars of the language than about learning the logic programming paradigm, which is timeless and universal.


"You can write <strike>FORTRAN</strike>Prolog in any language."

When I first started at Google, the 1440 Broadway office in Times Square was pretty crowded and they stuck me in where there was a desk, and I ended up sitting next to a guy (let's call him <strike>Mel</strike>Jeremy, because that's his name) who did "1% experiment" analysis. One day, I asked Jeremy what he was working on, and he said he was writing some Prolog to check some hypothesis, and he invited me over to have a look.

Really, he was writing a complex rule matching engine (behind the scenes lazily constructing a DFA that operated over experiment observations instead of characters) in Python, to be able to sort sets of observations and contexts into those that fit a hypothesis and those that didn't, as part of some Bayesian inference.

But, it was easiest to think of the experiment observations as a facts database and his complex hypotheses and complex contexts as Prolog rules. So, in this case, Prolog was a mental tool, not a programming tool, and he ended up implementing a tiny, tiny subset of a Prolog engine that used Python object constructors instead of Prolog syntax.


Note that when you ask a question like this, you will get a biased selection of answers. People who love niche languages for whatever reason will be more than ready to answer these sorts of questions. Passionate people are great, but you often are in need of some fresh opinions on the other side.

Here are some that helped me out immensely (I used to love Haskell but now I love it adequately):

https://prog21.dadgum.com/54.html

http://comonad.com/reader/2014/letter-to-a-young-haskell-ent...

New technologies can be a psychological trap.


Learning an esoteric language is as valuable as learning anything else: it’s as challenging and as mind opening as you’re willing to stray. If you’re just exploring another language with the same paradigms you’re used to, you’re just playing in a sandbox.

If you’re trying something that feels novel, that has different primitives and constructs and workflows, you might learn new ways to think about using your more comfortable tools. You might even come to have different standards and expectations of how your peers use those tools.

I’ve become a much better developer from using somewhat esoteric languages and bringing insights back from that. I learned how to reason about and organize stateful code by doing FP, I learned how to write better interfaces in cross-paradigm languages by just reading a lot about Haskell, and I learned a ton about how to handle branching logic and databases by a foray into Prolog/logic programming.

Which by the way isn’t dying, and has been breathing life into more common use languages. That’s the benefit of learning new languages. What they do differently can help guide the mainstream too.

Here’s some ideas that have been percolating into ecosystems that didn’t have them:

- immutability

- static types

- composable functions

- pattern matching

- minimal contracts/dependency control inversion

- provability/static validation

- static runtime optimization of dynamic patterns

These are all because people spend time with tech that’s uncommon. Maybe you don’t need it on your resume. But having it in your life experience is going to make you better at your dull day job.


I get your point, but Prolog is by no means an esoteric language. It (or derivatives such as Datalog) has been the lingua franca for logic and constraint programming since about the mid-1970s, is an ISO-standardized language, and has more implementations than most programming languages out there, including implementations written in shell, JavaScript, and BASIC (but probably not as many as historic LISP derivatives, lol). It has inspired the syntax of Haskell and Erlang, and has been used to prototype languages (Erlang, again) due to its extremely convenient built-in parser construction capabilities (operator precedence, recursive descent). It's used behind the scenes in many, many apps, including financial optimization/allocation apps, logistic scheduling and other discrete optimization, config management, combinatorial test case generation, graph/logic DBs including the so-called "semantic web", controlled-language applications and other non-ML natural language apps, medical and system diagnostics, medical and legal taxonomies, formal specification/verification, simulation, DSLs, parsers and compilers, type inference and other decision problems in programming languages, layout algorithms and data representation in hypermedia formats, modelling in database theory including variant CAP and OT approaches, games, etc. etc.

Sorry for ranting and unfairly picking your post for replying, but I'm a little bit disappointed to see the clueless and roundabout reaction of HNers here who, rather than sticking to the topic of Prolog, see stories like this as an invitation to advertise their fringe language, their not-quite-Prolog language, or their personal pet peeve.


Yes, thank you for explaining that. I meant to add a note that Prolog isn’t esoteric too, but I was getting close to bed time so left it at what I did write for the benefit of GP’s (and others’) horizon expansion.


In my experience, having a variety of programming languages on your resume is a double-edged sword.

On one hand, it often impresses or makes you more desirable in the eyes of the programmers who do your initial phone screen or whiteboard interview. They like seeing that you're curious and passionate about the field.

If you explore languages that they haven't explored yet, then they might be excited about the chance to learn something from you. If you've explored the same trendy language they're already exploring, then their ego might feel validated by seeing others on the same bandwagon.

Either way, they're probably frustrated by various examples of not having buy-in from management or architects to do various things they'd like with the existing codebase. So they might see you as the right kind of mindset, someone hungry who will be an ally in pushing for improvements.

On the OTHER hand, it will often worry managers and architects, who have enough stress on their plates dealing with the business stakeholders, and don't want to hire another kid who will throw a temper tantrum when they're not allowed to re-write the entire platform in Rust.

So if you DO decide to put various "personal interest" languages on your resume, then make it a point during phone or F2F interviews to highlight your professional maturity. How you understand the need to balance risk. In other words, that you explore languages on your personal time to make you a better developer, not because you really have an expectation of using Brainfuck on your employer's projects.

Even with that said, by the time you're 5+ years in the industry, I'd be a little wary of putting too much on your resume that isn't genuine professional experience. I include Lisp and F# on my resume just for the occasional conversation-starter... but if I really had a list of 12 hobby languages, then I would probably try to mention that in the interview, rather than listing them all on my printed resume/CV.


Some context on my personal path followed by my considerations:

I've been trying to keep a professional career with parallel academic activities. I.e., I did my masters and Doctorate while working. For both my dissertation and thesis I worked in domains and problems that lend themselves to logical formulations - and most of the implementations I did were in Prolog. I also had minor professional experience with Prolog. Focusing here just on the learning-the-language parts and disregarding all other activities:

* I _absolutely_ could have used that time to learn a more marketable language or framework. So that was a tradeoff, but -

* I most likely would have put only a fraction of those hundreds of hours into "marketable training". I'd have played games for most of them. Or just browsed history channels on youtube, etc.

* I did not feel like any practical skills deteriorated at all. In fact -

* I feel like learning Prolog in-depth was a valuable "exercise in learning". It might have helped me learn hard stuff for my job by making me a better independent student.

* I hardly ever interview, but I usually let Prolog be at most a footnote in my resume. I don't expect it to be useful in that way (but who knows?)

My takeaway would be: for marketable purposes, learning Prolog is only a substitute for meta-training. Like, instead of reading another (not your first!) book about design patterns or async control or (...) you could learn Prolog.

But 85% of the benefit of learning it is the intellectual stimulation.


(speaking for myself)

I focus on why a language looks interesting and worthwhile to learn - i.e. what new ideas are in it that are worth stealing.

Over and over, I've found that grasping the new ideas in one language almost always permits you to use them in another, more "mundane" language. For example, I've used C++ functionally, and brought Objective-C and Erlang ideas into C++ and so on when I was predominantly programming in C++. During that period, I also made a Scheme-dialect interpreter in C, because for some of the things we wanted to do, C/C++ were too hard-boiled.

More recently, I've brought CSP into Javascript through sweet.js so I can write go-routines in JS, and along the way also brought Mozart/Oz's "data flow variables" a.k.a. promises as native to JS (again through sweet.js).

I've also run a short, fun "course" at my company called "no spoon" where the participants build a stack-based language in JS and implement many defining features of many other programming languages... including all the above.

So what I'm trying to say is that you don't need to lament that you can't use a language that you learnt in your day job. New ideas in languages are always worth learning no matter what language you work with on a day-to-day basis.


You will not make more money because you learn Prolog. I'd approach it like a hobby. Some people like collecting stamps. Some like pondering esoteric programming languages. With things like Prolog, you are closer to Nature, so to speak. Or maybe more mathematical. Learning about a lot of "modern" technology is a lot like memorizing minutia about the semi-arbitrary choices that have been fossilized and accumulating cruft over time. YMMV.


> You will not make more money because you learn Prolog.

I think you might if you use it properly. I definitely don't think you'll earn less money - it doesn't take enough time for that.


Getting deeper into a language you already know has value. On the other hand, learning a new way of thinking about programming also has value. The ideal might be to be T-shaped: deep in one (or a few) languages, but broad at a shallow level.

In this view, you don't have to spend much effort to keep up the skill over time, since you only knew it at a shallow level to begin with.

If you're worried about too many esoteric languages on your resume, don't put them on your resume. You are under no obligation to state everything you know on there...

Is it a distraction? Maybe. Should you make it a serious project? Perhaps not. But there's nothing wrong with stopping and smelling the roses (and the rare languages) every now and then.


Prolog is declarative instead of imperative. Playing with just a toy example will be eye-opening. You don't need to add the tool to your daily tool chest, but rather expand your horizon.
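
For example (a sketch; the facts here are arbitrary), a single definition of ancestor/2 answers queries in several directions, with no loops or explicit control flow:

  parent(alice, bob).
  parent(bob, carol).

  ancestor(X, Y) :- parent(X, Y).
  ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y).

  % ?- ancestor(alice, D).  enumerates descendants of alice
  % ?- ancestor(A, carol).  enumerates ancestors of carol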


Personally speaking, if you're interviewing and you discuss your interest here, it tells me you think a lot about programming and your own thought patterns, with a curiosity about what's beyond what's "dreamt of in your philosophy". As long as you don't expect us to let you rewrite a whole service in it just to make you happy, these kinds of interests generally score you big points with me, conveying that you are a programmer who is eager to learn and try hard things.


Dead/dying/new/unpopular doesn't matter too much. Whether it works, and whether it applies to problems you want (or need) to solve, matters much more. If it's dead, it may not get updates, so evaluate it based on what it is now and what you could conceivably add to it.

Learning a new language gives you more ways to understand languages you already know, so it has some value even if you don't use it professionally.

Unused skills deteriorate, but it's often easier to learn things the second time.

I wouldn't put too many languages on a resume. It's always relevant experience, relevant education, relevant knowledge. That means you can omit things if you don't think they're using Brainfuck in production, or you don't think they'd appreciate the classy name. Maybe make a point of putting your favorite esoteric language on, just in case, but leave the others out.

Being able to pick up languages as needed is also a skill. At my last job, I was hired to do PHP work, but I learned Erlang on the job, as well as Applescript (ugh), got a lot less bad at C, wrote some perl and shell, and even had to write some python from time to time. And of course, I had to debug other people's Java, but didn't write any. I haven't seen a lot of positions where they explicitly hire for that, but it's sometimes what's needed.


It kinda depends on your goals, I think. The T-shaped developer skillset is extremely, extremely valuable, and that will probably only continue. Sure, you could get deeper into a language you already have experience with, but do you think that will better prepare you for future experiences in different languages? Do you want to stick with the tech you're currently using for the rest of your career? If you do, that's great! Don't even worry about the other stuff. But I can say, any time I've spent in my career learning something new has never been wasted time, even if I never used the tool again.

Sometimes it's also fun to just see something different. There is value in "Play". You might find you actually really love the new thing, or you might find the new thing just totally reaffirmed your love for your current tools. Either is great! But finding out is probably more valuable.

For what it's worth, to your question directly, I found using Prolog to be a pretty profound experience with respect to how I thought about programming. I haven't touched it in ~12 years, but while I was using it I thought it was incredible. One of the "funnest" learning experiences of my career was building sudoku solvers in Prolog.


If all you see in languages is the dollars you can make with an hour of time, then learn whatever the next COBOL, C#, Java, or Python is. If you are focused on solving problems, then learning another language, especially one that allows you to approach problems differently, makes a lot of sense. I haven't used Prolog in anger in about 20 years. But the experience with declarative, logic programming has carried over to nearly everything I've done (you'll see state machines differently and see other logic patterns you can generalize). The resume isn't the be-all end-all. Programming is a portfolio profession. You can prove your skill by showing your work - even if it's just some toy you wrote for fun on the weekend.


I'm curious about this as well, though in a more general sense than just PLs.

There are already so many modern/useful things I want to learn that it's hard to justify learning something because it's 'beautiful' or purely intellectually stimulating.


> hard to justify learning something because it's 'beautiful' or purely intellectually stimulating.

You might be surprised to find out that "because it's 'beautiful' or purely intellectually stimulating" is the essence of the 'Hacker' ethos for which this site is named.

The Y combinator itself is something that fits this description quite perfectly. You will never use the Y combinator in practice, and it likely won't ever help you get a job, but it's a beautiful, wonderful thing to learn about and play with.

I'll be the first person to criticize PG on a range of topics, but to his credit he has been a great proponent of this ethos, and it is no coincidence that this site is named after what would now be considered esoterica by many of the commenters in this thread.

It's predictable that the VC ethos of SV would undo that of the Hacker spirit that built it, but it's still sad to see.


> You might be surprised to find out that "because it's 'beautiful' or purely intellectually stimulating" is the essence of the 'Hacker' ethos for which this site is named

Is it? I thought hacking was more (or at least also) focused on exploration and making things work [0].

A Y-combinator doesn't really seem to fall into "'beautiful' or purely intellectually stimulating" either [1]. It seems like something that has practical value, as it enables recursion.

I also don't think all these things are mutually exclusive. I find consensus algorithms to be quite fascinating, and they also seem to have massive practical value, which is why I want to understand them more deeply and perhaps implement one.

[0] https://en.wikipedia.org/wiki/Hacker_culture

[1] https://stackoverflow.com/questions/93526/what-is-a-y-combin...


Personally, I probably would have quit entirely had I limited myself to only learning “practical” things.


I think I'm the opposite. If I can't see the practical application of something that I'm learning I generally lose motivation very quickly.


I think there's value. I learned a tiny bit of prolog while looking at semantic technologies, and I found it really interesting. It's an interesting query language for arbitrary data; an alternative to GraphQL that creates "links" by establishing facts about bits of unstructured data.

Is it worth becoming an expert? No. Is it worth spending a day or a week playing with? Most definitely. I have a few niche things I've picked up like that. Learning a little bit of Lisp was similar, although I'd like to learn more of that some day.


Maybe learn something other than a programming language, like algebraic geometry. Dunno, I feel learning only "computable" things is very restrictive, because we live in a hyper-computable world.


If you're optimizing for a career or something, then it's probably prudent to learn the latest and greatest fad.

But sometimes you learn for fun, or out of sheer curiosity.


I spent much more time learning Latin than Lisp and Prolog together. But at least Lisp was somehow useful, if I look at it from today's perspective.


My rough criteria:

- does the language have some name recognition and positive reputation?

- is it still in use (even if only in very niche areas)?

- do you have a chance to use it to solve a problem (for work or for fun) to which the language is well suited?

If the answer to all three is yes then I would have no hesitation.


To this, I personally would add as an "or" clause, "Does the language have something fundamental to teach me about programming that I don't already know, and which could be useful when brought to other domains?" This was the reason I learned FP, and also the reason I learned Prolog. I wanted to extend my understanding of programming beyond the C/procedural context. To that end, I found both exercises immensely beneficial even though I've never used a FP lang or Prolog to build an actual project.


That's a fair point. I think that is worth highlighting. But I wouldn't make it an "or" clause.

For context, I've been able to satisfy all the criteria when learning both Prolog and Erlang. They were both profoundly educational experiences, but I think part of the impact of that experience came from using them to solve problems I had, and to which those languages were well suited.

There are many languages that will teach me important concepts, but time to devote to learning them is limited. I need a filtering/prioritization process. The trick is to just be aware that these other languages exist, what domains they are good for, and be ready to learn them when I have the right problem.


If you care only about job prospects and your career, then I would skip it. However, if you like exploring different ideas and concepts in computer science, it is definitely worth a look.


> How do you guys think about the opportunity cost of learning dead/dying/new/unpopular languages?

Interesting question. My advice would be as follows:

1. Learn one practical language really well - well enough that you can immediately write correct code, with no IDE or documentation, both for algo-style problems (HackerRank etc.) and for the things that typically come up in your day job or hobby projects (e.g. if you do data science and use pandas a lot, be sure you can do all common operations from memory; if you do web development, you should be able to churn out a simple REST handler in your framework of choice without thought). You don't need to rote-memorize the language's entire API surface, of course, but you should be able to churn out all the frequently needed stuff without thought.

2. Learn one or two other languages less well, but well enough to cover the spectrum of stuff you need to do, efficiently. Relying on the IDE and frequently looking up stuff is likely fine. For example, everyone should know at least one scripting language well enough to automate things by gluing stuff together and if you need to periodically do some simple web UI for a one-off tool, learn enough javascript to do so effectively if not necessarily elegantly. If you want to do systems stuff you'll at least need average C skills and so on and so forth.

3. Only now that you covered your bases to do "realistic" tasks effectively, look at learning some more exotic languages. Otherwise you risk flitting from one thing to the next wasting a lot of time setting up a third rate dev environment for a weird language and hunting for some semi functional library that helps you achieve X and never working on anything meaty enough to really learn much.

But once you are productive and cover a range of tasks with mainstream languages, learning either a maybe-up-and-coming or an already-half-dead-but-interesting language can definitely pay off. Because if you make good choices about which languages to learn, you can either broaden your horizons in ways that will pay off even if you don't ever use the language for "real work" or, if you're lucky, you might have picked an up-and-coming language and be one of the small pool of people with any amount of experience with it, which will give you a strong competitive advantage. Don't allow your main language skills to grow dull (unless you are switching to a new bread-and-butter language), but there is no problem with learning X and then forgetting a lot of the day-to-day stuff about it, if you got some lasting enlightenment out of it.

Also: don't put esoteric languages on your CV if you just have toyed with them, only list things you have used professionally or done some non-trivial hobby project with (unless, maybe, you are looking for a job in "obscure language X").

One of the annoying things with people telling you to learn X because it will help you to grow intellectually is that they often don't really provide concrete examples, so let me provide a few suggestions:

1. Lisp: templating trees by bashing together strings (or CPP style tokens) is braindamaged. Having a compiler at runtime is powerful and useful. A real REPL (not Python, ruby, scala, ...) is powerful and useful. Most DSLs suck and would have been better replaced with a simple sexp format.

2. Smalltalk: You can have a syntax that's basically as simple as Lisp but more readable. Everything is live, inspectable and modifiable, and you can persist the whole state of the world trivially. This has dangers, too.

3. Python: syntax can, and maybe should, look like pseudo-code. By getting the pragmatics mostly right (e.g. slices, convenient dicts, convenient and immutable strings, mutation returns None, good error messages and repr, basic pseudo-Repl), you can make a very productive language without particular hardcore engineering skills or understanding of CS theory (or history). On the other hand these will also become real limitations at some point.

4. APL/J/K: there are more reductions, inner products and outer products that are useful beyond those involving addition. Array rank and broadcasting. The joys of right associativity. Function power (including infinite) and under. The effects (good and bad) of extreme conciseness.

5. C/C++/Zig/Rust: understanding ownership, stack, heap, pointers. Low-level thinking.

6. Clojure: cons cells as in lisp or scheme suck. So do traditional FP linked lists. Reducers/transducers, schema-language leveraged for property-based tests. One way to think about concurrency with immutability.

7. Erlang: you can have some very nice properties without a horrendously complex implementation, if you make the right engineering trade-offs. E.g. Erlang is pretty much the only thing that gives you preemptive multi-tasking with very high granularity. Pseudo-ropes are an interesting way to do strings. Supervision trees are a powerful concept and way to think about failure and failure handling in concurrent code. The value of deep production introspectability and in particular low-overhead tracing.

8. SQL: Data is king. The power of a truly high level language (even if flawed). Where the abstraction breaks down, badly (e.g. locking, or the DB switching execution plans under you). "Null" done right (true ternary logic, even if confusing, is a lot better than the NaN crap in IEEE 754 or null in mainstream languages). Why you still want to avoid Nulls most of the time. ORMs are for losers.

9. Ocaml: a proper type system can make writing certain types of code much less error prone and is possible without Haskell-level ivory-tower-wankery. There is value in having a somewhat simple minded but predictable compiler.

10. Zig: you can do almost everything C++ can do at a tiny fraction of the conceptual overhead.


Prolog implementations are too heavily reliant on the stated order of predicate rules to make execution progress. Many predicates are non-terminating or extremely inefficient when faced with goal inversion, but are often 'fixed' by simply reversing the order of some of their rules (making them useless in the original direction in the process). This is disappointing when trying to maximize Prolog's biggest potential: building true total relations that can project in any direction with a single definition.

There are technical reasons why the pursuit of term reordering is problematic, namely 'cut', which is a kind of out-of-bounds alteration of the program's execution. Previous attempts to get rid of 'cut' apparently ran into other problems. IMO the root of the issue is that Prolog has no way to declare "closed" predicates and then optimize execution based on their presence. I think cut papers-over (heh) the lack of "closed predicates".
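
The textbook illustration of this (a standard example, not the parent's) is max/3, where the cut silently encodes the assumption that the clauses are mutually exclusive, instead of that exclusivity being declared and checkable:

  max(X, Y, X) :- X >= Y, !.
  max(_, Y, Y).

This looks closed but isn't: ?- max(5, 3, 3). still succeeds through the second clause, because nothing tells the system that the two clauses are meant to partition the cases.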


>> Many predicates are non-terminating or extremely inefficient when faced with goal inversion, but are often 'fixed' by simply reversing the order of some of their rules (making them useless in the original direction in the process). This is disappointing when trying to maximize Prolog's biggest potential: building true total relations that can project in any direction with a single definition.

I don't think that's a commonly agreed-upon "biggest potential" of Prolog. Prolog is a general-purpose language with the syntax and the semantics of the first-order predicate calculus, albeit limited to Horn clauses and with many other restrictions that are necessary to make for a language that can be used in the real world, for real programming tasks.

I cannot easily think of an example where changing the order of clauses (not "goals") in a predicate definition prevents the program from "running backwards". For example, take the typical append/3 predicate (defined here as fappend/3 to avoid clashing with the built-in):

  fappend([],Zs,Zs).
  fappend([X|Xs],Ys,[X|Zs]):-
        fappend(Xs,Ys,Zs).
Moving the first clause, fappend([],Zs,Zs), after the second still allows three calling modes:

  ?- fappend([a,b],[c,d],Xs).
  Xs = [a, b, c, d].
  
  ?- fappend([a,b],Xs,[a,b,c,d]).
  Xs = [c, d].
  
  ?- fappend(Xs,[c,d],[a,b,c,d]).
  Xs = [a, b] ;
  false.
And complexity does not seem to be affected. Can you give an example where this is not the case?

>> IMO the root of the issue is that Prolog has no way to declare "closed" predicates and then optimize execution based on their presence. I think cut papers-over (heh) the lack of "closed predicates".

I, like tom_mellor, am also not sure what you mean by "closed predicates". Could you clarify?


> I cannot easily think of an example where changing the order of clauses (not "goals") in a predicate definition prevents the program from "running backwards".

It's bad enough to use "directions" for what are properly called "modes", but it's positively counterproductive to say that anything in Prolog runs "backwards". Prolog always runs top-to-bottom, left-to-right, and this is precisely why it's easy to write nonterminating definitions. As a community, we are doing ourselves a disservice by claiming that anything can magically run "backwards".

As for your question, the canonical example for clause ordering being a problem is where the incorrect order prevents generation of answers because a recursive clause precedes a non-recursive one.

Example:

    xs([x | Xs]) :- xs(Xs).
    xs([]).
This recognizes lists of xs:

    ?- xs([x, x, x]).
    true.
But it cannot generate them:

    ?- xs(Xs).
    ERROR: Out of local stack
Contrast with:

    ys([]).
    ys([y | Ys]) :- ys(Ys).

    ?- ys(Ys).
    Ys = [] ;
    Ys = [y] ;
    Ys = [y, y] ;
    Ys = [y, y, y] .


I think everyone knows what we mean by "running code backwards", and I was sticking to the OP's choice of language for simplicity. I don't disagree with your (nevertheless pedantic) observation, however. I agree we should be more precise in our terminology.

Anyway my bad, I guess you can think of many cases if you try a little. I apologise. I was not in my programming mind today.


Define "closed"?


Isn't Datalog the solution to much of that?


I didn't read this article but I did decide to use Prolog (which I didn't know prior) for this year's Advent of Code. I ended up solving the first 8 days (so, 16 puzzles) and it was fun.

What I got out of the experience:

1. Intellectual confidence - it's not often that we get to learn not just a new way of doing something but a new paradigm. I am 38, and while I try to challenge myself in all areas of life, how often is something a true mind-bender? The fact that I was able to wrap my head around (some of) the Prolog way of doing things proves to me that I "still got it" and is good exercise for keeping it that way.

2. One specific thing about Prolog is its heavy reliance on recursion. I probably "get" recursion better than the average CS grad, but this took me to the next level - I will be able to apply it in other languages.

3. There's something about Prolog that makes programming feel like this: a ton of thinking up front, but once you do it, the implementation is bug-free (versus lots of fiddling around in other languages). I guess that's probably true of non-imperative languages in general, but it was a nice change of pace. I am not a full-time programmer anymore (product manager), so for me coding is mainly fun/intellectual, and doing it in a way that removes the fiddling is great.

4. Prolog feels very retro, I can't really explain it but it doesn't try to be cool - it's the opposite of say the latest JS framework that is polished to the 9s. Prolog feels more like the cockpit of a fighter jet - sparse and powerful, but you have to know how to use the tools.

5. The community is cool. I found the IRC channel and folks were very generous with their time, though to be fair I asked questions like: "I implemented X like this, is that canonical?" vs "how do I do X?"

6. I am glad to have Prolog in my toolkit. I can imagine problems down the road where logic programming is the best tool and I know I can reach for it now.

Bonus: a few posters in this thread asked about the ROI of learning a retro language given that they could be learning something else. I think it's totally up to you, but looking back on the list above, almost none of the benefits I enjoyed come from the mere fact that I now know Prolog. So I guess you have to pick and choose your challenges; I wouldn't get the same value from learning Fortran77 and I wouldn't expect anyone else to either. Prolog is different enough from whatever else you're doing that it can grow you beyond learning some minute detail of React Hooks (which are great, btw!)


I used Datalog a good deal in grad school (CS). It was frustrating at first, but once I figured it out, I liked it a lot.

If you've never tried a declarative programming language, you should give Datalog a try.


+1 to Datalog -- it is fantastic for static analysis in particular! There are a bunch of papers from Yannis Smaragdakis' group on this; I built my thesis work on top of their system Doop [1] which is a whole-program points-to analysis written completely in Datalog.

In general it's very nice to be able to prototype queries/inference rules quickly and then tweak clause ordering, etc for performance later if needed.
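To give a flavour of what those rules look like, here is a minimal sketch of my own (not taken from Doop): transitive reachability over a call graph, the "hello world" of Datalog analyses:

    % Input facts (hypothetical): direct edges of a call graph.
    edge(a, b).
    edge(b, c).
    edge(c, d).

    % reach/2 is the transitive closure of edge/2.
    reach(X, Y) :- edge(X, Y).
    reach(X, Y) :- edge(X, Z), reach(Z, Y).
Because Datalog engines evaluate bottom-up (semi-naively), this terminates and returns the same answers regardless of clause or goal order, which is exactly what makes the "reorder later, purely for performance" workflow safe.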

[1] https://bitbucket.org/yanniss/doop/src/master/


Probably worth mentioning for those interested in Datalog that there's actually a growing selection of databases for Clojure that use Datalog as their query language. These Clojure variants of Datalog (they model triples as Clojure data structures) are basically becoming as ubiquitous in Clojure as SQL is elsewhere.

I have documented them here: https://github.com/simongray/clojure-graph-resources#datalog


If I may hijack the Clojure reference, how does Clojure's core.logic compare to Prolog? Do they broadly fit in the same problem space?


I think core.logic is basically just a Clojure implementation of miniKanren. I'm not too familiar with logic programming myself, but my impression is that miniKanren, core.logic, and Prolog all use basically the same algorithm, operate on the same type of data, and try to solve the same set of problems. The advantages/disadvantages are probably external to the core problem space, e.g. whether you prefer an embedded DSL to a full-blown logic programming language, and so on.

I'm a hardcore Clojure advocate (it's a fantastic programming language with a great ecosystem of libraries), but there's no doubt that the amount of resources available for learning the API of core.logic is less than what's available for Prolog. If the objective is to learn logic programming from the available resources, Prolog is probably the better option.


TerminusDB, Crux and Grakn are other databases that use Datalog variants. Becoming a big thing.


You might like Flix then: first-class datalog constraints https://flix.dev/


Every developer should learn Prolog or a logic language in their career. It's mind-bending in the best way, and opens your eyes to how a computer can do work for you by you describing the problem instead of the solution.

SQL is a language that could have been a Prolog. In fact, querying in Prolog in general is absolutely beautiful.
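For instance (a sketch of my own), a SQL-style join is just shared variables between goals:

    works_in(alice, sales).
    works_in(bob, engineering).
    located(sales, london).

    % ?- works_in(Who, Dept), located(Dept, london).
    % Who = alice, Dept = sales.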

Prolog is homoiconic like Lisp; extending the language with new features is trivial, as it requires no new syntax.
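A small illustration of that (my sketch, valid in any ISO Prolog): a clause is itself just a term built with the (:-)/2 operator, so programs can take other programs apart with ordinary unification:

    ?- (grandparent(A,C) :- parent(A,B), parent(B,C)) = (Head :- Body).
    Head = grandparent(A, C),
    Body = (parent(A, B), parent(B, C)).
This is one reason Prolog meta-interpreters tend to fit in a handful of lines.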


At a higher level, don't all programming languages do some amount of work for you? Fundamentally, all programming boils down to manipulating electrons in a circuit board or something; it's just that different programming languages, from assembly to SQL, provide different ranges of abstraction for what they handle vs what the programmer has to handle.


I was surprised to learn Prolog is used in Inductive Automation's Ignition SCADA product when it barfed out a Prolog error after I entered an invalid expression. https://inductiveautomation.com/


With a name like "inductive", it kind of makes sense :)

I wonder if it's used for their alarming logic.


> I wonder if it's used for their alarming logic.

If you need easily verifiable rules for your rules engine, it's hard to go wrong with Prolog.
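For instance, rules of this shape read almost like the specification itself (a hypothetical sketch, not Ignition's actual code):

    % Raise an alarm when a tank's level exceeds its configured threshold.
    alarm(Tank) :- level(Tank, L), threshold(Tank, T), L > T.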


Prolog is great and kind of weird in a good way. One other benefit is that you can pick up Erlang really quickly if you know Prolog. Erlang was first prototyped in Prolog, and that host language heavily influenced Erlang's syntax. Why learn Erlang? Check out the high scalability of WhatsApp.


Even more than Prolog, I am really excited by the possibilities of Answer Set Programming as exemplified by Clingo (See https://potassco.org)

The main feature of ASP that I like is the way it handles negation. Now, ASP is not a full-fledged language but should instead be viewed as a workable pure-logic-based notation for expressing some kinds of problems, and negation in particular is handled really well. It's not easy to learn but is well worth the effort.
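For a taste, here is a tiny sketch in clingo's notation (my own example): default reasoning via negation as failure, the kind of thing ASP handles particularly cleanly:

    % Birds fly by default, unless we can show they are penguins.
    bird(tweety).  bird(tux).
    penguin(tux).

    flies(X) :- bird(X), not penguin(X).
clingo returns a single stable model containing flies(tweety) but not flies(tux), with no cuts or clause-ordering tricks required.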


I used Prolog in anger when I first started at a startup some years ago (why that was is another story), and while I found it interesting, it was really quite an ineffective experience for me.

Firstly, by being so declarative it was very easy to end up with O(a^n) algorithms by mistake, and secondly I found, at least in what I was doing, that the ordering of declarations would change program behaviour, which made it imperative in a very non-obvious way.

I also found it hard to debug and diagnose the code or find decent resources online for it.

Perhaps I didn't do a good job with it or was missing details on exactly how best to use it but I found it was almost too declarative to the point of the rubber never hitting the road in a controllable way.


I had the same problem when trying out Haskell:

it gives abstractions that are not constant-factor multipliers in terms of speed or memory usage compared to a low-level solution (much like garbage collection and dynamic typing).

Rust gives just the opposite type of abstraction over C++: memory safety without adding any unpredictability in execution (and of course updates in the language design).

While relational algebra (SQL) is equivalent to logic algebra, it's easier to reason about how it executes.


You can still get tripped up in SQL (accidentally doing a table scan, etc.), but there is explicit tooling to help you analyse that - e.g. looking at the execution plan.

Haskell I can't comment on, though I can totally see that happening there; there might be better means of managing complexity and giving hints to the actual implementation.

With Prolog, the means of expression seems capable only of posing the 'question', rather than indicating that it should not be implemented in the most naive possible way.

As interesting and mind-bending a language as Prolog is, I just couldn't get past that, and it has made me think it's just not a practical language to use for anything real. But I am happy to be proven wrong!


"The Power of Prolog" is a youtube channel for learning about Prolog https://www.youtube.com/channel/UCFFeNyzCEQDS4KCecugmotg/vid...


> To be honest, at the moment such problems are too rare to justify learning Prolog. But I don’t believe that this has to remain so.

And then the whole following section on unfulfilled potential... Is there any reason to believe the paradigm will somehow come into its own in the future? The way this question was addressed by the article was way too wishy-washy for my taste.


I wrote about my experience here, and linked to an older article with related observations about why Prolog is a niche technology: https://news.ycombinator.com/item?id=25653425


I wonder... is it still justified to learn Prolog now? Aren't there better alternatives for logic programming in many other common programming languages? I mean http://minikanren.org/


How is this a better alternative?

A major attraction of Prolog is that one can remove almost arbitrary parts of any pure program and still infer useful conclusions that hold for the original program. For example, one can remove a clause and thereby make the program more specific. One can remove a goal and thus make the program more general.

This is possible due to the way the language is designed; the interplay of syntax and semantics enables it. The program fragments serve as explanations that can be shown to programmers to pinpoint mistakes in the code. Thus, when narrowing down mistakes, it becomes possible to focus on smaller portions of the code that are therefore easier to understand.
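A concrete sketch of what that means (my example, not the parent's):

    even_length([]).
    even_length([_,_|Ls]) :- even_length(Ls).
Remove the second clause and the program succeeds only for []: every remaining solution is still a solution of the original, so the program has become more specific. Remove instead the goal even_length(Ls) from the second clause and the program now succeeds for any list of length 0 or at least 2, a superset of the original solutions, so it has become more general. Conclusions drawn about such fragments carry over to the original program, which is what makes this style of debugging work.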


>> For example, one can remove a clause and thereby make the program more specific. One can remove a goal and thus make the program more general.

Woa. I'm pretty sure that the subsumption order and the implication order are not common knowledge among Prolog programmers. Have you been reading Plotkin by any chance? :)


I meant "better" in the sense that one can do logic programming from whatever language one is familiar with already, instead of installing and learning a whole new ecosystem. We would still have to learn logic programming.


Why wouldn't that be the case for e.g. LogicT?

Prolog is _larger_ than LogicT. It is the additional syntax and runtime system in a custom Prolog implementation that needs to be defended, not the lack of it!


miniKanren is great. Here[1]'s Will Byrd on the difference between it and Prolog.

[1] http://minikanren.org/minikanren-and-prolog.html


I've spent time trying to wrap my head around ideas in miniKanren and move them out of Scheme. But while reading and playing with miniKanren has been quite illuminating, it has yet to help me solve many "real" naturally occurring problems. And indeed, some of the motivations for miniKanren are aesthetic or philosophical stances rather than pragmatic ones (wanting to avoid having to use the cut operator, etc.). Most of the work around miniKanren seems aimed more at piloting approaches or techniques which could be used to produce useful tools than at actually producing useful tools.

But perhaps someone who has used languages in this area more broadly than I have can comment: which is a better bang for your buck?

- Prolog
- "Functional logic languages", e.g. Curry, Mercury
- Clojure + core.logic
- Scheme + miniKanren


There is also LogicT: http://hackage.haskell.org/package/logict

Since you'll eventually need some business logic, I don't see much point in Prolog either nowadays. Prolog-like code is a tiny part of your app, no need for a dedicated syntax. Do-notation is sufficient.


>> I wonder... is it still justified to learn Prolog now?

Unfortunately, no. There was a great push to develop the theory of logic programming, based on resolution theorem proving, back in the '70s and '80s. This effort culminated in the design of Prolog, a general-purpose logic programming language with a resolution theorem prover. Shortly after, the AI winter of the '80s hit and research in logic programming was severely disrupted. Research and progress continued, but at a much slower pace, and the earlier heights have never been reached since. Consequently, we don't have a "better Prolog" because there are very few people who can even imagine what such a "better Prolog" would look like, let alone design and implement it. Alas, we have run out of Robinsons, Kowalskis, Colmerauers, and so on.

Another reason of course is that Prolog is a very fine-tuned language that gets many things right because its design makes very sensible trade-offs. For example, the much-maligned depth-first search and the consequent significance of clause ordering, the absence of an occurs check, negation-as-failure, extra-logical features such as the dynamic program database, etc., are all pragmatic choices that balance the need for a general-purpose language that works with aspirations of theoretical purity. That is to say, there is nothing fundamentally broken about Prolog; it's actually a very good logic programming language, and it's just not that easy to design a radically better logic programming language, especially one based on resolution.
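To make one of those trade-offs concrete (my example): by default, unification omits the occurs check for performance, which can build cyclic terms, but the ISO built-in unify_with_occurs_check/2 is available when soundness matters:

    ?- X = f(X).
    X = f(X).     % succeeds in systems supporting rational trees, e.g. SWI

    ?- unify_with_occurs_check(X, f(X)).
    false.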

Regarding miniKanren, my understanding is that it's not meant as a "better Prolog"; rather, it's meant as a small, embedded Prolog (and here by "Prolog" I mean a logic programming language based on resolution, in the same sense that Scheme is "a Lisp"). It's meant for programmers of imperative languages who would like to integrate logic programming functionality into their applications. MiniKanrens (there are many) can essentially be imported as libraries that allow logic programming in a syntax closer to the embedding language. The alternative would be either to switch entirely to Prolog (which of course is a huge decision) or to write a buggy, slow version of Prolog in the programmer's preferred imperative language.

Minikanrens have various differences with Prolog, such as absence of side-effects and a dynamic database, a different search strategy, occurs check, etc. but these are not unmitigated improvements. They are different trade-offs than the ones chosen by Prolog.

Note however that resolution-based languages like Prolog are not the only logic programming languages. For two prominent examples of alternative approaches, see Datalog and Answer Set Programming. Both sacrifice Turing-completeness but offer, e.g., decidability, the ability to benefit from the power of modern SAT solvers, classical negation, etc.

Again, there are trade-offs. The thing to understand is that theorem proving -proving the truth or falsehood of statements in a formal language- is damn hard, even with computers, and there are always going to be trade-offs when the purpose is to have a real-world application.


If you want to build or prototype any sort of complex text-manipulation program (parser, interpreter, compiler, etc.), or you have a desire to write purely functional list-processing programs, Prolog is by far the best in terms of flexibility and ease of use. DCGs are miles ahead of everything else in this area, which is not surprising, since that was exactly the expected use case when Prolog was designed.
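For instance, a minimal DCG sketch of my own (assuming between/3, available in SWI-Prolog and most other systems): a grammar for runs of digits that, driven by phrase/2, can parse and also generate:

    digits([D])    --> digit(D).
    digits([D|Ds]) --> digit(D), digits(Ds).

    digit(D) --> [D], { between(0'0, 0'9, D) }.

    % Parsing (in SWI-Prolog, backquotes denote code lists):
    % ?- phrase(digits(Ds), `2021`), number_codes(N, Ds).
    % N = 2021.
Putting the non-recursive clause first also lets the same grammar enumerate digit strings on backtracking.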


This inspired me. What's the best book for modern Prolog?


I recently started going through the book Thinking as Computation: A First Course by Hector Levesque, and I am finding it rather nice. It goes slow, but I like slow. It uses Prolog as the computational implementation of the thesis that presents thinking as computation.

https://mitpress.mit.edu/books/thinking-computation

Other books I have but haven't gone through yet are Adventure in Prolog and The Art of Prolog.


I liked "The Art of Prolog" for learning. And then if you dive deeper, "The Craft of Prolog". I'm curiously amused that the price for used versions on Amazon is so high.

https://mitpress.mit.edu/books/art-prolog-second-edition

https://isbn.nu/0262192500

https://mitpress.mit.edu/books/craft-prolog

https://isbn.nu/9780262512275


I did some further research and it seems like The Art of Prolog gets the most love even though it's from the mid-nineties - and still hella expensive like you say!

The only thing I'm wondering about is whether it skips any important developments made in the last 25 years, but I guess I can always jump into the more up-to-date online resources afterwards.


The Art of Prolog is available as a free download (PDF) on the MIT Press website under "Open Access".

AFAIK the biggest item that’s missing is constraint logic programming. Markus Triska discusses CLP in his Power of Prolog book.


Thank you for pointing that out.




The blog post refers to this:

https://www.metalevel.at/prolog


By book, I meant a dead tree version that I can read in bed or in a sofa. I'm not great at taking in large amounts of new information from a website. Sorry if I wasn't specific on that.


For starting out I'd recommend Clause and Effect by Clocksin. It's a very short but practical introduction with really good examples along the way (including symbolic differentiation and FFT).


It might be easier to learn Prolog if non-essential costs were removed, by using a more familiar language as a wrapper[1], or for metaprogramming. Prolog's value lies in its database core. But much of the cost of learning a new language is elsewhere: in dealing with yet another different set of ways to do familiar things. With Prolog, those differences are relatively uninteresting, so why not punt?

Not having to deal with "how do I read a CSV file in Prolog?" and such frees time for richer ideas - for example, that Prolog's default search is a poor fit for most problems, but Prolog is a nice language for writing problem-appropriate search.

[1] https://github.com/yuce/pyswip


That's my hunch too. Prolog is a different world, and you still want to keep one foot in more mainstream languages. The idea should be to use Prolog where it gives real advantages and use mainstream languages where it doesn't. So how do you divide your application into those two "islands"? Answer: use persistent Datalog for database/persistence but do the rest in your bread-and-butter language.

Basically saying replace SQL with Datalog.

That is just what I think I'd like to do. I don't know what would be the most practical platforms to do that, which mainstream language + which Datalog database. Any recommendations?


No recommendations, sorry, merely a couple of thoughts.

Prolog and datalog start out as rather different beasts, though prolog tabling gets them closer. It's an area of current research[1].

A datalog implementation can serve as DSL, learning tool, and/or efficient solver of large problems. For the first two, most languages have a little datalog implementation or three.

For datalog engines, perhaps Z3 and its fixpoint engine from python?? Or maybe Soufflé from python?? But I really have no idea what's plausible. I too would be interested.

> database/persistence

That's something with a big diverse active market. Datalog engines, not so much. So I'm unclear that datalog is a place to go for that? But I also pay no attention to non-open-source offerings.

[1] https://arxiv.org/pdf/1909.08246.pdf


Because you will likely be glad you did. Then go look at PicoLisp, which has Prolog baked into it. https://picolisp.com/wiki/?home


Is Pilog just a mini logic-programming implementation, or what you would call a full Prolog engine?


Sorry, I'm not qualified to say. I cut short my dives into both Prolog and PicoLisp. Too many segfaults (myfaults) in PicoLisp. I need more hand-holding or time. I'm keeping my Prolog books though. Will return to them _someday_. Firmly of the opinion that if I could learn one prog-lang Matrix-style (plugged into my neck), it would be PicoLisp. Then I'd port it to run on the BEAM.


I also like the concept of picolisp, but ran into a lot of segfaults myself and eventually gave up. The learning material needs more handholding for sure. Also, I'll never be able to run that in production at my place of employment, so I could only use it for hobby projects.


Going to give a shout out to "PiCat" as a gentler introduction to getting your Prolog brain installed.

http://picat-lang.org/


I've wondered this a few times myself - I like learning languages (even if not directly applicable to paid work) and Prolog has been on the list, but I wonder if the concepts are better left on their own rather than packaged into a DSL.

If I understand constraint propagation and backtracking, why not just use those concepts when they fit the problem, rather than an entirely new language meant specifically for that narrow range of problems?


Yeah, I think that is a better solution. Somebody who worked on the Mercury logic language for 10 years said the same thing here:

https://old.reddit.com/r/ProgrammingLanguages/comments/9kb9z...

I have worked on and with Mercury for about 10 years. I had very little logic programming experience before this, while others tend to learn Prolog first.

I like Mercury, but not for the logic programming features, in practice they aren't used very often, there's not many parts of a program that require backtracking. So on reflection, I don't think it's something I'd include in a new language, it's not worth the implementation and maintenance costs. Not for most programs most of the time anyway.

On learning logic programming: It might sound like I'm discouraging anyone interested in this, far from it. Logic programming is a great paradigm to learn, another conceptual tool in your toolbox etc.

via https://news.ycombinator.com/item?id=18376593


Prolog excels at solving NP-complete problems (e.g. SAT) or combinatorial problems in general that would normally require polynomial-vs-exponential time to find a solution, whereas Prolog allows one to arrive at the same solution in linear-vs-quadratic time. It is not really possible to accomplish that with a DSL.

Another amusing property of Prolog is that a typical Prolog program can be run forward... and backward, i.e. it is possible to «ask» a question and receive an «answer», or, if the «answer» is known, the program can arrive at the set of all possible «questions». Hence the heavy use of Prolog to build expert systems and the like.
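A tiny demo of that (my sketch, using the standard length/2; answer formatting varies by system):

    % Question -> answer:
    ?- length([a,b,c], N).
    N = 3.

    % Known "answer" -> the shapes of matching "questions":
    ?- length(Ls, 3).
    Ls = [_, _, _].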

Back in the day, we used to giggle that if the answer 42 were fed into a Prolog problem solver to arrive at a full and exhaustive set of questions about the meaning of life, the universe and everything, it would instantly create a singularity at the point in time and space where the solver was run and henceforth result in the collapse of the universe unto itself.


I remember asking a similar question of my Prolog lecturer back in 2008 during my masters. He was visibly taken aback by it; I think he didn't expect it. It went something along the lines of "I've never heard of this language, does anyone even use it, what's the motivation for wasting a term on it?" (yes, it was one of my more douchebaggy moments...). He had to go do market research and come up with an answer in the next lecture, hahah.

Yet, once it all clicked, I loved it, and it opened up a new way of thinking for me, which was different from both imperative and, dare I say, functional paradigms.

I also learned that, while fairly specialised, it has very real industrial applications (e.g. there's a super important critical infrastructure component in Microsoft built entirely on prolog if I remember correctly). Though, admittedly, I think most prolog finds uses as 'pluggable components to larger non-prolog projects' these days.

Also, another interesting direction in Prolog is its fuzzy / probabilistic derivatives. Very cool stuff.


>> I also learned that, while fairly specialised, it has very real industrial applications (e.g. there's a super important critical infrastructure component in Microsoft built entirely on prolog if I remember correctly).

Was - in Windows NT. It was used for network configuration, and it was only partly programmed in Prolog:

Microsoft's Windows NT operating system uses an embedded Prolog interpreter to configure its local and wide-area network systems. Interdependent software and hardware components are abstracted into a simplified object-oriented framework using declarative information provided by each component's installation script. This information, along with the Prolog algorithm, is consulted into the embedded interpreter, which is then queried to construct the most usable configuration. The algorithm which solves the plumbing problem is described, including its positive and negative constraints, Prolog database and efficiency considerations. The results of the query, stored into NT's configuration database, inform each component of its load order and binding targets. A description of the C++ wrapper class is given and portation considerations are discussed. The Small Prolog interpreter is briefly described.

https://web.archive.org/web/20040603192757/research.microsof...

I'm not sure how "super important" that was to be honest. My impression has always been that the programmer in question simply wanted to use Prolog in his day job (a feeling I can very much sympathise with).


Logic programming is one of those paradigms where if you haven't learned it you're doomed to repeat its mistakes. As with Lisp, Prolog makes you a better programmer even if you never end up using it.


I've worked with Prolog and ASP (Answer Set Programming) in some planning/optimization projects. In my experience, ASP was more expressive in every case.

If you are seriously considering logic programming, please do not stop with Prolog; also take a look at ASP.

For anyone with a different experience than mine: what is your reason to stick with Prolog instead of using ASP?


Learning Prolog is a lot of fun! It will not take more than a day or two - it's simpler than Python.

Download SWI-Prolog, and get an editor that allows interactive programming with it. Emacs is of course the best for any obscure hacker language, but there may be plug-ins for other editors.

Predicates are like functions, but instead of returning a value you simply leave some parameter as a variable, and the "return value" is Prolog finding what the value of that parameter can be.
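For example (a quick sketch, runnable in any ISO Prolog):

    % A relation between a list and its length, rather than a length function.
    list_length([], 0).
    list_length([_|Ls], N) :- list_length(Ls, N0), N is N0 + 1.

    % "Call" it by leaving the parameter you want computed as a variable:
    % ?- list_length([a,b,c], N).
    % N = 3.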

Making web apps with the built-in libraries and library(persistency) is a lot of fun - you can make a real web app with a relational database with zero dependencies but Prolog. Programs are small. Highly recommended for anyone who loves programming.

The only things I wish for are purity - no effects, like Elm - and a type system rather than runtime type errors.


Which makes me wonder: is there a statically typed Prolog-like language out there?



I learnt Prolog in the late 80s at uni, when even then it was a pretty odd thing to be interested in.

It's nice that the community has kept going with it as an intellectual idea. I'm wondering what contributions it has made to the wider programming community. I guess things like the Warren abstract machine may have helped inform efficient methods of interpreting other languages, and the parallel Prolog investigations probably also helped other languages.

It's pretty clear to me that functional languages from the time (e.g. Miranda), which live on in Haskell, have had a massive impact on modern programming languages and on approaches to resolving complex threading problems. I've a feeling the Prolog innovations may be quite wide-reaching too, just maybe less obvious.


I caught Chris Martens' talk at Strange Loop 2013 on linear logic programming. Sadly, what I understood I've long since forgotten, but if you're interested in looking at some of the logic programming space outside Prolog itself, it's worth a gander.

https://www.infoq.com/presentations/linear-logic-programming...

https://thestrangeloop.com/2013/linear-logic-programming.htm...


This brings up memories from 7 years ago. While at uni, we had a homework assignment: "Searching in an infinite space using Prolog".

Unfortunately, the comments are in my native language, but the assignment was to search for a box in an infinite space and bring it back to (0, 0).

It was fun: https://github.com/mateioprea/Searching-In-An-Infinite-Space...


To save you a click:

- Because the author likes it.

- Because it's different.

- Because very few other people like it.

Is this really an effective pitch to students who are looking for an efficient way to use their limited resources?


The article is much more convincing than what you reduced it to.


I still do not understand why no one makes a Prolog that can handle the infinite. The calculation should be lazy and parallel instead of serial. I think this might solve confusing problems that arise with bigger Prolog programs.

For example, WRITE statements could be lazy and parallel too -- they would only start producing text when the answer is definite. Output would no longer happen in any particular sequence, but you must account for that and use appropriate headings.


When I hear "Prolog", it reminds me of the quote about Prolog:

"The elegant solution is not efficient. The efficient solution is not elegant"


On the other hand, Richard O'Keefe claims in _The Craft of Prolog_, that "elegance is not optional":

""" Elegance is not optional.

There is no tension between writing a beautiful program and writing an efficient program. If your code is ugly, the chances are that you either don’t understand your problem or you don’t understand your programming language, and in neither case does your code stand much chance of being efficient. In order to ensure that your program is efficient, you need to know what it is doing, and if your code is ugly, you will find it hard to analyse. """

https://programmingisterrible.com/post/39499310419/elegance-...


Common nitpicking, here for fun: both sentences kind of repeat one another, as they mean nearly the same thing. They could be expressed as one more symmetrical sentence, something like: "the set of efficient solutions and the set of elegant solutions are disjoint" (they have no common element).


"Efficiency like quantity has a quality of its own"


I learnt Prolog before so Erlang's syntax does not look so scary to me.


Prolog is a wonderful language to dabble in, if for nothing else than to learn how to think more logically.



We were taught Prolog in the late 1990s at university. Lisp or Scheme would have been a better choice.


Please improve the style of your webpage. It seems to just serve up the text directly, leading to far too many characters per line, and making it extremely difficult to read. Ideally your line length should be somewhere around 70 to 90 characters.


You could always resize your window to a comfortable width...


What are some good resources to learn Prolog?


Here are some resources:

https://github.com/klaussinani/awesome-prolog#resources

I recall going through the Adventure in Prolog one (free online resource) a while ago and enjoying it, helped direct a lot of exploration into the Prolog language and its capabilities for me. I have Clocksin & Mellish's Programming in Prolog (2003, 5th ed.) and liked it as a good introduction (or more expansion, I had dabbled in it previously and "knew" the language but not the full depth of it). I've read good things about The Art of Prolog by Sterling and Shapiro, but have not read it.


No.


A couple of years ago I hacked on a Python type inferencer someone wrote in Prolog. I wasn't enlightened, despite expecting to be after reading a bunch of HN posts like this.

https://github.com/andychu/hatlog

For example, can someone add good error messages to this? It didn't really seem practical. I'm sure I am missing something, but there also seemed to be a lot of deficiencies.

In fact I think I learned the opposite lesson. I have to dig up the HN post, but I think the point was "Prolog is NOT logic". It's not programming and it's not math.

(Someone said the same thing about Project Euler and so forth, and I really liked that criticism. https://lobste.rs/s/bqnhbo/book_review_elements_programming )

Related thread but I think there was a pithy blog post too: https://news.ycombinator.com/item?id=18373401 (Prolog Under the Hood)

Yeah this is the quote and a bunch of the HN comments backed it up to a degree:

Although Prolog's original intent was to allow programmers to specify programs in a syntax close to logic, that is not how Prolog works. In fact, a conceptual understanding of logic is not that useful for understanding Prolog.

I have programmed in many languages, and I at least have a decent understanding of math. In fact I just wrote about the difference between programming and math with regards to parsing here:

http://www.oilshell.org/blog/2021/01/comments-parsing.html#w...

But I had a bad experience with Prolog. Even if you understand programming and math, you don't understand Prolog.

I'm not a fan of the computational complexity problem either; that makes it unsuitable for production use.


Prolog is very bad to learn.


It can be hard to learn, as it's a bit mind-bending to think about writing programs more as objectives and relations than as imperative code. But it's certainly not bad to learn. It's helped me out a few times over the years, using it directly or using its ideas for a specific application (sort of Greenspun's tenth rule applied to Prolog).


Mind expanding on why you think this is so?


Only a handful of people can earn a living as academics, and we're in a time when many people are being "pushed" into credentialing up. Forcing people who are going back to school to further their professional careers to learn a language that isn't marketable makes the degree they're trying to earn worth less.

Please think harder about why you're offering this class and consider offering something else instead that will provide more value to the adults trying to compete in this flat world.


I've profitably used the domain modeling and program design lessons I've learned from Prolog multiple times, please and thank you. You don't need to use a language in production for it to be a valuable addition to your toolkit.



