I knew that Erlang was originally written in Prolog, but I somehow managed to never look at any Prolog code before. It explains so much about the structure and approach that Erlang took.
I am curious to explore how the Erlang interpreter was written, in light not of the similarities, but the subtle differences between the two languages. The use of -> was particularly interesting in comparison to how it is used in Erlang.
In Prolog (as with Haskell, OCaml, and others), you can define new operators with user-chosen meaning and precedence. ->, !, and some other operators used in Erlang were left undefined in Prolog specifically to support making DSLs. (That's also why Erlang code looks approximately like Prolog code with a bunch of extra operators added - it started life as a Prolog DSL.)
Prolog is a pretty good language for quickly prototyping interpreters.
Creating a simple, inefficient Prolog interpreter (that has most of the core functionality) is pretty easy (as far as implementing languages goes), especially if you're using another functional language to do it. I think it can be done in 100-200 lines of Haskell or ML pretty easily (although I haven't actually implemented one myself).
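To give a sense of scale, here's a minimal sketch of that kind of naive interpreter, in Python rather than Haskell/ML (all the names and representations here are my own invention, not any particular implementation): terms are tuples, variables are capitalized strings, and resolution is plain depth-first search with backtracking via generators.

```python
def is_var(t):
    # Variables are capitalized strings; everything else is a constant.
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    # Chase variable bindings until we hit a non-variable or an unbound var.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst):
    # Return an extended substitution, or None on failure. (No occurs check.)
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        return {**subst, a: b}
    if is_var(b):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

def rename(term, n):
    # Give each clause use its own fresh variable copies.
    if is_var(term):
        return f"{term}_{n}"
    if isinstance(term, tuple):
        return tuple(rename(t, n) for t in term)
    return term

def solve(goals, rules, subst, depth=0):
    # Depth-first resolution with chronological backtracking.
    if not goals:
        yield subst
        return
    first, rest = goals[0], goals[1:]
    for head, body in rules:
        head = rename(head, depth)
        body = [rename(b, depth) for b in body]
        s2 = unify(first, head, subst)
        if s2 is not None:
            yield from solve(body + rest, rules, s2, depth + 1)

# parent/2 facts plus a grandparent/2 rule, Prolog-style:
rules = [
    (("parent", "tom", "bob"), []),
    (("parent", "bob", "ann"), []),
    (("grandparent", "X", "Z"), [("parent", "X", "Y"), ("parent", "Y", "Z")]),
]
# ?- grandparent(tom, Who).  =>  Who = ann
```

No indexing, no cut, no occurs check - which is exactly why the simple version is short and the efficient version (the WAM) is a different project entirely.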
Implementing an efficient Prolog involves implementing the Warren Abstract Machine (WAM) [0]. I just implemented a very small and restricted subset of the WAM with a friend for a class final, and doing that is much harder. (Largely because there's not a whole lot in the way of documentation, and even the book that's supposed to serve as a WAM tutorial [1] is light/vague on some of the trickier implementation details.)
Another major direction for practical Prolog implementation is constraint programming. I've been studying that off-and-on for a while, and Daniel Diaz's papers on clp(FD) (http://cri-dist.univ-paris1.fr/diaz/publications/) and adding CLP to the WAM seem to be a good intro.
As silentbicycle mentions below (*grin*), I enjoy pointing out miniKanren as a purely functional approach to building a Prolog-like system. Even at ~200 LOC the original Scheme is surprisingly efficient. My optimized Clojure version at ~1000 LOC is already closing in on SWI-Prolog on logic programs that involve a lot of unification and uninstantiated variables (such as the classic Zebra puzzle).
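For anyone wondering what "purely functional" means concretely here, a toy Python rendition of the core idea (the combinator names follow the usual miniKanren vocabulary; this is my own sketch, not the Scheme or Clojure code mentioned above): a goal is a function from a substitution to a lazy stream of substitutions, and you build programs by composing goals.

```python
# Goals are functions: substitution -> generator of substitutions.

def var(name):
    return ("var", name)

def is_var(t):
    return isinstance(t, tuple) and len(t) == 2 and t[0] == "var"

def walk(t, s):
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(a, b, s):
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if is_var(a):
        return {**s, a: b}
    if is_var(b):
        return {**s, b: a}
    return None  # (compound-term unification elided for brevity)

def eq(a, b):
    # Succeed once if a and b unify, else fail (yield nothing).
    def goal(s):
        s2 = unify(a, b, s)
        if s2 is not None:
            yield s2
    return goal

def disj(g1, g2):
    # Succeed if either goal does. (Real miniKanren interleaves the streams.)
    def goal(s):
        yield from g1(s)
        yield from g2(s)
    return goal

def conj(g1, g2):
    # Thread each answer of g1 through g2.
    def goal(s):
        for s1 in g1(s):
            yield from g2(s1)
    return goal

q = var("q")
drink = disj(eq(q, "tea"), eq(q, "coffee"))
answers = [walk(q, s) for s in drink({})]   # ["tea", "coffee"]
```

No mutation anywhere - failure is just an empty stream, and backtracking falls out of lazily consuming the generators.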
The high water marks for efficient Prolog implementations are Peter Van Roy's work on Aquarius and The Mercury Programming Language.
Mercury is a qualitatively different language, by the way - it adds static typing (which I'm ambivalent about, in a LP language) and AFAICT removes the ability to work with partially-instantiated data, which is one of my favorite aspects of Prolog.
Partially-instantiated data is cool/fun, but I'm starting to question its utility in programs you would actually employ for something useful. Don't hold me to that though :)
They're a good compromise between mutable and fully immutable data structures? "Oh, that? It was always there, we just didn't know about it..." Easy example: Incrementally appending to a list of conses without reversing it in the process.
Better example of working with partial information: Constraint programming.
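To make the list-with-a-hole example concrete, here's a toy Python simulation (in a real Prolog the hole is a genuinely unbound variable; the `Hole` class below is just my illustration): the tail of the list is an unbound placeholder that gets bound on each append, so the list is built front-to-back with no reversing or copying.

```python
class Hole:
    """Stand-in for an unbound logic variable: bound at most once."""
    def __init__(self):
        self.value = None  # None = still unbound

def append_via_hole(hole, item):
    # Bind the current tail hole to (item, fresh_hole); return the fresh
    # hole. The front of the list never moves, so each append is O(1)
    # and the list comes out in order - no final reverse needed.
    fresh = Hole()
    hole.value = (item, fresh)
    return fresh

def to_list(hole):
    # Read off the bound prefix, stopping at the first unbound hole.
    out = []
    while hole.value is not None:
        item, hole = hole.value
        out.append(item)
    return out

front = Hole()   # the whole (partially instantiated) list
tail = front     # the current hole at its end
for x in [1, 2, 3]:
    tail = append_via_hole(tail, x)
# to_list(front) == [1, 2, 3], built without reversing
```

The "it was always there, we just didn't know about it" feeling is the single assignment: the structure never changes after a hole is bound, it only becomes more known.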
Depending on how large a fragment of Prolog you want to implement, it can be quite simple. SICP chapter 4.4 (http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-29.html...) describes a simple interpreter written in Scheme for a Prolog-flavoured language (without cut, and with no parser).
Strictly speaking, SICP's amb implements nondeterminism (via backtracking), not Prolog. Prolog also needs unification (a more powerful form of pattern-matching), at the very least. (EDIT: Forgot about the next section, with Logic Programming.)
Many Lisp books implement Prolog. Try PAIP and/or On Lisp. PAIP puts more time into implementing it well, but On Lisp covers the main ideas. I recommend both, regardless.
Amb is in SICP section 4.3 ("Nondeterministic computing"); section 4.4 is called "Logic Programming" and is implemented using streams rather than amb. I agree that On Lisp, which does it via amb, is also nice!
Among the basic Prolog interpreters is the one in the later chapters of our patron Paul Graham's On Lisp, available online for free. Graham writes it with Common Lisp macros and macro-based continuations all in about 100 expertly described and lovingly documented lines of CL.
It's called "mini-kanren", see also William Byrd's thesis. Or wait for swannodette to show up. :) He's writing a Clojure port: http://github.com/swannodette/logos
This was like 3 years ago, so I don't exactly remember, but it was something like a month-long assignment. So I started it like 3 or 4 days before, and coded furiously.
The professor has taken down the course website, or I'd point you right to the assignment.
Really, my sibling commenters have described it well: it's probably between 200-600 LOC, getting a simple implementation is easy but getting a performant one is hard, etc.
I did some Prolog in a logic programming course at university a long time ago. Back then, the hype was that Prolog was the next thing for artificial intelligence and reasoning.
It seems it didn't really live up to that promise -- 'analog' approaches such as SVMs and Bayesian networks have turned out to be much better than logical reasoning for most real-world AI applications.
Why use Prolog (except out of curiosity)? Is it easier to write some kinds of programs in it -- ones useful in the real world, not backtracking AI for simple games?
What annoyed me most about Prolog: the syntax you type at the command line (assert, etc.) is quite different from what you put in a file and then load.
Sterling and Shapiro's _The Art of Prolog_ is wonderful, on par with (say) SICP. It focuses as much on the logic programming model as Prolog proper.