Prolog, where you just type what you want to achieve, and it goes ahead and does it for you. I really couldn't wrap my head around it for some time. I read up on backtracking and chased around derivations on paper to satisfy myself that something like append really worked.
If you're asking "are there potentially awesome applications of this?" then the answer is probably "yes".
If you're asking "is there anything in the wild that I would know of and uses this?" then the answer is probably "no".
Most of the Prolog projects I've ever seen are pretty inbred: implementing compilers or interpreters, hooks into other languages, that sort of thing. Though it must be said that I'm not deeply embedded in the Prolog community.
In my 4th-year university project we worked on an expert system for a card game in Prolog... For doing expert-system-style AI, Prolog is pretty good...
It tells you that Prolog's list append function has far more general applicability than just appending lists. Because of the way Prolog is designed, you can use that same definition to do destructuring pattern matches on lists - to find differences, etc.
It means the tools you write for one use can be used completely differently than you intended, and in ways that continue to give you those "whoa!" moments.
As has already been mentioned, that's not a function; it's a predicate, denoting a relation among three lists. Specifically, it tells the Prolog runtime that appending a list to the empty list yields just that list, and that the resultant list and the first list argument share the same prefix. Another thing to note is that append piggy-backs on Prolog's cons-cell representation of lists, so the append example is a bit deceiving. However, the power of Prolog should not be overlooked. For example, the query append(X, Y, [1, 2, 3]) enumerates all possible values of X and Y that append to [1, 2, 3] - something much harder in other languages.
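Prolog runs such queries natively via unification and backtracking; as a rough sketch in Python terms (the generator name here is mine, purely for illustration), append(X, Y, [1, 2, 3]) behaves like enumerating every way to split the list in two:

```python
def append_splits(zs):
    """Rough analogue of the Prolog query append(X, Y, zs):
    yield every pair (xs, ys) such that xs + ys == zs."""
    for i in range(len(zs) + 1):
        yield zs[:i], zs[i:]

# append(X, Y, [1, 2, 3]) has four solutions, one per split point:
for xs, ys in append_splits([1, 2, 3]):
    print(xs, ys)
```

The Prolog version needs no such special-purpose enumerator: the same three-line append definition answers this query, plain appending, and prefix/suffix checks, all from one relation.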
An extremely (imho) good source for information on logic programs and how they are implemented is here: http://www.ida.liu.se/~ulfni/lpp/
A more practical guide to actually compiling these logic programs efficiently (or as efficiently as they can be) is Modern Compiler Design by Dick Grune et al. They describe the workings of a somewhat simplified Warren Abstract Machine (WAM), which can (to my knowledge) now come within an order of magnitude of standard programming languages. Pretty amazing when you consider the generality of logic programming. Details on the WAM here: http://en.wikipedia.org/wiki/Warren_Abstract_Machine
It keeps happening if one keeps learning, dreaming, and building in general. I am just saying that no matter how many amazing things you've discovered, there's always more in store. Just never stop looking.
And Hal Abelson knows how to give fledgling programmers that Eureka moment in spades. This article mirrors my experiences very closely -- I came into the class Spring 1998 with BASIC and assembly experience, thinking I knew how to program.
SICP couldn't be worth any more if it were printed on gold leaf, but having Abelson and Sussman there to teach it really seals the deal.
It is very much like paper. (The display area is smaller than the average hard-cover book, but that doesn't bother me too much.) The only thing I don't like about the Kindle is that you can't skim to get an overview of the book or newspaper... you basically have to use the table of contents or read it in order. Not so bad for books, kind of annoying for local news.
(The killer feature of the Kindle, though, is the effortless buying feature. When you have to pay $30 and go to the store or wait 5 days for shipping, you tend to not buy books. When you can click a button and start reading on a paper-like screen instantly, it becomes much easier to read all the books that you've been "meaning to" read. But I digress.)
Someone should really make a hybrid E-Ink/LCD reader. A transparent LCD overlay (with backlight) for skimming, that switches off after you stare at the same page for a few seconds and re-displays the current rendered page on the E-Ink display underneath.
Wish there were a way to read it online, but The Little Schemer http://www.ccs.neu.edu/home/matthias/BTLS/ made the first couple of chapters of SICP a little easier for self-study.
I loved this same course (spring '87 for me), but I wasn't impressed with the speed. I came away thinking Scheme was for fun and C was for practical programming. I've come around since then, and I don't have any performance concerns about my Scheme-based web site.
OurDoings is it. I still use the "wishful thinking" programming methodology I learned from SICP. I'm married with 3 kids and a separate day job, and don't think I could have pulled this off using a less programmable programming language.
That's cool. Could I interview you about developing web apps using Scheme? I'm working on a blog post / interview on the topic w/ all the people I know who do Scheme web development.
Anyway if you are interested I'd love to hear from you, my email is: idoh@idoh.com
In general, you can do this in any language which lets you pass functions around as values, and which supports closures. And languages in which you can fake it somehow.
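For instance, the derivative trick can be sketched with a closure in Python (a numeric finite-difference stand-in rather than the symbolic version, and the names are mine): the returned function closes over both the original function and the step size.

```python
def derivative(f, dx=1e-6):
    """Return a new function approximating f's derivative.
    The inner function is a closure over both f and dx."""
    def df(x):
        return (f(x + dx) - f(x - dx)) / (2 * dx)
    return df

d_square = derivative(lambda x: x * x)
print(d_square(3.0))  # close to 6.0
```

Any language with first-class functions and closures supports this pattern directly; languages without them have to fake it with objects that carry the captured state.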
An excellent recent book is Peter Seibel's Practical Common Lisp. http://www.gigamonkeys.com/book/ The approach is what you might guess from the title, so it addresses both the fun of Lisp and the building of practical applications (including web apps).
Funny how tail recursion pretty much has "cute optimization hack" written all over it. It really doesn't feel at home in a first lecture on programming.
Tail recursion has profound theoretical implications, so it belongs there. If your language is properly tail-recursive, it completely changes the definitions of recursive and iterative.
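A sketch of what that means in practice (Python here purely for illustration; CPython does not eliminate tail calls, which is exactly what a properly tail-recursive Scheme guarantees): when the recursive call is the last thing a function does, the implementation can reuse the current stack frame, so the "recursion" is really iteration.

```python
def total(xs, acc=0):
    """Tail-recursive sum: the recursive call is the very last step,
    so a properly tail-recursive language reuses the current frame."""
    if not xs:
        return acc
    return total(xs[1:], acc + xs[0])

def total_loop(xs):
    """What a tail-call-optimizing compiler effectively emits:
    the tail call becomes a jump, i.e. a loop in constant stack space."""
    acc = 0
    while xs:
        acc, xs = acc + xs[0], xs[1:]
    return acc
```

In a properly tail-recursive language the first form is the second form; that is why Scheme needs no separate loop construct, and why the distinction is more than an optimization hack.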
Interesting first lecture. I think the great Oscar Waddell (a wizard though no longer in academia to my knowledge) wrote a Scheme program centered around beer for our first lecture at IU. Now that caught the attention of many students and I'm pretty sure the class /grew/ in size for the next lecture.
the four concerns the author lists at the end are interesting for what they say about where he's coming from. to me, the joy of programming is more about programming than speed or efficiency.
that he also put 'harder to understand' and 'capability' in the list is confusing. he must mean capability in some kind of raw sense, whereas i see capability as having abstraction and libraries so i can get to the meat faster. again, abstraction is usually easier to understand.
perhaps when meat is speed and efficiency then our perspectives merge. psyched about 16 Mb of RAM!
btw, scheme has recently been replaced by python in the intro programming courses. snap.
That derivative example is nice, but it reinforces that Lisp is a bit too "pure"/"mathy" for my taste (that and I really don't like the syntax). My equivalent of that derivative thing was probably the examples in "Why's Poignant Guide to Ruby", pretty syntax-highlighting colors and all.
And almost makes me want to invest in SICP or something similar to see what all this LISP-talk is about :)