Losh doesn't use Church numerals, though; he represents numbers as lists. The fact that lists are also represented as functions is a diversion: his number three doesn't represent a function f -> x -> x that returns the threefold composition of its first argument applied to its second.
I didn't get too far into the original article, but my impression is that he is using Scott encoding (essentially a 1-to-1 translation of pattern matching) instead of Church encoding (something equivalent to folds).
Scott encoding doesn't get much publicity, but it's perfectly valid and much more intuitive, IMO.
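A minimal sketch of what Scott encoding looks like in JavaScript, using an uncurried two-argument form for readability (names like `succ` and `toNumber` are mine, not from the article). A Scott-encoded natural is its own pattern match: the consumer supplies one branch per constructor, and any recursion is explicit in the consumer, unlike a Church fold where it's baked into the number.

```javascript
// Scott-encoded naturals: n(onZero, onSucc) picks a branch.
var zero = function (z, s) { return z; };
function succ(n) { return function (z, s) { return s(n); }; }

// Recursion lives in the consumer, not in the encoding:
function toNumber(n) {
  return n(0, function (pred) { return 1 + toNumber(pred); });
}

toNumber(succ(succ(zero))); // 2
```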
There must be some confusion here. For "the threefold composition of its first argument applied to its second" the type should be "(x -> x) -> x -> x".
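For comparison, here is a sketch of a Church numeral with that corrected type, `(x -> x) -> x -> x`: the number n takes a function and returns its n-fold composition (`toNumber` is my own illustrative helper).

```javascript
// Church numerals: n applies f to x exactly n times.
var ZERO  = function (f) { return function (x) { return x; }; };
var THREE = function (f) { return function (x) { return f(f(f(x))); }; };

// Recover a native number by counting the applications of f.
function toNumber(n) {
  return n(function (k) { return k + 1; })(0);
}

toNumber(THREE); // 3
```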
For anyone curious about the λ-calculus, and why the Y combinator plays an important role in its Turing-completeness, please check out the excellent Y-not series by Jim Weirich[0]
[0] http://www.neo.com/2012/11/13/y-not-adventures-in-functional...
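Since JavaScript is strictly evaluated, the textbook Y combinator diverges there; what actually runs is the call-by-value variant, often called the Z combinator. A sketch (not taken from the talk; `fact` is just the usual demo):

```javascript
// Z combinator: the applicative-order fixed-point combinator.
// The eta-expansion (v => x(x)(v)) delays evaluation so it terminates.
function Z(f) {
  return (function (x) {
    return f(function (v) { return x(x)(v); });
  })(function (x) {
    return f(function (v) { return x(x)(v); });
  });
}

// Anonymous recursion: factorial without naming itself.
var fact = Z(function (self) {
  return function (n) { return n === 0 ? 1 : n * self(n - 1); };
});

fact(5); // 120
```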
Turing-completeness, of course. After that, it's all a matter of allowing you to stay as focused as possible on the task at hand: some reasonable way to express your chosen paradigm, library support for things ancillary to your actual task, and some measure of support in the tools you find useful.
It actually can have some benefits -- a language without any unbounded loops or recursion isn't Turing-complete, but you do get a static guarantee of termination. NASA, for example, writes much of their C code only with bounded recursion. So they can't compute some things, but they can be sure that the Mars lander never gets stuck in an infinite loop.
"some measure of support" like a way of interacting with the outside world. The pure SKI calculus is Turing complete, but no amount of library code (also written in the pure SKI calculus) or syntactic sugar for expressing your favorite paradigm will let you write cat(1) in it.
If you're soft real-time, so that performance is a correctness issue, you need more than Turing-completeness. I don't know off the top of my head whether there's a name for a constant-slowdown universal computer, or whether such a thing is possible.
It's always cool to see an accessible demo of the lambda calculus, but to nitpick... Isn't it cheating a bit to say this doesn't use Object when, in JavaScript, the persistent arguments object exists and is even explicitly accessible? You're just hiding Object instantiation behind function calls, and using syntactic sugar to access the local object.
I'm not totally sold that objects are a "bigger" language feature than closures, conceptually.
The venerable master Qc Na was walking with his student, Anton. Hoping to
prompt the master into a discussion, Anton said "Master, I have heard that
objects are a very good thing - is this true?" Qc Na looked pityingly at
his student and replied, "Foolish pupil - objects are merely a poor man's
closures."
Chastised, Anton took his leave from his master and returned to his cell,
intent on studying closures. He carefully read the entire "Lambda: The
Ultimate..." series of papers and its cousins, and implemented a small
Scheme interpreter with a closure-based object system. He learned much, and
looked forward to informing his master of his progress.
On his next walk with Qc Na, Anton attempted to impress his master by
saying "Master, I have diligently studied the matter, and now understand
that objects are truly a poor man's closures." Qc Na responded by hitting
Anton with his stick, saying "When will you learn? Closures are a poor man's
object." At that moment, Anton became enlightened.
That is a much more eloquent version of what I was getting at, thank you :) To be clear, I don't really think closures are "bigger" conceptually either. I can't properly recall a time when I understood one and not the other... It may be that understanding of both came simultaneously, as a humble student in a single moment of enlightenment.
I'm having to reach back into the dustbin of my mind, but I seem to recall implementing an object system using only closures at one point. As I recall, it had all of the "normal" features of objects - inheritance, member variables, methods, etc. It was done with closures and the members were accessed in a message-passing style (in scheme, (myobject 'show) for example).
Based on that, I think that closures and objects are probably equivalent in their expressive ability. Could be wrong, though; like I said, it was a long time ago.
A very simple and elegant way to handle inheritance is to give each object a pointer to another object -- let's call it "prototype" -- and delegate any failed member variable or method lookups to the prototype. This is how JavaScript's object system works.
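The two ideas in this subthread combine into a short sketch: message-passing objects built from closures alone, with a "prototype" closure catching any failed lookup. All names here are illustrative, not from the article:

```javascript
// A "base object" is just a closure that dispatches on a message.
function makeBase() {
  return function (message) {
    if (message === 'show') return function () { return 'base#show'; };
    throw new Error('message not understood: ' + message);
  };
}

// Member variables are the captured x, y; unknown messages are
// delegated to the prototype closure, JavaScript-style.
function makePoint(x, y, prototype) {
  return function (message) {
    if (message === 'getX') return x;
    if (message === 'getY') return y;
    return prototype(message); // delegate any failed lookup
  };
}

var point = makePoint(3, 4, makeBase());
point('getX');   // 3
point('show')(); // 'base#show' -- resolved via the prototype closure
```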
It's better to think of map without thinking of looping. I think it's a good idea to think of it as converting a container of A to a container of B. It doesn't matter how it happens; the container might be empty, or only big enough to hold one item. Looping is beside the point.
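Concretely, with JavaScript's built-in Array.prototype.map: the shape of the container is preserved, only the elements change, and how (or whether) any traversal happens is irrelevant.

```javascript
// map: container of numbers -> container of numbers, shape preserved.
function inc(n) { return n + 1; }

[1, 2, 3].map(inc); // [2, 3, 4]
[].map(inc);        // [] -- an empty container maps to an empty container
[7].map(inc);       // [8] -- a one-element container needs no "loop" at all
```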
Seriously, what I need in a programming language depends on what I intend to achieve with it.
I use blocks in Objective-C and "closures" in Applescript.
to lambda(aStatment)
    script intern
        say aStatment
    end script
    return intern
end lambda
set a to lambda("applescript…")
set b to lambda("knows how to …")
set c to lambda("make a closure if not a lambda…")
tell a to run
tell b to run
tell c to run
Solid intro to the lambda calculus. I wasn't so into the JS examples, but after reading the other comments, I can totally see how a functional/mathematical syntax would have been very disorienting for someone not familiar with the lambda calculus. Since those people are obviously the target audience, JS seems fitting.
Having used scheme before, and gone through some of SICP, it was neat to see the topic expressed in JS. I think it came through fine, though I'll always have a love for Scheme!
I get that JavaScript is a popular language, but it seems wrong to bury a fundamental concept under cluttered and confusing syntax when it's much cleaner to explain the math using sensible notation.
Ugh. It's nice that Steve is reaching down to his audience, but it would be nice for the audience to step out of their muddy sandbox once in a while.
I have a degree in mathematics from the University of Chicago. Want to talk about self-adjoint algebras of compactly supported continuous functions? I'm your man.
I also think teaching this using mathematical notation would be a mistake. People only learn things when they relate in part to something they already know. The original lambda calculus syntax is not only abstract, it was invented before programmable computers existed.
Unless someone is either already comfortable with mathematical notation or has a background in a programming language with similar concepts (Lisp, Haskell, etc.), it's going to be a stretch to teach someone the lambda calculus -- really teach them -- unless it's framed in terms of a language they already know.
JavaScript is a great language to use for this because it has C-like syntax, which virtually all programmers understand, and first-class functions. You could use Ruby, too, but the syntax would be even more opaque.
The point is not that you can represent it better. The point is that you can actually run it. You can't run the original notation as such (we have languages that come close, of course).
I agree that the terser notation is better if you're just considering the mathematics in isolation. However, I also think that the true power of this kind of mathematics isn't made evident to people until you actually add some integers and make some lists... from "nothing"... in a language you already know.
It's the same reason he doesn't use lisp or haskell, which are, obviously, vastly more appropriate. The idea is to illustrate something using something people have readily at hand. Once you have the concrete representation of the idea, you can work backwards to whatever notation you desire and forwards again using that notation, understanding that, yes, it has a basis in reality.
It's the same reason people draw 2d graphs to represent machine learning concepts instead of solely relying upon linear algebra notation. Humans are limited in that way: we have to gain some kind of appropriate intuition, and then we can work with the abstractions after that. No one is going to argue that js is an appropriate way to develop the mathematics -- it's just a way to make it more concrete for people, that's all.
The grouping of expressions is clearer to many readers in the JS example. Unless you've spent a decent amount of time grokking the lambda calculus, the second line could be any of:
(λx. (λf. f)) x
(λx. (λf. f) x)
(λx. (λf. f x))
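The same three readings written out in JavaScript, where mandatory braces and parentheses force exactly one parse (the free `x` in the first reading is assumed bound in an enclosing scope):

```javascript
var x = 'free'; // stand-in for the free variable in the first reading

var r1 = (function (x) { return function (f) { return f; }; })(x); // (λx. (λf. f)) x
var r2 = function (x) { return (function (f) { return f; })(x); }; // (λx. (λf. f) x)
var r3 = function (x) { return function (f) { return f(x); }; };   // (λx. (λf. f x))
```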
The lambda calculus notation is weird. Using "." to separate parameter and body is unusual. The scope of the body isn't obvious.
JS is painfully verbose, but it's a verbose syntax many people have already put the time in to internalize.
Not everyone is used to reading that math notation. It's like saying, "Why put it in JS instead of assembler?" JS is probably the most universal language out there (at least in terms of being widespread).
I prefer this stuff in Python: less syntax, clearer. But then, it's the same thing: you imagine your (math) syntax is better, I think Python's is, somebody will complain "why not Haskell?", but JS is more practical. Or it's just the one the OP likes. That is probably the only and best reason.
> Not everyone is used to read that math notation.
Agreed, but it doesn't take more than a couple of sentences to describe the entire syntax of the λ-calculus. It's not hard to pick it up even if you've never encountered it before.
Contrast that with JavaScript, where even as someone who has written my fair share of it, my eyes tend to glaze over when they hit a string of `});}))());};`. Yet people happily write that sort of thing, then turn around and complain about nested parentheses in Lisp!
Encoding integers with lambdas is called the Church encoding: http://en.wikipedia.org/wiki/Church_numerals
Functional programming in general is built upon the foundations laid out by him.
Actually, the domain name of Hacker News (ycombinator) has a lot to do with the lambda calculus.