List out of Lambda (stevelosh.com)
136 points by motter on April 5, 2013 | 47 comments



For people who are interested: the theoretical foundation for this is the lambda calculus, created by Alonzo Church.

Encoding integers with lambdas is called the Church encoding: http://en.wikipedia.org/wiki/Church_numerals
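
Roughly, in JS (a quick sketch of the idea, not code from the article):

    // A Church numeral applies its first argument n times to its second.
    var zero = function(f) { return function(x) { return x; }; };
    var succ = function(n) {
      return function(f) { return function(x) { return f(n(f)(x)); }; };
    };
    var three = succ(succ(succ(zero)));
    // Convert back to a native number to check:
    var toInt = function(n) { return n(function(k) { return k + 1; })(0); };
    toInt(three); // => 3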

Functional programming in general is built upon the foundations laid out by him.

Actually, the domain name of Hacker News (ycombinator) has a lot to do with lambda calculus.


Losh doesn't use Church numerals, though; he represents numbers as lists. The fact that lists are also represented as functions is a diversion; his number three doesn't represent a function f -> x -> x that returns the threefold composition of its first argument applied to its second.


I didn't get too far into the original article, but my impression is that he is using Scott encoding (essentially a 1-to-1 translation of pattern matching) instead of Church encoding (something equivalent to folds).

Scott encoding doesn't get much publicity, but it's perfectly valid and much more intuitive, IMO.
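
Roughly, in JS, using naturals rather than lists (my own sketch):

    // Scott encoding: a number *is* its own pattern match.
    var zero = function(ifZero, ifSucc) { return ifZero(); };
    var succ = function(n) {
      return function(ifZero, ifSucc) { return ifSucc(n); };
    };
    // Predecessor is a one-liner here, famously awkward with Church numerals:
    var pred = function(n) {
      return n(function() { return zero; },  // clamp pred(zero) to zero
               function(m) { return m; });   // pred(succ(m)) is just m
    };
    pred(succ(succ(zero))); // => succ(zero)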



There must be some confusion here. For "the threefold composition of its first argument applied to its second" the type should be "(x -> x) -> x -> x".


Right, I was (stupidly, it was early!) just using "f" as a shorthand for a function type like x -> x.


For anyone curious about the λ-calculus, and why the Y combinator plays an important role in its Turing completeness, please check out the excellent Y-not series by Jim Weirich [0].

[0] http://www.neo.com/2012/11/13/y-not-adventures-in-functional...
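
If you want to play with it directly, here's the strict (call-by-value) variant, usually called Z, in JS (my sketch, not code from the talk):

    var Y = function(f) {
      var g = function(x) {
        // Eta-expand x(x) so the eager evaluator doesn't loop forever.
        return f(function(v) { return x(x)(v); });
      };
      return g(g);
    };
    var fact = Y(function(self) {
      return function(n) { return n === 0 ? 1 : n * self(n - 1); };
    });
    fact(5); // => 120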


And for a pretty nice demonstration of (untyped) lambda calculus, http://codon.com/programming-with-nothing


All you need is S and K - all the rest is syntactic sugar, even Y.

:-)


The SKI combinator calculus is indeed another demonstration of why the λ-calculus is Turing complete: http://en.wikipedia.org/wiki/SKI_combinator_calculus
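
Both combinators fit in a couple of lines of JS (a sketch of the standard definitions):

    var S = function(x) { return function(y) { return function(z) {
      return x(z)(y(z));
    }; }; };
    var K = function(x) { return function(y) { return x; }; };
    // Even the identity combinator is sugar: I = SKK.
    var I = S(K)(K);
    I(42); // => 42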


   "If you ignore the practical issues of computers like size, weight, cost, heat, 
    and so on, what do you really need in a programming language?"
One instruction

https://en.wikipedia.org/wiki/One_instruction_set_computer
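
For instance, SUBLEQ ("subtract and branch if less than or equal to zero") is enough on its own. A toy interpreter sketch in JS, assuming the common convention that a negative branch target halts:

    function subleq(mem) {
      var pc = 0;
      while (pc >= 0) {
        var a = mem[pc], b = mem[pc + 1], c = mem[pc + 2];
        mem[b] -= mem[a];               // the one instruction: mem[b] -= mem[a]
        pc = mem[b] <= 0 ? c : pc + 3;  // ...then branch if the result is <= 0
      }
      return mem;
    }
    subleq([3, 4, -1, 7, 5]); // mem[4] becomes 5 - 7 = -2, branch to -1 halts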


You could have run with a one-point basis: https://en.wikipedia.org/wiki/Combinatory_logic#One-point_ba...


Or just two integer variables, from which you can subtract one, add one, and compare to zero.


Turing-completeness, of course. After that, it's all a matter of allowing you to stay as focused as possible on the task at hand: some reasonable way to express your chosen paradigm, library support for things ancillary to your actual task, and some measure of support in the tools you find useful.


> Turing-completeness, of course.

Lots of useful programs can be written without Turing completeness. It can be useful, but it's only needed for a limited problem domain.


It actually can have some benefits: a language without any unbounded loops or recursion isn't Turing complete, but you do get a static guarantee of termination. NASA, for example, writes much of its C code with only bounded loops and recursion. So they can't compute some things, but they can be sure that the Mars lander never gets stuck in an infinite loop.


Have you got any sources on this? I can't seem to find any. It would be a nice read.


Here's the official NASA JPL C coding standard:

http://lars-lab.jpl.nasa.gov/JPL_Coding_Standard_C.pdf

Page 10 describes loop bounds and recursion limitations.


"Lots of useful programs can be written without turing completeness."

Including Bitcoin transactions - I learned this from elsewhere on HN today.

FYI: https://en.bitcoin.it/wiki/Script


Fascinating. What is this?


I saw a reference to Bitcoin transactions embedding executable content in another thread and was slightly taken aback:

https://news.ycombinator.com/item?id=5496740


"some measure of support" like a way of interacting with the outside world. The pure SKI calculus is Turing complete, but no amount of library code (also written in the pure SKI calculus) or syntactic sugar for expressing your favorite paradigm will let you write cat(1) in it.


If you're doing real-time work, where performance is a correctness issue, you need more than Turing completeness. I don't know off the top of my head if there's a name for a constant-slowdown universal computer, or if such a thing is possible.


It's always cool to see an accessible demo of the lambda calculus, but to nitpick... Isn't it cheating a bit to say this doesn't use Object when, in JavaScript, the persistent arguments object exists and is even explicitly accessible? You're just hiding Object instantiation behind function calls, and using syntactic sugar to access the local object.

I'm not totally sold that objects are a "bigger" language feature than closures, conceptually.


This celebrated discussion goes into some nice detail about object-closure equivalence:

http://people.csail.mit.edu/gregs/ll1-discuss-archive-html/m...

If you're too impatient, here's the punchline:

  The venerable master Qc Na was walking with his student, Anton.  Hoping to
  prompt the master into a discussion, Anton said "Master, I have heard that
  objects are a very good thing - is this true?"  Qc Na looked pityingly at
  his student and replied, "Foolish pupil - objects are merely a poor man's
  closures."

    Chastised, Anton took his leave from his master and returned to his cell,
  intent on studying closures.  He carefully read the entire "Lambda: The
  Ultimate..." series of papers and its cousins, and implemented a small
  Scheme interpreter with a closure-based object system.  He learned much, and
  looked forward to informing his master of his progress.

    On his next walk with Qc Na, Anton attempted to impress his master by
  saying "Master, I have diligently studied the matter, and now understand
  that objects are truly a poor man's closures."  Qc Na responded by hitting
  Anton with his stick, saying "When will you learn? Closures are a poor man's
  object."  At that moment, Anton became enlightened.


That is a much more eloquent version of what I was getting at, thank you :) To be clear, I don't really think closures are "bigger" conceptually either. I can't properly recall a time I understood one and not the other... It may be that understanding of both came simultaneously, as a humble student in a single moment of enlightenment.


I'm having to reach back into the dustbin of my mind, but I seem to recall implementing an object system using only closures at one point. As I recall, it had all of the "normal" features of objects: inheritance, member variables, methods, etc. It was done with closures, and the members were accessed in a message-passing style (in Scheme: (myobject 'show), for example).

Based on that, I think that closures and objects are probably equivalent in their expressive ability. Could be wrong though, like I said it was a long time ago.
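
For the curious, the same message-passing trick in JS (my sketch, not the original Scheme):

    // An "object" is a closure that dispatches on a message name.
    var makePoint = function(x, y) {
      return function(message) {
        if (message === "getX") return x;
        if (message === "getY") return y;
        if (message === "show") return "(" + x + ", " + y + ")";
        throw new Error("unknown message: " + message);
      };
    };
    var p = makePoint(1, 2);
    p("show"); // => "(1, 2)"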


I'm pretty sure this is an extended discussion/exercise in SICP.


Inheritance can be a bit tricky to fit in, as I recall, but it's quite elegant to implement basic classes and objects with just closures.


A very simple and elegant way to handle inheritance is to give each object a pointer to another object -- let's call it "prototype" -- and delegate any failed member variable or method lookups to the prototype. This is how JavaScript's object system works.
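
A sketch of that in the closure style from upthread (names are mine):

    var makeObject = function(members, prototype) {
      return function(message) {
        if (message in members) return members[message];
        if (prototype !== null) return prototype(message); // delegate failed lookups
        throw new Error("unknown message: " + message);
      };
    };
    var base  = makeObject({ greet: "hello" }, null);
    var child = makeObject({ name: "child" }, base);
    child("greet"); // => "hello", found via the prototype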


It's better to think of map without thinking of looping. I think it's a good idea to think of it as converting a container of A to a container of B. It doesn't matter how that happens, and the container might be empty, or only big enough to hold one item. Looping is beside the point.
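
For example (a throwaway JS sketch): the same map shape fits a container that never needs a loop at all:

    var mapArray = function(f, xs) { return xs.map(f); };                // many items
    var mapMaybe = function(f, x)  { return x === null ? null : f(x); }; // zero or one
    mapArray(function(n) { return n * 2; }, [1, 2, 3]); // => [2, 4, 6]
    mapMaybe(function(n) { return n * 2; }, null);      // => null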


Seriously, what I need in a programming language depends on what I intend to achieve with it.

I use blocks in Objective-C and "closures" in AppleScript.

    to lambda(aStatement)
    	script intern
    		say aStatement
    	end script
    	return intern -- return the script object so it can be run later
    end lambda
    set a to lambda("applescript…")
    set b to lambda("knows how to …")
    set c to lambda("make a closure if not a lambda…")
    tell a to run
    tell b to run
    tell c to run


Solid intro to the lambda calculus. I wasn't so into the JS examples, but after reading the other comments, I can totally see how a functional/mathematical syntax would have been very disorienting for someone not familiar with the lambda calc. Since those people are obviously the target audience, JS seems fitting.


Having used scheme before, and gone through some of SICP, it was neat to see the topic expressed in JS. I think it came through fine, though I'll always have a love for Scheme!


I'm a bit turned off by the syntax, but lambda calculus is always fun


great read, thanks for sharing.


I get that JavaScript is a popular language, but this buries a fundamental concept under cluttered and confusing syntax, when it's much cleaner to explain the math using sensible notation.

Ugh. It's nice that Steve is reaching down to his audience, but it would be nice for the audience to step out of their muddy sandbox once in a while.


I have a degree in mathematics from the University of Chicago. Want to talk about self-adjoint algebras of compactly supported continuous functions? I'm your man.

I also think teaching this using mathematical notation would be a mistake. People only learn things when they relate in part to something they already know. The original lambda calculus syntax is not only abstract, it was invented before programmable computers existed.

Unless someone is either already comfortable with mathematical notation or has a background in a programming language with similar concepts (Lisp, Haskell, etc.), it's going to be a stretch to teach someone the lambda calculus -- really teach them -- unless it's framed in terms of a language they already know.

JavaScript is a great language to use for this because it has C-like syntax, which virtually all programmers understand, and first-class functions. You could use Ruby, too, but the syntax would be even more opaque.

Here's someone building the lambda calculus using only Procs in Ruby: http://codon.com/programming-with-nothing

Then, using Ruby 1.9's "stabby lambda" syntax, you get

    -> x { -> f { f[x] } }
Not sure that's any better. The key is explaining it in terms of something the reader already understands well.

If they're not already thinking mathematically, a straightforward introduction to the lambda calculus would be ineffective.


Programming with Nothing is such an awe-inducing article. There was a conference talk by Jim Weirich (http://www.confreaks.com/videos/1287-rubyconf2012-y-not-adve...) using the same basic concepts. A nice-to-see for all Ruby/FP enthusiasts out there.


Nice to see a fellow Maroon on here!


The point is not that you can represent it better. The point is that you can actually run it. You can't run the original notation (though there are languages that come close, of course).

I agree that the terser notation is better if you're just considering the mathematics in isolation. However, I also think that the true power of this kind of mathematics isn't made evident to people until you actually add some integers and make some lists.... from "nothing"... in a language you already know.

It's the same reason he doesn't use Lisp or Haskell, which are, obviously, vastly more appropriate. The idea is to illustrate something using something people have readily at hand. Once you have the concrete representation of the idea, you can work backwards to whatever notation you desire and forwards again using that notation, understanding that, yes, it has a basis in reality.

It's the same reason people draw 2d graphs to represent machine learning concepts instead of solely relying upon linear algebra notation. Humans are limited in that way: we have to gain some kind of appropriate intuition, and then we can work with the abstractions after that. No one is going to argue that js is an appropriate way to develop the mathematics -- it's just a way to make it more concrete for people, that's all.


Agreed. I can't understand how something like

    function(x) {return function(f) {return f(x);};}
can be considered easier to comprehend than

    λx. λf. f x


The grouping of expressions is clearer to many readers in the JS example. Unless you've spent a decent amount of time grokking the lambda calculus, the second line could be any of:

    (λx. (λf. f)) x
    (λx. (λf. f) x)
    (λx. (λf. f x))
The lambda calculus notation is weird. Using "." to separate parameter and body is unusual. The scope of the body isn't obvious.

JS is painfully verbose, but it's a verbose syntax many people have already put the time in to internalize.


Put the brackets in the lambda syntax then. It would be unambiguous and still clearer than the Javascript syntax.


In current Firefox nightlies (and soon everywhere (i.e. Firefox AND Chrome)) it's just:

    x => f => f(x)
http://wiki.ecmascript.org/doku.php?id=harmony:arrow_functio...


Not everyone is used to reading that math notation. It's like saying, "Why put it in JS instead of assembler?" JS is probably the most universal language out there (at least in terms of how widespread it is).

I'd prefer this stuff in Python: less syntax, clearer. But then, it's the same thing: you think your (math) syntax is better, I think Python's is, somebody will complain "why not Haskell?", but JS is more practical. Or it's just the one the OP likes. That's probably the only and best reason.


> Not everyone is used to read that math notation.

Agreed, but it doesn't take more than a couple of sentences to describe the entire syntax of the λ-calculus. It's not hard to pick it up even if you've never encountered it before.

Contrast that with JavaScript, where even as someone who has written my fair share of it, my eyes tend to glaze over when they hit a string of `});}))());};`. Yet people happily write that sort of thing, then turn around and complain about nested parentheses in Lisp!



