
I like the article's discussion of algebraic operations and computer representation as well as how to implement the lambda operation. That's all abstract algebra and its extensions.

What I'm still mystified by when confronted by these discussions is the way the lambda calculus is considered equivalent to a Turing machine. Is it essentially that you have these complex variable substitution schemes which can "encode" a Turing machine's tape and transitions, or is there something more straightforward?




Philip Wadler provides an in-depth discussion of the Curry–Howard correspondence [0] in his paper Propositions as Types [1], and he has given several talks under the same name that are on YouTube.

The talk Prof Wadler gave at Strange Loop is the most cited, but the talk he gave at Stanford [2] is the most interesting because Carl Hewitt was in the audience and was challenging him on the merits and long-term viability of functional programming in general vs Hewitt's Actor Model [3] for scalable concurrent computing.

[0] Curry–Howard correspondence https://en.wikipedia.org/wiki/Curry–Howard_correspondence

[1] Propositions as Types [pdf] http://homepages.inf.ed.ac.uk/wadler/papers/propositions-as-...

[2] Propositions as Types (Stanford) [video] https://www.youtube.com/watch?v=ivzpaiw26wQ Slides [pdf] http://www.inf.ed.ac.uk/teaching/courses/tspl/pat.pdf

[3] Actor Model https://en.wikipedia.org/wiki/Category:Actor_model_(computer...


Philip Wadler also recently discussed this on the excellent programming podcast CoRecursive:

https://corecursive.com/021-gods-programming-language-with-p...


Turing himself proved them equivalent in his 1937 paper "Computability and λ-definability", which I've so far only found in image form:

http://www.turingarchive.org/browse.php/b/11


It is exactly that: you can simulate a Turing machine using lambda calculus and vice versa.

But the variable substitution schemes you mention are not complex at all. The main rule is beta-reduction, which is really simple: just replace every occurrence of the bound variable in the function's body with the argument.

     (λ x . (x y) x) m  →  (m y) m
                        β
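
To make that concrete, here's a rough sketch of substitution-based beta-reduction in JavaScript (the AST shapes and names are my own invention, and it's deliberately naive: it assumes all bound variable names are distinct, so it skips capture-avoidance):

```javascript
// Terms are plain objects: variables, lambdas, and applications.
const v = name => ({ tag: "var", name });
const lam = (param, body) => ({ tag: "lam", param, body });
const app = (fn, arg) => ({ tag: "app", fn, arg });

// subst(e, name, value): replace free occurrences of `name` in `e` with `value`.
function subst(e, name, value) {
  switch (e.tag) {
    case "var": return e.name === name ? value : e;
    case "lam": return e.param === name ? e : lam(e.param, subst(e.body, name, value));
    case "app": return app(subst(e.fn, name, value), subst(e.arg, name, value));
  }
}

// One beta step at the root: (λx. body) arg  →  body[x := arg]
function betaStep(e) {
  if (e.tag === "app" && e.fn.tag === "lam")
    return subst(e.fn.body, e.fn.param, e.arg);
  return e;
}

// The example above: (λx. (x y) x) m  →β  (m y) m
const term = app(lam("x", app(app(v("x"), v("y")), v("x"))), v("m"));
const reduced = betaStep(term); // now represents (m y) m
```

A full evaluator would just apply betaStep repeatedly, anywhere in the term, until no redexes remain.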


It's designed as a silly game for kids, but I still think Alligator Eggs is a good example of how fundamentally simple the Lambda Calculus substitution rules are (especially for people who want a more visual/metaphorical representation):

http://worrydream.com/AlligatorEggs/


I guess the tape is just a linked list of bits? What confuses me about lambda calculus (and a lot of math in general, I guess), as opposed to something like the Turing machine, is... how do I put this...

To actually implement Lisp or lambda calculus in real life you need "cons", right(?), and some way to increment your equivalent of an "instruction pointer". Otherwise you don't have any memory or information to operate on, and you don't have any way to move on to the next step. If you're doing it on paper, I guess the act of moving your pen to an empty space is your cons, and moving your eyes across the page is like incrementing your instruction pointer... In the idea of the Turing machine this is all explicitly explained, but every time I try to read about lambda calculus it seems like they just expect you to know it right away. This might seem really trivial to other people, but it's how I tend to think.


Lambda calculus (LC) isn't Lisp. LC does not represent its own notation as an object manipulated by lambda calculus, and it has nothing like a quote operator. To implement/simulate LC, you don't require any specific data structure like a cons cell; nothing in LC dictates one. You will need some sort of parser for the notation, and some run-time representation of functions (objects corresponding to the lambda terms), plus environments for the named parameters. Speaking of which, those names are not required to be anything like Lisp's interned symbols.
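
As a sketch of that last point — functions plus environments, no cons cells anywhere — here is a tiny environment-based LC evaluator in JavaScript (the AST shapes and names are my own ad-hoc choice, not anything canonical):

```javascript
// Terms are plain objects; environments are plain objects mapping
// parameter names to values.
const v = name => ({ tag: "var", name });
const lam = (param, body) => ({ tag: "lam", param, body });
const app = (fn, arg) => ({ tag: "app", fn, arg });

function evaluate(e, env) {
  switch (e.tag) {
    case "var": return env[e.name];                                         // look up a name
    case "lam": return arg => evaluate(e.body, { ...env, [e.param]: arg }); // close over env
    case "app": return evaluate(e.fn, env)(evaluate(e.arg, env));           // call-by-value
  }
}

// K combinator: (λx. λy. x) a b picks the first argument.
const K = lam("x", lam("y", v("x")));
const result = evaluate(app(app(K, v("a")), v("b")), { a: 1, b: 2 }); // → 1
```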


I don't know if this helps, but the way I see it, lambda calculus works like this: you assemble one expression in which you build all the things you need, and then evaluate it to yield a result. After that, everything vanishes from existence.

For example, here I'm building cons, car and cdr just to create a list, from which I'll get its second value:

    (cons => car => cdr =>

        // this block is my little world where I have lists
        car(cdr(cons(1)(cons(2)(cons(3)(4)))))  // → 2

    )(  // cons
        carValue => cdrValue => extractor => extractor(carValue)(cdrValue)
    )(  // car
        consValue => (consValue)(x => y => x)
    )(  // cdr
        consValue => (consValue)(x => y => y)
    )

(This is JavaScript-looking lambda calculus, so you can evaluate it in your browser's developer console.)


Lambda calculus is simple, mechanical symbol substitution. There’s no information or memory needed outside of the initial ‘program’ other than the substitution rules.

Read about Church encodings to see how numbers, pairs, conditionals, etc. are encoded.
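
As a taste of Church encoding, numerals represent n as "apply f to x, n times" — a quick sketch in the JavaScript-flavoured style used elsewhere in this thread (the toNumber helper is my own, just for inspecting results):

```javascript
// Church numerals: the number n is "apply f to x, n times".
const zero = f => x => x;
const succ = n => f => x => f(n(f)(x));
const add = m => n => f => x => m(f)(n(f)(x));

// Convert back to a host number by applying "+1", starting from 0.
const toNumber = n => n(k => k + 1)(0);

const two = succ(succ(zero));
const three = succ(two);
toNumber(add(two)(three)); // → 5
```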


I doubt it; substitution will blow up on self-referential tricks like the Y combinator, where a function receives itself as an argument: any attempt to substitute the raw lambda text for this argument leads to infinite regress (à la Droste effect).


It’s still just an infinite series of substitutions.


That's a failure to reduce, showing that the reduction by substitution is an incomplete evaluation strategy at best. The Y combinator works under call-by-value and other strategies; you can stick it in production code, if you're so inclined.
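
For the record, the call-by-value fixed-point variant is usually called the Z combinator: eta-expanding the self-application hides it behind a lambda, so strict evaluation doesn't loop forever. A quick JavaScript sketch:

```javascript
// Z combinator: call-by-value fixed point. The inner x(x) is eta-expanded
// to v => x(x)(v) so it isn't evaluated until actually called.
const Z = f => (x => f(v => x(x)(v)))(x => f(v => x(x)(v)));

// Factorial defined without referring to itself by name.
const fact = Z(self => n => n === 0 ? 1 : n * self(n - 1));
fact(5); // → 120
```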


You don't need cons, car, cdr, etc. They can all be implemented using lambda. For example:

  ;; excuse my possibly hybrid notation
  (defun cons (car cdr)
    (lambda (m)
      (if (eq? m 'car) car
          (if (eq? m 'cdr) cdr
              nil))))
  (defun car (obj) (obj 'car))
  (defun cdr (obj) (obj 'cdr))


This is neat, but doesn't it beg the question a little bit? You've used a conditional here in your definition. Can you define cond using only lambdas? (My apologies if the linked article does this, I haven't made it all the way through yet.)


Yes. You encode true as λx.λy.x and false as λx.λy.y and then (if p then a else b) becomes just (p a b).

See https://en.wikipedia.org/wiki/Church_encoding#Church_Boolean.... That page also has encodings of pairs and lists.
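
In the JavaScript-flavoured notation used elsewhere in the thread, that encoding is tiny (the names are mine):

```javascript
// Church booleans: true picks its first argument, false its second.
const TRUE = x => y => x;
const FALSE = x => y => y;

// (if p then a else b) is just p applied to the two branches.
const IF = p => a => b => p(a)(b);

IF(TRUE)("then-branch")("else-branch");  // → "then-branch"
IF(FALSE)("then-branch")("else-branch"); // → "else-branch"
```

One caveat: JavaScript evaluates both branch arguments eagerly, so for anything recursive you'd wrap the branches in thunks (`() => ...`).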


Whoa. That's really cool! Thanks for the link.


The tape is probably more easily represented as a lambda sending a number n to the content of the nth cell:

    initial_tape = λn. 0
    read tape idx = tape idx
    write tape idx x = λn. if n = idx then x else (tape n)
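
That pseudocode translates almost directly into JavaScript (a sketch, with my own naming):

```javascript
// A tape as a function from cell index to contents; every cell starts at 0.
const initialTape = n => 0;
const read = (tape, idx) => tape(idx);
// Writing returns a *new* tape that shadows just one cell.
const write = (tape, idx, x) => n => (n === idx ? x : tape(n));

const t = write(write(initialTape, 3, 7), 0, 1);
read(t, 3); // → 7
read(t, 5); // → 0
```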



