I like the article's discussion of algebraic operations and computer representation as well as how to implement the lambda operation. That's all abstract algebra and its extensions.
What I'm still mystified by when confronted by these discussions is the way the lambda calculus is considered equivalent to a Turing machine. Is it essentially that you have these complex variable substitution schemes which can "encode" a Turing machine's tape and transitions, or is there something more straightforward?
Philip Wadler provides an in-depth discussion of the Curry–Howard correspondence [0] in his paper Propositions as Types [1], and he has given several talks under the same name that are on YouTube.
The talk Prof Wadler gave at Strange Loop is the most cited, but the talk he gave at Stanford [2] is the most interesting because Carl Hewitt was in the audience and was challenging him on the merits and long-term viability of functional programming in general vs Hewitt's Actor Model [3] for scalable concurrent computing.
It is exactly that: you can simulate a Turing machine using lambda calculus and vice versa.
But the variable substitution schemes you mention are not complex at all. The main rule is beta-reduction, which is really simple: just replace every occurrence of the parameter in the function's body with the argument.
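To illustrate (using JavaScript arrow functions to stand in for lambdas, the same trick a later comment in this thread uses), a beta-reduction step is nothing more than textual substitution:

// Applying a function substitutes the argument for every occurrence
// of the parameter in the body:
//   (x => f(x)(x))(arg)   reduces to   f(arg)(arg)
const id = x => x;            // λx. x
const selfApply = f => f(f);  // λf. f f
// selfApply(id) reduces to id(id), which reduces to id:
console.log(selfApply(id) === id); // true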
It's designed as a silly game for kids, but I still think Alligator Eggs is a good example of how fundamentally simple the Lambda Calculus substitution rules are (especially for people who want a more visual/metaphorical representation):
I guess the tape is just a linked list of bits? What confuses me about lambda calculus (and a lot of math in general, I guess), as opposed to something like the Turing machine, is... how do I put this...
To actually implement Lisp or lambda calculus in real life you need "cons", right(?), and some way to increment your equivalent of an "instruction pointer". Or else you don't have any memory or information to operate on, and you don't have any way to move on to the next step. If you are doing it on paper, I guess the act of moving your pen to an empty space is your cons, and moving your eyes across the page is like incrementing your instruction pointer... With the Turing machine this is all explicitly spelled out, but every time I try to read about lambda calculus it seems like they just expect you to know it right away. This might seem really trivial to other people, but it's how I tend to think.
Lambda calculus (LC) isn't Lisp. LC does not represent its own notation as an object manipulated by lambda calculus, and it has nothing like a quote operator. To implement/simulate LC you don't require any specific data structure like a cons cell; there is nothing in LC to dictate it. You will need some sort of parser for the notation, some run-time representation of functions (objects corresponding to the lambda terms), plus environments for the named parameters. Speaking of which, those names are not required to be anything like Lisp interned symbols.
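As a rough sketch of how little is actually required, here is a tiny evaluator in JavaScript; the tagged-object term representation and the name `evaluate` are just arbitrary choices for illustration, not anything LC dictates:

// Terms are assumed to be already parsed into plain tagged objects
// (no cons cells anywhere):
//   { tag: 'var', name }           a variable
//   { tag: 'lam', param, body }    a lambda abstraction
//   { tag: 'app', fn, arg }        an application
// Environments are plain objects mapping parameter names to values.
const evaluate = (term, env = {}) => {
  switch (term.tag) {
    case 'var': return env[term.name];
    case 'lam': return value => evaluate(term.body, { ...env, [term.param]: value });
    case 'app': return evaluate(term.fn, env)(evaluate(term.arg, env));
  }
};

// Example: K = λx. λy. x
const K = {
  tag: 'lam', param: 'x',
  body: { tag: 'lam', param: 'y', body: { tag: 'var', name: 'x' } }
};
console.log(evaluate(K)(1)(2)); // 1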
I don't know if this helps, but the way I see it, lambda calculus works like this: you assemble one expression in which you build all the things you need, then evaluate it to yield a result. After that, everything vanishes from existence.
For example, here I'm building cons, car and cdr just to create a list and then pull out its second value (the whole expression evaluates to 2):
(cons => car => cdr =>
// this block is my little world where I have lists
car(cdr(cons(1)(cons(2)(cons(3)(4)))))
)( // cons
carValue => cdrValue => extractor => extractor(carValue)(cdrValue)
)( // car
consValue => (consValue)(x => y => x)
)( // cdr
consValue => (consValue)(x => y => y)
)
(This is JavaScript-looking lambda calculus, so you can evaluate it in your browser's developer console).
Lambda calculus is simple, mechanical symbol substitution. There’s no information or memory needed outside of the initial ‘program’ other than the substitution rules.
Read about Church encodings to see how numbers, pairs, conditionals, etc. are encoded.
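To make that concrete, here is a small sketch of a few Church encodings in JavaScript (arrow functions as lambdas again; toNumber is just a convenience decoder for printing, not part of the encoding):

// Church booleans: TRUE picks its first argument, FALSE its second,
// so a "conditional" is just the boolean applied to the two branches.
const TRUE  = t => f => t;           // λt. λf. t
const FALSE = t => f => f;           // λt. λf. f
const IF    = b => t => f => b(t)(f);

// Church numerals: the numeral n means "apply f n times to x".
const ZERO = f => x => x;
const SUCC = n => f => x => f(n(f)(x));
const toNumber = n => n(k => k + 1)(0);

console.log(IF(TRUE)('then')('else'));   // 'then'
console.log(toNumber(SUCC(SUCC(ZERO)))); // 2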
I doubt it; substitution will blow up on self-referential tricks like the Y combinator, where a function receives itself as an argument: any attempt to substitute the raw lambda text for this argument leads to infinite regress (à la Droste effect).
That's a failure to reach a normal form, which shows that blind reduction by substitution is an incomplete evaluation strategy at best. The Y combinator works fine under normal-order (lazy) reduction, and its eta-expanded variant, the Z combinator, works under call-by-value; you can stick it in production code, if you're so inclined.
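In a strict language you'd reach for the Z form; here's a sketch in JavaScript, with `fact` as a made-up example of using it:

// Z is a fixed-point combinator that works under call-by-value
// (the eta-expansion delays the self-application until it's needed).
const Z = f => (x => f(v => x(x)(v)))(x => f(v => x(x)(v)));

// Recursion without any self-reference in the definition itself:
const fact = Z(self => n => n === 0 ? 1 : n * self(n - 1));
console.log(fact(5)); // 120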
This is neat, but doesn't it beg the question a little bit? You've used a conditional here in your definition. Can you define cond using only lambdas? (My apologies if the linked article does this; I haven't made it all the way through yet.)