I was looking at the references at the bottom and saw `6`, and thought "Hygienic Macro Expansion? M.Fellinson? He must be referring to Matthias Felleisen...". Indeed: [1]
Doesn't mean I understood the rest of the paper... sigh :-p
I like the article's discussion of algebraic operations and computer representation as well as how to implement the lambda operation. That's all abstract algebra and its extensions.
What I'm still mystified by when confronted with these discussions is the way the lambda calculus is considered equivalent to a Turing machine. Is it essentially that you have these complex variable substitution schemes which can "encode" a Turing machine's tape and transitions, or is there something more straightforward?
Philip Wadler provides an in-depth discussion of the Curry–Howard correspondence [0] in his paper Propositions as Types [1], and he has given several talks under the same name that are on YouTube.
The talk Prof Wadler gave at Strange Loop is the most cited, but the talk he gave at Stanford [2] is the most interesting because Carl Hewitt was in the audience and was challenging him on the merits and long-term viability of functional programming in general vs Hewitt's Actor Model [3] for scalable concurrent computing.
It is exactly that: you can simulate a Turing machine using lambda calculus and vice versa.
But the variable substitution schemes you mention are not complex at all. The main rule is beta-reduction, which is really simple: just replace every occurrence of the parameter with the argument.
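For illustration, a tiny sketch of one beta-reduction step, using JavaScript arrow functions as lambdas (my own example, not from the paper):

  // by hand:  (λx. x x)(λy. y)  →  (λy. y)(λy. y)  →  λy. y
  // the same substitution, carried out by the JavaScript engine:
  const id = y => y;            // λy. y
  const selfApply = x => x(x);  // λx. x x
  console.log(selfApply(id));   // prints the identity function, i.e. λy. y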
It's designed as a silly game for kids, but I still think Alligator Eggs is a good example of how fundamentally simple the Lambda Calculus substitution rules are (especially for people that want a more visual/metaphorical representation):
I guess the tape is just a linked list of bits? What confuses me about lambda calculus, and a lot of math in general, I guess, as opposed to something like the Turing machine, is that, how do I put this...
To actually implement Lisp or lambda calculus in real life you need "cons", right(?), and some way to increment your equivalent of an "instruction pointer". Otherwise you don't have any memory or information to operate on, and you don't have any way to move on to the next step. If you are doing it on paper, I guess the act of moving your pen to an empty space is your cons, and moving your eyes across the page is like incrementing your instruction pointer... With the Turing machine all of this is explicitly explained, but every time I try to read about lambda calculus it seems like they just expect you to know it right away. This might seem really trivial to other people, but it's how I tend to think.
Lambda calculus (LC) isn't Lisp. LC does not represent its own notation as an object manipulated by the calculus itself, and it has nothing like a quote operator. To implement/simulate LC you don't require any specific data structure like a cons cell; there is nothing in LC that dictates one. You will need some sort of parser for a notation, plus a run-time representation of functions (objects corresponding to the lambda terms) and environments for the named parameters. Speaking of which, those names are not required to be anything like Lisp's interned symbols.
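To make that concrete, here's a rough sketch (my own, in JavaScript, using a hand-built AST in place of a parser) of what "run-time representation of functions plus environments" can look like:

  // term shapes assumed for this sketch: {v: name} | {lam: name, body: term} | {app: term, arg: term}
  const evalTerm = (t, env) => {
    if ('v' in t) return env[t.v];                                             // variable: look it up in the environment
    if ('lam' in t) return arg => evalTerm(t.body, { ...env, [t.lam]: arg });  // lambda: a closure over the current environment
    return evalTerm(t.app, env)(evalTerm(t.arg, env));                         // application
  };

  // (λx. x)(λy. y)  ~>  λy. y
  const id = { lam: 'y', body: { v: 'y' } };
  console.log(evalTerm({ app: { lam: 'x', body: { v: 'x' } }, arg: id }, {}));

No cons cells anywhere: plain objects for terms and environments, and ordinary JavaScript closures for the lambda values.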
I don't know if this helps, but the way I see it, lambda calculus works like this: you assemble one expression in which you build all the things you need, then evaluate it to yield a result. After that, everything vanishes from existence.
For example, here I'm building cons, car and cdr just to create a list, from which I'll get its second value:
(cons => car => cdr =>
// this block is my little world where I have lists
car(cdr(cons(1)(cons(2)(cons(3)(4)))))
)( // cons
carValue => cdrValue => extractor => extractor(carValue)(cdrValue)
)( // car
consValue => (consValue)(x => y => x)
)( // cdr
consValue => (consValue)(x => y => y)
)
(This is JavaScript-looking lambda calculus, so you can evaluate it in your browser's developer console).
Lambda calculus is simple, mechanical symbol substitution. There’s no information or memory needed outside of the initial ‘program’ other than the substitution rules.
Read about Church encodings to see how numbers, pairs, conditionals, etc. are encoded.
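For instance, a few of them sketched in the same JavaScript-as-lambda-calculus style used above (the names are mine):

  // Church booleans: TRUE picks its first argument, FALSE its second
  const TRUE  = x => y => x;
  const FALSE = x => y => y;
  const IF    = b => t => f => b(t)(f);   // "cond" is just application

  // Church numerals: the numeral n means "apply f n times"
  const ZERO = f => x => x;
  const SUCC = n => f => x => f(n(f)(x));
  const toJS = n => n(k => k + 1)(0);     // escape hatch, only for printing

  console.log(toJS(SUCC(SUCC(ZERO))));    // 2
  console.log(IF(TRUE)("then")("else"));  // "then"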
I doubt it; substitution will blow up on self-referential tricks like the Y combinator, where a function receives itself as an argument: any attempt to substitute the raw lambda text for this argument leads to infinite regress (à la Droste effect).
That's a failure to reduce, showing that reduction by substitution is, at best, an incomplete evaluation strategy. The Y combinator works under call-by-value and other strategies; you can stick it in production code, if you're so inclined.
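For example, here is the call-by-value variant (the Z combinator) in JavaScript, which runs fine in a strict language (my own example):

  // Z combinator: the call-by-value cousin of Y
  const Z = f => (x => f(v => x(x)(v)))(x => f(v => x(x)(v)));

  // anonymous recursion: factorial never refers to itself by name
  const fact = Z(self => n => n === 0 ? 1 : n * self(n - 1));
  console.log(fact(5));  // 120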
This is neat, but doesn't it beg the question a little bit? You've used a conditional here in your definition. Can you define cond using only lambdas? (My apologies if the linked article does this, I haven't made it all the way through yet.)
The significance of this paper isn't as a way to learn the λ-calculus.
Rather, to me its significance is that it presents a novel way of "implementing the lambda calculus in an algebraic system" and provides a correspondence between the λ-calculus and the matrix model of computation:
  Vector and matrix valued functions can be represented by vectors and matrices some of whose entries are lambda expressions.
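Read literally, that can be as simple as storing a vector-valued function as a vector whose entries are lambdas (a toy JavaScript sketch of my own, not the paper's notation):

  // a vector-valued function f(x) = (x + 1, 2x), held as a vector of lambdas
  const f = [x => x + 1, x => 2 * x];
  const applyVector = (fs, x) => fs.map(g => g(x));
  console.log(applyVector(f, 3));  // [4, 6]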
I have been looking for a lingua franca for programming languages, a way to unify the langs and make use of the decades of wisdom encoded into our langs' Great Libs. Maybe the matrix model and linear algebra is the lingua franca I seek.
I don't know what to call this effect, but often I go looking to learn about something specific and later that day the same topic pops up on the front page of HN. Earlier today I googled "algebra and lambda calculus" and variations of that, and lo and behold, this piece appears on the front page a few hours later. Is this some kind of advanced geek ad tracking or what? :D
Carl Jung termed the phenomenon synchronicity [1], "which holds that events are 'meaningful coincidences' if they occur with no causal relationship yet seem to be meaningfully related." Google the philosophy of information, phenomenology, and "the world is made of meaning." Also related is the phenomenon of multiple discovery / simultaneous invention [2]. Ironically, this is in part what the Curry–Howard correspondence is about, with the Church–Turing thesis being a prime example [3].
My burgeoning conjecture (as hinted at in a previous comment) is that all can be unified, transformed, and optimized for modern hardware under the matrix model of computation -- type theory, the lambda calculus, the actor model, neural nets, graph computing -- I'm starting to see a path to where all models are aligned across vectors of unity.
Here are some of the correspondences I've been looking at...
* Graph algos in the lang of linear algebra, now realized and encoded into GraphBLAS [1] (a toy sketch of this one follows the list)
* The three normed division algebras are unified under a complex Hilbert space [2]
* Ascent sequences and the bijections discovered between four classes of combinatorial objects [3]
* Dependent Types and Homotopy Type Theory [4]
* Bruhat–Tits buildings, symmetry, and spatial decomposition [5]
* Distributive lattices, topological encodings, and succinct representations [6]
* Zonotopes and Matroids and Minkowski Sums [7]
* Holographic associative memory and entanglement renormalization [8]
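On the first item, here's a toy JavaScript sketch of the idea behind GraphBLAS-style graph algorithms: one BFS step expressed as a boolean matrix-vector product over (OR, AND). This is only my own illustration, not GraphBLAS's actual API:

  // adjacency matrix: A[i][j] is true when there is an edge i -> j
  const A = [
    [false, true,  false],
    [false, false, true ],
    [false, false, false],
  ];
  const frontier = [true, false, false];  // BFS starts at vertex 0
  const next = A.map((_, j) =>            // next frontier = frontier "times" A over (OR, AND)
    frontier.some((f, i) => f && A[i][j]));
  console.log(next);  // [false, true, false] -- vertex 1 is reached in one step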
Can you expand on what you mean? What is "the matrix model of computation" and "vectors of unity"?
Which actor model are you talking about? The variants are very different, and Hewitt's original paper is mainly referenced for coming up with the name and kicking off the field rather than for inventing a usable model.
Type theory is even vaster. Are we talking homotopy type theory? Calculus of constructions? System F?
Basically, it's a calculus of constructions that guarantees equivalent functions get compiled to the same thing. It's really intended as an intermediate form for some higher-level language, and we have Annah as just such an example: