-- https://hackage.haskell.org/package/base/docs/Data-Function.html#v:fix
-- https://stackoverflow.com/questions/53715841/sharing-vs-non-sharing-fixed-point-combinator
fix :: (a -> a) -> a
fix f = let x = f x in x
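A quick usage sketch (my example, not from the linked docs): factorial with its recursion routed through `fix` instead of a named recursive binding.

factorial :: Integer -> Integer
factorial = fix (\rec n -> if n == 0 then 1 else n * rec (n - 1))
-- factorial 5 == 120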
The Y combinator at the type level:
type Fix :: (Type -> Type) -> Type
newtype Fix f = Fix (f (Fix f))
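A small illustration (my example): choosing `f = Maybe` gives the natural numbers, since a value of `Fix Maybe` is either `Nothing` (zero) or `Just` a smaller natural (successor).

type Nat = Fix Maybe

zero :: Nat
zero = Fix Nothing

suc :: Nat -> Nat
suc = Fix . Just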
Much of the point of the Y combinator is the construction of a fixed-point operator without appealing to self-definition. Using `fix` in its own definition violates that; one could say that the recursion arises from Haskell's recursive bindings.
The standard definition in untyped lambda calculus is
\f -> (\x -> f (x x)) (\x -> f (x x))
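Applying it to some f and beta-reducing once shows why it computes a fixed point:

  Y f
= (\x -> f (x x)) (\x -> f (x x))
= f ((\x -> f (x x)) (\x -> f (x x)))
= f (Y f)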
But if we try to give x a type, call it X, we see something funny:
X = i -> o -- we know it's a function type because it's applied
X = X -> o -- it's self-applied, so the input must be X
X = (X -> o) -> o -- expanding the inner reference
X = ((X -> o) -> o) -> o -- oh no
Unfortunately, this won't typecheck in Haskell: `type X = (X -> o) -> o` is an invalid recursive synonym, and expanding it would loop the type checker. We must introduce an explicit indirection. This explicitness forces us to control if and when the type unfolds, which prevents the checker from looping.
newtype Loop a = Loop (Loop a -> a)
defer :: (Loop a -> a) -> Loop a
defer f = Loop f
apply :: Loop a -> (Loop a -> a)
apply (Loop f) = f
This type is exactly a solution to `X = (X -> o) -> o`, but with the recursion explicitly tagged:
x :: Loop o
apply x :: Loop o -> o
apply x . defer :: (Loop o -> o) -> o
So now we can type the Y combinator without relying on Haskell's self-referential bindings.
y :: forall a. (a -> a) -> a  -- forall + ScopedTypeVariables scope `a` over the where clause
y f = apply half half
  where
    half :: Loop a                       -- the type of X
    half = defer (\x -> f (apply x x))   -- i.e. \x -> f (x x)
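It behaves like `fix`; instantiating a to Int -> Int, for instance (my example):

fact :: Int -> Int
fact = y (\rec n -> if n == 0 then 1 else n * rec (n - 1))
-- fact 5 == 120, with every recursive call threaded through Loop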
Under Haskell's lazy evaluation strategy, `fix f` will not be evaluated until it is used; if the definition of `f` ignores its argument in some branch, evaluation terminates.
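Two quick GHCi checks of that claim (my examples):

ghci> fix (const "done")   -- const ignores its argument, so evaluation stops
"done"
ghci> take 5 (fix (1 :))   -- an infinite list, of which only five cells are forced
[1,1,1,1,1]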
Fun! Reminds me of a little tangent I went down when I realized the untyped lambda calculus can be embedded in the single Go type `type V func(v V) V`, including a Y combinator, Church numerals, and a factorial function, with no values other than instances of type V.
I admire how someone passionate about a subject can provide an in-depth explanation of _how_ that thing works. But I still have absolutely no idea _why_ or _when_ I would need to apply this (even after reading the original post[0]). A motivating practical example would go a long way as an introduction before getting into the nitty-gritty details.
The problem is marrying two ideas: lambda calculus (where all computation is done implicitly via anonymous functions), and recursion (where some set of functions call each other by name in a cycle).
This is a bit more interesting than it looks because ostensibly you're using the name of a function in its definition before that name ever has a meaning. You're not really saying that the symbol should literally be called; rather, the compiler should produce a function, according to some set of rules, that fills in the self-references.
That doesn't work in lambda calculus though, where definitions are concrete and you don't have compiler magic to resolve the undefined symbol in your definition. A solution is to pass some function to itself as an argument, and you have a code pattern that goes along with it (the Y combinator).
This retains some "usefulness" in the sense that lambda calculus is still being explored for various purposes as a foundation for other things, and also in that languages based on that line of thought might benefit from using that structure explicitly. If you have some sort of syntactic sugar for recursion in your language of choice though and don't care about foundations then it's probably not very applicable.
Thanks, that helps a bit. So I get the what: a way to make an anonymous function recursive. As for the why, it seems to me it's because "lambda calculus doesn't have named functions." So this tells me this is a programming language design concern, as opposed to a pattern that can be used in everyday programming. Based on this, providing examples of how to implement a Y combinator in Clojure, Python, and Go is of questionable value IMO. And that's OK, since a lot of things we do don't need to have practical value (e.g. learning, understanding, etc).
It has practical value. Let's say you have something that caches the results of a lambda:
record Cache<T, U>(Function<T, U> function) {
U get(T value) { ... } // return the cached result, calling function on a miss
}
You can use it like this:
var cache = new Cache<Integer, Integer>(v -> v + 1);
var result = cache.get(3);
With that setup, how do you specify a lambda that is recursive? And how can you cache the intermediate steps if you cannot change the signature of get()?
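One answer, sketched in Haskell to match the rest of the thread (the names and the Fibonacci example are mine; the Java version has the same shape): write the function in open-recursion style, taking its own recursive call as a parameter, then tie the knot through a memo table rather than directly.

import qualified Data.Map.Lazy as Map

-- Open recursion: the recursive call is a parameter, not a free variable.
fibOpen :: (Integer -> Integer) -> Integer -> Integer
fibOpen rec n = if n < 2 then n else rec (n - 1) + rec (n - 2)

-- Tie the knot through a lazy table, so intermediate steps are cached.
-- (Domain limited to [0..1000] to keep the sketch small.)
fibMemo :: Integer -> Integer
fibMemo = (table Map.!)
  where
    table = Map.fromList [ (n, fibOpen (table Map.!) n) | n <- [0 .. 1000] ]

-- fibMemo 90 returns instantly; `fix fibOpen 90` would take astronomically long.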
In languages with closures it would be easy: assign the lambda to a variable that the lambda itself captures. In fact, this seems to happen once in a while in Go, since closures there are always anonymous functions.
Even if it doesn't have value for everyday programming, translating it into Clojure/Python/Go/whatever can make it easier to understand. Most explanations I've seen of the Y combinator just work in lambda calculus, which I at least find somewhat difficult to read; translating it into a more familiar language can make it easier to understand.
I've read about the Y combinator a few times, but I must say your four paragraphs are the most enlightening I've ever read. You should add them to the introduction of the Wikipedia page on the Y combinator.
The thing is that you never want to use the Y combinator directly.
It is a tool to show that untyped lambda calculus with just anonymous functions and application is already Turing-complete: you can write the Y combinator, thus you can define recursive functions, and thus you can write `while` loops.
In a way, the Y-combinator is one of the possible ways to reach Turing-completeness by accident. It is an important example to keep in mind if you want to design a type system for a programming language that is not Turing-complete.
But as soon as you want Turing-completeness, the Y combinator is not a nice primitive for most users. At that point, either your language is flexible enough that you can write a good library for recursive definitions on top of the Y combinator, or it works better to introduce recursive definitions as a primitive, even if that primitive is not theoretically necessary.
The Y combinator implements simple recursion. In the lambda calculus it is not possible for a function body to refer to the function being defined; recursion can only be achieved by passing a function in as a parameter. The Y combinator demonstrates this style of programming.
You don't really need it in modern programming languages that natively support named functions and recursion, but it's an interesting part of the underlying theory of lambda calculus.
I've read this article multiple times, and I have to say it is absolute garbage. This general layman's description (which I never noticed before) should really be at the beginning, not buried deep after pages of lambda calculus expressions.
The Y combinator lets you implement recursion without using recursion. For example, an anonymous JavaScript closure can't refer to itself. But if we define the (strict) Y combinator using JS closures:
let Y = f => (x => x(x))(x => f(z => x(x)(z)))
then we can call it, again only using closures, and define a function that sums the numbers from 1 to N by "calling itself":
let sumToN = Y(sumToN => n => n == 0 ? 0 : n + sumToN(n-1))
sumToN(4)
// -> 10
In practice, no one actually implements recursion this way because it would be slow. So instead it's more of a statement about the expressiveness of languages, and how "we have recursion!" doesn't actually give you any power you didn't have before, if you had functions.
Knowledge of the Y combinator helps you avoid making your language divergent (though divergence is sometimes desirable); disallowing explicit recursion is not enough! The typed lambda calculus uses static types to rule out the Y combinator and give a sound logic for proofs.
Static types don't seem sufficient? Typed languages have no problem passing functions as parameters. Where does it break when you apply static types?
I think it's because the Y combinator needs to apply a function to itself, and there is no finite static type for a term that takes itself as an argument: `x x` forces `X = X -> o`, the infinite type from upthread, which simply typed lambda calculus rejects. The `Loop` newtype above only recovers it by naming the recursive type explicitly.
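You can watch it break in GHC (a tiny check, my example):

-- Rejected with an occurs-check error: x would need the infinite type
-- t ~ t -> b, exactly the X = X -> o loop from upthread.
selfApply x = x x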
The same question could be posed for many other mathematical and theoretical computer science constructs. It's a mathematical building block, a node in the graph that is the world of mathematics, and it might play a part in some advance if you were working in that field.
Look at, e.g., the Curry's paradox section of the Wikipedia page for a taste.
Nice! Once you read the explanation, the explicit types make this more understandable and usable than the more usual definition. It seems no more difficult than constructing a Promise from a callback in JavaScript.
(It's still pretty opaque without reading the explanation, though.)