So tuple() for lists takes two lists and gives all combinations with one item from each list.
We could call it product.
For options it takes two options and returns an option containing a tuple of the values if both are set; otherwise it returns None. We could call it bothOrNone.
For Either... it's ambiguous, but presumably it's similar to Option, returning either a tuple of Rights, or the first Left among the arguments, following the convention of using Left for errors. We could call it bothRightsOrALeft.
For State I'm just guessing. getBothWithStateOfLast?
These may all have interesting mathematical similarities, but the way you would use them in a program would be pretty different. I don't think you are making your program clearer by giving them all the same name.
Sometimes reading "overabstracted" code can feel similar to reading assembly code. Sure, add and mov are simple operations with a clear definition of input and output, just like flatMap and fold, but they say very little about the programmer's intent in using them. If you give me a chain of them without any comment, I'll need to work backwards to figure out what you're actually trying to accomplish.
If you try to make them sound like completely different things, then yeah, I suppose they are all different. But when I use tuple, I think "take F[A] and F[B], sequence their effects, then give me (A,B) to work with." That's an extremely common operation when writing referentially transparent programs. Understanding "sequence their effects" is the prerequisite intuition, but if I could pick it up with no formal teaching and no finished degree at the time, so can you!
By your logic, it's silly and confusing for Option, List, Either, and State to all have a function called map. But in my experience (and that of everyone I work with), map feels like map! And given our backgrounds, it doesn't take a mathematician to feel that way...
But if you look at Haskell's Control.Monad you see why the abstraction is helpful [0]. We get things like foldM: just as you can fold over more than just lists (trees and so on), you can fold within all these different structures like State, Either, etc. And yes, you are right to point out that some of them have multiple implementations; State can be combined "backwards" or "forwards" (or both). The trade-off is that you have to invest the time for these abstractions to become natural, but once you do, you can use generic control structures that save tons of implementation time.
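To make the foldM point concrete: written once against the monad abstraction, foldM works for every effect. Without HKTs, each effect needs its own hand-written copy. A sketch of one such copy, specialised to a nullable effect (the name and encoding are mine, not from any library):

```typescript
// foldM specialised by hand to one effect, a nullable result (T | null):
// fold a list with a step that can fail, short-circuiting on the first null.
function foldMNullable<A, B>(
  step: (acc: B, a: A) => B | null,
  init: B,
  xs: A[],
): B | null {
  let acc: B | null = init;
  for (const x of xs) {
    acc = step(acc, x);
    if (acc === null) return null;
  }
  return acc;
}

// Sum a list, but fail outright if any element is negative.
const total = foldMNullable((acc, n: number) => (n < 0 ? null : acc + n), 0, [1, 2, 3]);
```

With HKTs, this function and its siblings for Either, State, promises, etc. collapse into the single generic foldM.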
Some might argue that many similarly behaving structures without any structural connection between them (the thing the DRY principle warns against) is the very essence of complexity.
Not necessarily. There is often code that works the same way "by coincidence." The simplest example is two constants that just happen to be assigned the same value. In that case, you shouldn't share common code - it's just going to make changes harder. If you modify one constant, you don't want to modify the other.
Having a mathematical concept in common: is that important, or is it a coincidence? Seems like it depends on the domain and maybe on your point of view.
At the end of the day it's about how hard the code is to maintain. Sharing dependencies makes things better if you want to fix a bug in one place. But every dependency has a cost, too, since it's another thing that can break you if it changes.
Too many dependencies and you get something like the "fragile base class" problem where you can't easily change the code, even to fix a bug, because there are so many downstream usages that depend on it working the way it does today.
So, I would not be quick to depend on a common utility library for simple convenience, unless I know how stable and bug-free it is, and that it doesn't have too many dependencies of its own.
Ok, but since I invested time to learn it and can now use it, I can save time every time I do. Perhaps I will pay down my debt eventually, perhaps not. It's a sunk cost and I enjoyed learning about it, so I'm pretty happy.
> These may all have interesting mathematical similarities, but the way you would use them in a program would be pretty different.
Actually, by definition, how they are used is not different. This is a key concept. While the tuple example was just that, the idea of defining interaction contracts (often called "laws" in FP literature) is powerful. It can reduce conceptual load because expectations are clearly defined. Perhaps more importantly, it also enables expressing a solution in a robust manner, with clearly defined collaborator roles.
I think your concern about communicating programmer intent is keen. The risk is there whether it is a chain of flatMap, LINQ, or fluent interfaces[0]. IMHO, this is the same concern as with poorly formulated imperative logic. Having to work backwards to figure things out is orthogonal to the approach taken and, IME, directly correlated with the attention paid to non-executable artifacts (such as comments, diagrams, discussions, etc.).
Interesting article, and funny to see the shout-out to Elm. I think the first few code samples made it very clear that Elm is missing something, at least for this type of generic programming, namely higher-kinded types. Too many times I had to re-implement something that could have been solved in a generic fashion. Kind of similar to how Go programmers have to reimplement instead of generalise.
It is actively discussed in the Elm community (https://github.com/elm-lang/elm-compiler/issues/1039), but with a wait-and-see approach. In my opinion, this is really laudable, as it signals readiness to add higher-kinded types while keeping the developer friendliness of the resulting feature extension in mind.
What I wish to see in these blog posts are real-world examples. My day job is coding an MVC app in Rails / Ember. Fine, I won't be able to get past Chapter 11 in a book, but what am I REALLY missing out on? Are these solutions meant for a different problem set, or would I be able to use these concepts in my day-to-day (assuming we were writing in Scala, Haskell, etc.)?
Here's a common example. Some systems provide blocking operations: e.g. read_line waits for a line of text and then returns it as a string. Others use promises: read_line returns a promise of a string.
What if you want to write a library that works with either blocking calls or asynchronous promises?
The cohttp HTTP library is written that way. For example, the Transfer_io module[1] (supporting both chunking and non-chunking HTTP transfers) takes as an argument another module, IO[2], that provides a read_line function of type "ic -> string option t", where the type t is abstract and higher-kinded (and "ic" = input channel). You can instantiate the module with a blocking IO module (where a "string t" is just the same as a "string" and read_line blocks) or with a non-blocking one (where a "string t" is a promise for a "string" and read_line returns a promise).
You don't need higher-kinded types for that. Just always return a promise, except that the blocking version always returns a promise that is already fulfilled.
This is just a variation of 'any problem can be solved by adding a level of indirection' + 'any protocol can have a dummy implementation'.
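That workaround can be sketched in TypeScript terms: one promise-shaped interface, with the "blocking" variant returning an already-fulfilled promise. The names here are illustrative, not from any particular library:

```typescript
// One async-shaped interface for both implementations.
interface LineReader {
  readLine(): Promise<string>;
}

// Genuinely asynchronous implementation.
const asyncReader: LineReader = {
  readLine: () =>
    new Promise(resolve => setTimeout(() => resolve("line from socket"), 0)),
};

// "Blocking" implementation: the value is available immediately, but is
// wrapped in a resolved promise just to satisfy the shared interface.
function makeSyncReader(lines: string[]): LineReader {
  return { readLine: () => Promise.resolve(lines.shift() ?? "") };
}
```

Note that the synchronous version pays for the promise allocation and the asynchronous calling convention even though nothing about it is asynchronous, which is the overhead the rest of this subthread argues about.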
One can almost always respond to a particular use-case with "well you could just do this instead." But the point is that there is a broader problem which type classes solve, which is to identify similar behavior and encode it in abstraction, such that your code becomes more generic and reusable. It's a powerful mechanism which leads to beautiful, performant and expressive code. You clearly don't need higher kinded types in order to write programs, much like any number of other features one doesn't need, but they can provide a huge boost.
> What if you want to write a library that works with either blocking calls or asynchronous promises?

> You don't need higher-kinded types for that. Just always return a promise, except the blocking version always returns a promise that is already fulfilled.
While strictly speaking this approach will work, a benefit of employing higher-kinded types (HKTs) to express a solution is being able to optimize without having to alter the solution.
For this example, the Scalaz Id[0] type provides this adaptation. Since it conforms to what is expected of a container, yet does not require the overhead of fabricating one just to satisfy expectations, it lets HKT-based logic remain useful while automatically selecting the optimal implementation.
In short, working in an environment that supports HKTs often reduces needless overhead, simplifies implementations (by only having to address "happy path" logic), and promotes stability in a code base by formally expressing the expectations of collaborators.
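The Id idea can be sketched in TypeScript (an assumed Identity wrapper in the spirit of Scalaz's Id, not Scalaz itself): a container with the same map/flatMap shape as an effectful one, but whose value is simply there.

```typescript
// Identity: same combinator shape as an effectful container, no waiting.
class Identity<A> {
  constructor(readonly value: A) {}
  map<B>(f: (a: A) => B): Identity<B> {
    return new Identity(f(this.value));
  }
  flatMap<B>(f: (a: A) => Identity<B>): Identity<B> {
    return f(this.value);
  }
}

// "Generic" logic written against that shape works unchanged. Note that
// without real HKTs, TypeScript can only approximate the abstraction
// structurally, which is part of the thread's point.
function shout<F extends { map(f: (s: string) => string): F }>(fs: F): F {
  return fs.map(s => s + "!");
}
```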
The problem is that by doing this, you have made an operation that should be instantaneous - readLineFuture: () => Future[Line] - into something that now has built-in lag.
I.e. you've pulled the side effects (blocking) from the future to the present.
That built-in lag is so incredibly minuscule in the grand scheme of programming that 99.9999% of the time it will not matter, and if it does, you are using C or assembly anyway.
No, readLine will block until a line is available.
Consider a command line IM app using pipes to communicate to the server (just to roll with the readLine example). User 1 has not typed anything, so their file is empty. User 2 has typed a bunch of stuff.
Boost.ASIO (and the corresponding C++ network TR proposal) uses HKTs to select the wait strategy (i.e. coroutines, futures, plain callbacks or whatever the user prefers) with minimal overhead.
No, it matters. Inefficiency adds up, and CPU cycles you waste on useless overhead aren't available to spend on something that actually matters -- like Garbage Collection.
If you instantiate `t` as a promise then you have to always consider that it isn't there yet, even if you're saying that it's going to be. If you instantiate `t` as `Identity` [0] then you have a type level guarantee that your value is there. You have a guarantee that your program blocks when you say it will.
And that's the point of a lot of these abstractions. It's not about being able to write something that you couldn't in another language. After all, we could create the same functionality in assembly.
Are you familiar with C# or Visual Basic? LINQ [1] is a tragic example of what happens when a host language does not support higher-kinded types - every time you use a LINQ method on a concrete type that may have important semantics, its (static) type gets downgraded to a plain "IEnumerable<X>". This can cause major bugs and headaches when trying (intentionally or inadvertently) to write code to abstract over various combinations of LINQ flavors such as "LINQ to Objects" and, say, "LINQ to Entities/SQL".
The LINQ abstraction is leaky, half-baked, and not as safe as it should be. Some of the various "LINQ to X" flavors are actually unable to support certain operations (cf. [2]), so your software can unexpectedly fail at runtime.
If C# and Visual Basic had support for higher-kinded types, then your LINQ-equipped thing wouldn't have to be punted back to an IEnumerable - it could retain its (static) type after applying LINQ methods.
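A rough analogy of that downgrade in TypeScript (which also lacks HKTs, so the same shape appears): a combinator typed against the base `Iterable` interface can only promise `Iterable` back, so the richer static type of the input is lost.

```typescript
// Typed against the base interface: whatever richer type goes in (Set,
// Array, a custom collection), only Iterable<T> comes back out.
function where<T>(xs: Iterable<T>, p: (x: T) => boolean): Iterable<T> {
  return [...xs].filter(p);
}

const numbers = new Set([1, 2, 3, 4]);
const evens = where(numbers, n => n % 2 === 0);
// evens is statically just Iterable<number>; its Set-ness is gone, so e.g.
// evens.has(2) would not compile -- the same downgrade LINQ performs with
// IEnumerable<X>.
```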
I assume by concrete type, you mean the actual run-time manifested type of an object. If that's true, you're slightly wrong. Most LINQ methods have parallel definitions for IQueryable<T>. (For those that don't, IQueryable<T> also extends IEnumerable<T>, so that's always a fallback.)
In practice this means that not all LINQ methods return IEnumerable<T>. x.Where(...) may return a statically typed IEnumerable<T> or an IQueryable<T>, depending on whether x was IQueryable<T>.
Edit: I don't think LINQ would even actually improve by having this kind of fancy stuff. I don't want .Where() called on an array to return another array. I want a lazy IEnumerable<T> almost every time. I might want to consume only the first 10 elements from the filtered result. Allocating a whole 100,000-element array could be a tremendous waste of time.
> I don't want .Where() called on an array to return another array. I want a lazy IEnumerable<T> almost every time.
Yes, good point. There is a lot of space to explore in collections API designs, and the LINQ design is just one of them that makes its own tradeoffs (e.g., only have to implement "GetEnumerator()" to get a watered-down monadic experience). Contrast that with the well-meaning but convoluted and buggy Scala collections API (very difficult for novices to create a custom collection due to CanBuildFrom and other tricky stuff).
Paul Phillips came up with a pretty neat collections API [1, 2] that provides a nice feature set, including high-performance and typesafe views when you don't really want to wastefully create a temporary collection just to interact with the first 10 elements.
Thanks - you are right about the IQueryable/IEnumerable distinction - it's been a long time since I worked with C#. I think it is somewhat unfortunate that IQueryable is a subtype of IEnumerable because the semantics are so different and one can be so easily coerced into the other.
For an alternative example of limitations induced by lack of higher-kinded type abstraction, imagine implementing an Option/Maybe type [1] in C# and piggybacking on LINQ. Under the LINQ API, if you try to do anything with an instance of your Option/Maybe, its static type gets obliterated into an IEnumerable, which destroys the semantics. (The fact that the thing is an Option/Maybe and not, say, a list of things, is important for reasoning about, building, and abstracting over things.)
Rather than a tragedy, I think LINQ is quite brilliant. As for writing a LINQ-query-compatible Option implementation, I think you're wrong: I've written one. You can play with it here: https://dotnetfiddle.net/2hzRtE No IEnumerable<> to be seen.
Incidentally, I think I might have just now understood what a monad even is. All those years of reading blog posts are finally starting to pay off!
Nice Maybe implementation! The thing that you miss out on with that particular style of implementation, though, is that you don't get certain higher-order functions for free. For example, if you were to endow your Maybe implementation with a "GetEnumerator" method, then you would get all of the other LINQ methods for free (at the expense of losing your "Maybe"-ness as its static type is transmogrified into a vanilla IEnumerable). With higher-kinded types, you could potentially gain a whole host of methods for free that would, in contrast to LINQ, return a Maybe<T> instead of just an IEnumerable<T>, but you would still just need to provide a small handful of manually-implemented methods (e.g., Select, SelectMany, Where, etc.).
> Under the LINQ API, if you try to do anything with an instance of your Option/Maybe, its static type gets obliterated into an IEnumerable, which destroys the semantics.
This isn't true. It's only true if you derive your Option type from IEnumerable, but you can provide your own implementation of Select, SelectMany, and Where to maintain the Option type - this is my implementation [1], with SelectMany returning Option [2]. Granted, it doesn't make it a higher-kinded type, but you don't have to use IEnumerable to use LINQ.
I must not have been clear about what I meant. What I mean is that, if you do it your way and just implement Select and SelectMany, that's all you get - you don't get any of these 143 methods [1] for free.
Higher-kinded type support would improve the ability to abstract over these kinds of things and allow API designers to get more for free without repeating oneself time and time again.
> What I mean is that, if you do it your way and just implement Select and SelectMany, that's all you get - you don't get any of these 143 methods [1] for free
I agree with your initial statement about LINQ. It is indeed true that implementing Select only allows your type to work with the LINQ grammar; it doesn't magically make it an instance of a functor type-class.
But I do disagree that implementing Select and SelectMany should by default give you access to the 143 methods. For example Aggregate (Fold) can't be implemented in terms of Select and SelectMany. There should be a Foldable type-class (interface), a Monad type-class, a Functor type-class, etc. And if you want your new type (say Option) to be a member of those type-classes then you should provide an instance (like Haskell) that implements the interface of the type-class.
That's what I'm in the process of doing with the language-ext project, because it is possible (and I guess even MS missed that opportunity when they developed LINQ, because it seems that not only is it possible, it is fast and efficient with no special rules in the compiler for Select, SelectMany, Where).
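That separation of type-classes can be sketched as interfaces in TypeScript (the names here are mine, not language-ext's): mapping and folding are distinct capabilities, each implemented explicitly per type rather than derived from Select/SelectMany.

```typescript
// A Foldable-like contract, separate from any mapping capability.
interface Foldable<A> {
  fold<B>(init: B, step: (acc: B, a: A) => B): B;
}

class Box<A> implements Foldable<A> {
  constructor(readonly value: A) {}
  // Functor-like capability: declared on the type itself.
  map<B>(f: (a: A) => B): Box<B> { return new Box(f(this.value)); }
  // Foldable capability: a separate contract, implemented independently,
  // since fold cannot be derived from map/flatMap alone.
  fold<B>(init: B, step: (acc: B, a: A) => B): B {
    return step(init, this.value);
  }
}
```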
> If C# and Visual Basic had support for higher-kinded types
Although it doesn't have higher-kinded types, it can do ad-hoc polymorphism, which will get you most of the way there. The primary problem is C#'s terrible type inference, which means type annotations everywhere.
I am currently migrating the types in my language-ext project [1] (a functional framework for C#) to support this. Take a look at this [2] issue raised for a discussion on ad-hoc polymorphism in C#. There are lots of links to examples further down.
Still, it was a way to bring FP concepts into the mainstream.
In my area of work, even alternative JVM and CLR languages are a no-go, so I really appreciate having at least the ability to use some FP concepts, even if the end solution isn't perfect.
> My day job is coding an MVC app in Rails / Ember. Fine, I won't be able to get past Chapter 11 in a book, but what am I REALLY missing out on?
Being focused on Ruby (or any dynamic language) makes the value harder to see, perhaps, since a lot of what HKTs get you in a static language you don't need in a dynamic language -- it's stuff you lose going from a dynamic language to a static language with an insufficiently powerful type system, in exchange for typesafety. HKTs let you have your cake and eat it too (that is, they let you keep powerful abstractions that operate on classes of related types, while remaining typesafe).
First of all, a Ruby developer would need to be convinced of the value of static types in his/her domain. For the maintenance of large complex systems, they are invaluable; for web development, perhaps less so. Advanced type systems simply allow one to statically verify more invariants, or in the case of higher-kinded types, allow one to make more sophisticated abstractions and reduce boilerplate. It's important to remember that every statically typed language is capable of being dynamically typed also. I can use string-keyed dictionaries and runtime tagged types in Haskell if I ever need to. What you lose is ease of use, which I guess is why Ruby is more popular than Haskell.
After a while using Haskell, I'm not sure about that anymore. I keep thinking that an "easier than Rails" web framework is just one breakthrough away, by some random developer somewhere on the Internet.
Even if I can't imagine the form such a library would take, the features I see every day are so powerful that it just looks possible.
I really hope that higher-kinded types can be brought to TypeScript. I really want that kind of power on the front-end, but I can't count on getting an entire team up to speed with PureScript.
If TypeScript had these features would it not be as hard to get up to speed with as PureScript? Thus why not just get up to speed with PureScript. I believe in you and your team :)
I'm really confused: is he saying that F# can't have functional combinators? That seems incorrect. Also, the title is just generally abrasive. I could just as well say Scala doesn't natively support dependent types, so clearly it's between F* and giving up.
Scala supports dependent types about as well as F# supports higher-kinded types: kind of, some of the time, but you might be able to find a library which shims full support.