Some of these points are true of currying in general -- IMO most of the time you can't tell what's a function call and what's a curry (Haskell people would say "it's the same thing!"). However, separating these concepts leads to a lot of super interesting results. I wrote about this in a blog post: http://tolmasky.com/2016/03/24/generalizing-jsx/ . There I argue that JSX makes for an awesome currying syntax.
Haskell can get away with this because it is a pure language. If you have `f a b`, you do not really care when f gets evaluated (or, at least, when you do care currying is generally not what makes figuring it out difficult). Even without currying, Haskell makes it difficult to know when a function is evaluated.
It can be idiomatic in Javascript if you choose a functional style.
I think Javascript has become a kitchen-sink language. It is, as you say, a language that affords many styles. If you pick a subset of the language that is functional, you can see that currying is quite idiomatic in that context.
That being said, I think library support is a necessity when choosing a more functional approach in Javascript, as the "core" experience in JS caters to an imperative, C-derived style (as many of the examples show). If you work with Ramda you can get most of what you need today to make working with currying, and its compositional capabilities, pleasant.
Consider R.curryN:
const add = R.curryN(2, (a, b) => a + b)
You can now call this:
add(1)(2) // => 3
add(1)    // => a function waiting for the second argument
add(1, 2) // => 3
And you have options in terms of order of parameters:
const sillyAdd2 = add(R.__, 2) // a silly example
sillyAdd2(3) // => 5
And if you want even more flexibility there is Fantasyland[0] and Sanctuary[1] among others.
Javascript is fertile ground, in my experience, for getting developers to experiment with and adopt functional programming paradigms.
In my experience, currying using helper libraries also makes debugging more difficult, as you have to step through higher-order functions and cannot always easily tell how the argument values were calculated. This can be mitigated somewhat, and the tools are improving, but I generally found that the cleaner I made my JS code using functional programming concepts, the more difficult it was to debug when everything went wrong. And without a type system, I had to debug much more than in a language like Scala.
It is funny, because this is the argument against most uses of macros in lisp -- to the point that everyone knows that macros make code harder to debug.
The lost part is that few people actually "debug" in the "interactive debugger" sense of the word anymore, such that many things that were frowned upon in lisp and related languages are now getting a lot of exposure. And I feel it is truly sad how few people know how to step through a program nowadays. (Obviously projecting some. Maybe I'm in an odd corner where nobody uses an interactive debugger, but it has been a long time since I've had coworkers who were used to using one. And in this I'm including REPL-based workflows. Yes, I know they are strictly different. I just feel they are in the same family for this discussion.)
When the abstraction stack gets high enough, stepping turns into a bit of a minefield -- if you step into where you should have stepped over, you can get lost in irrelevant weeds. Microsoft and some others have tried to put step barriers in code that prevent accidental step-into for this reason ("Just My Code").
But it's also a change in coding conceptualisation. Personally I understand things from the ground up, from composition of very low-level components. I find it hard to reason about a system's performance and failure modes without doing so. But increasingly developers have only surface-level knowledge of the things they use, and instead know a lot in terms of breadth rather than depth. This approach is more effective for building something quickly by duct-taping disparate things together, and this is the majority of modern commercial coding. These people don't need debuggers: a debugger gives them too much information. They need examples, and they code by idiom and analogy instead.
Things like event dispatchers and async also greatly diminish the power of the debugger.
Agreed. And I actually fully agree with this being a strong reason not to use macros in lisp. It's made a bit more important there because stepping through a macro involves not just being in other code, but jumping between phases.
That said, my main point is that where many folks used to advise caution, such that instead of promoting lisp, they would create new languages where they could hide some of the magic, it seems that people are gung-ho on many of the higher abstractions now, regardless of the mental costs they bring.
What do you mean few people know how to step through programs nowadays? Is it uncommon to open up dev tools and step through code? I step through code whenever I get lazy or tired and don’t want to think and am using a platform with a nice debugger. I’d think this was very common but I live in a hole.
The majority of my coworkers have not done this. With some of the newer stacks, it is not really possible yet. And in a distributed world, it is often genuinely not feasible.
I think this is an odd one, in that the act of stepping through code helps build the skill of mentally simulating stepping through code, which in turn helps you reason about it. I don't have data to back that claim. :(
This opinion comes from my experience developing Khepri ( http://khepri-lang.com ), which tried to rework the JS syntax to better support untyped functional programming. It even had a partial application operator: @
And the language's implementation was beautiful: declarative, no mutation, and HOFs everywhere. It even used monad transformers. But the nightmare that was debugging also convinced me that I had actually been trying to solve completely the wrong problem.
It's not a rigorous idea, but I'm not sure what you might be wanting to read about, exactly. However, given that most of the language features that the article described are relatively new to the language, or are proposed language features, I think the author has a strong claim.
Has anyone read the semantics of the linked partial application proposal [0]?
> Note that this also means that more involved references are captured in their entirety and should be stored in a local variable if they may have unintended side-effects should the partially applied function result be called more than once
const a = [{ c: x => x + 1 }, { c: x => x + 2 }];
let b = 0;
const g = a[b++].c(?);
b; // 0
g(1); // 2
g(1); // 3
b; // 2
`a[b++].c` is not evaluated until the partially applied function `g` is called. Even `a[b++]` and `b++` are not evaluated.
It is unclear to me when `const g = Math.random().toFixed(?)` would evaluate `Math.random()`.
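For concreteness, these are the two desugarings that question is choosing between, written in plain JS rather than the proposal's syntax:

// Eager: Math.random() runs once, when g is created.
const r = Math.random();
const g1 = digits => r.toFixed(digits);

// Lazy: Math.random() runs again on every call to g.
const g2 = digits => Math.random().toFixed(digits);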
Currying is nice when whitespace is function application, and ugly and inconvenient when you have to write foo(x)(y) and the libraries you deal with aren't consistent about this. Currying in Haskell is nice for exactly these reasons.
That only seems to be a minor issue. The author specifically mentions "simple currying" and "overloaded currying". Extra parentheses are only an annoyance with the former.
"Most functional programming languages with automatic currying have syntax where there is no difference between add(1, 2) and add(1)(2)"
and goes on to talk about unidiomatic syntax.
In Haskell, the two options are "add (1,2)" and "add 1 2"
The latter, curried form, involving whitespace, works because Haskell parses it as an application of add to 1, followed by an application of the resulting value to 2. This syntax is inspired by the lambda calculus, so it's not really whitespace that's the concept; juxtaposition of terms implies application.
> In Haskell, the two options are "add (1,2)" and "add 1 2" The latter, curried form, involving whitespace, works because Haskell parses it as an application of add to 1, followed by an application of the resulting value to 2.
This is a bit of a misrepresentation. There is nothing "uncurried" about using these parens here. It's simply making a function call on two parameters take a tuple of two values instead. There's no point to it at all and there's no real value in doing it in your APIs. I have no idea why the Haskell wiki insists on framing it like that.
Using the "curried form" has no effect on how the function call will perform or behave at all. Partially applying functions can have performance implications, though.
Consider, though, the Haskell functions "curry" and "uncurry" which transform functions between these two styles. They are different types, so I don't see how it is a misrepresentation.
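A rough JavaScript rendering of that pair, with arrays standing in for tuples, for anyone not following the Haskell:

// curry :: ((a, b) -> c) -> a -> b -> c
const curry = f => x => y => f([x, y]);
// uncurry :: (a -> b -> c) -> (a, b) -> c
const uncurry = f => ([x, y]) => f(x)(y);

const addTupled = ([x, y]) => x + y; // takes one tuple argument
const addCurried = curry(addTupled); // takes arguments one at a time

addTupled([1, 2]);           // 3
addCurried(1)(2);            // 3
uncurry(addCurried)([1, 2]); // 3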
>Most functional programming languages with automatic currying have syntax where there is no difference between add(1, 2) and add(1)(2).
This isn't true of any functional programming language I can think of.
What happens in most functional programming languages is that it is idiomatic to make functions curried by default, and to only make non-curried functions when you have a reason to.
For example, in Haskell, you could have either:
add1 :: Int -> Int -> Int
add1 x y = x + y

add2 :: (Int, Int) -> Int
add2 (x, y) = x + y
All functions in Haskell are single-argument functions. Multiple arguments are syntactic sugar. The syntax sugar can make it confusing to talk about but if you define an `add` function that takes two arguments x and y:
add x y = x + y
it's sugar for multiple single-argument functions.
add = \x -> \y -> x + y
When examining it this way, the `g` function above would translate to
g = \(x, y) -> 2 * x + y
Which is a function taking a single tuple argument. It is still "default curried", but the argument being passed in is a single argument rather than multiple, so we don't expand it to multiple single-arg functions. Perhaps it is more illustrative to show the definition as the single argument it is, rather than using Haskell's destructuring to pull x and y out of the tuple.
g tuple = 2 * (fst tuple) + (snd tuple)
and a ghci session for completeness:
Prelude> let g (x, y) = 2 * x + y
Prelude> g (1,2)
4
Prelude> let y tuple = 2 * (fst tuple) + (snd tuple)
Prelude> y (1,2)
4
So when we say that Haskell functions are "curried by default", what we're referring to is roughly the underlying single-argument nature of haskell functions.
Those are different functions. Your `g` function takes a single argument, but it's a composite datatype. It's akin to the following Javascript:
function g(args) {
  return 2 * args[0] + args[1];
}
You're right that we could write either `f` or `g`, but the reason `f` is the "default" way is that it involves no extra datatypes. Consider the following:
h [x, y] = 2 * x + y
I wouldn't say this is "default behaviour" in the same way as `f` or `g`, but it certainly seems 'closer' to `g`. Similarly:
data Pair a b = MkPair a b
i (MkPair x y) = 2 * x + y
This seems very "non-default", especially since it's using a non-default datatype. Yet that datatype is alpha-equivalent to `(x, y)`.
Whilst tuples (or the above `Pair`) are isomorphic to curried functions (e.g. via the `uncurry` function and its inverse), they're not alpha-equivalent, so there is a meaningful distinction between tuples and curried functions. Since tuples are defined in the Prelude, we could do away with them (and use `MkPair` instead, if we wanted), so they're not really "built in". On the other hand, functions are (AFAIK) built quite strongly into the core of Haskell, and hence we cannot avoid them, making them, and hence the function-returning behaviour of currying, more pervasive, unavoidable and hence "default".
(Note that Haskell may at some point be less tied to functions; e.g. Conal Elliot's 'compiling to categories', among others, looks like a reasonable justification for allowing something like an "OverloadedFunctions" pragma)
Simulating named parameters with object literal destructuring in the signature is pretty great IMO. In fact I love it. You can destructure deep into an object. It's lovely. Kind of adds GC burn I imagine, but that's not where the latency in my programs lies.
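For anyone who hasn't seen it, a small sketch of what I mean (the names are made up):

// Deep destructuring in the signature, with a default for color.
function drawPoint({ x, y, style: { color = 'black' } = {} }) {
  return `(${x}, ${y}) in ${color}`;
}

drawPoint({ x: 1, y: 2 });                          // "(1, 2) in black"
drawPoint({ y: 2, x: 1, style: { color: 'red' } }); // "(1, 2) in red"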
While slightly more verbose, I believe this form to be the more powerful of the two because one has access to all of the features of imperative programming and is easier to compose with other objects.
Easier to compose with other objects, but harder to compose with other functions. For example:
const compose = (...fns) => fns.reduceRight((f, g) => (...args) => f(g(...args)));
const add = x => y => y + x;
const mul = x => y => y * x;
const div = x => y => y / x;
// (6x + 4) / 2
const complexOp = compose(mul(6), add(4), div(2));
// should return 50
complexOp(16);
All joking aside, you would achieve composition the same way, except using interfaces.
interface Operation<T> {
    T apply(T input);
}

class Addition implements Operation<Double> {
    private final double n;
    Addition(double n) { this.n = n; }
    public Double apply(Double x) { return x + n; }
}

class Multiplication implements Operation<Double> {
    private final double n;
    Multiplication(double n) { this.n = n; }
    public Double apply(Double x) { return x * n; }
}

class Division implements Operation<Double> {
    private final double n;
    Division(double n) { this.n = n; }
    public Double apply(Double x) { return x / n; }
}

static <T> T compose(T input, Operation<T>... operations) {
    T last = input;
    for (Operation<T> op : operations) {
        last = op.apply(last);
    }
    return last;
}
Of course, the above code is slightly verbose, but even if you could remove all the boilerplate it still doesn't look like good imperative code. This is what confuses me about currying: when translated to the imperative equivalent, it looks terrible.
I find currying rarely usable in real life, because you rarely need a fixed parameter to a function. And when you do, it's solved in a better way by storing it in a state somewhere (even global variables will do in a pinch).
The closest real world example I can think of right now is calling a webservice. Which is usually something like
const baseUrl = 'https://some.tld'
function callExternal(resourcePath, data) {
  fetch(baseUrl + resourcePath, data)...
}
// and then somewhere
callExternal('/some/path', {some: data})
With currying you could do it like this:
const baseFun = baseUrl => (resourcePath, data) => {
  fetch(baseUrl + resourcePath, data)...
}
const callExternal = baseFun('https://some.tld')
// ^ you "pin" the first argument to always be 'https://some.tld'
// and then somewhere
callExternal('/some/path', {some: data})
I personally have had the need to write a curried function myself maybe twice over the course of the past 17 years.
Yes, closures can be used to achieve a similar effect to currying. That said, they only work with functions you create yourself. If baseFun came from a library, you'd have to write an extra function just to close over baseUrl.
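To make that concrete, here is what the extra function looks like, assuming the library ships an uncurried `baseFun(baseUrl, resourcePath, data)`:

// Wrapping by hand to close over baseUrl...
const callExternal = (resourcePath, data) =>
  baseFun('https://some.tld', resourcePath, data);

// ...or writing a tiny generic helper once and reusing it everywhere.
const partial = (f, ...pinned) => (...rest) => f(...pinned, ...rest);
const callExternal2 = partial(baseFun, 'https://some.tld');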
Of course, one can wonder how many times one can usefully curry external functions, but I think that's partially a result of not having currying syntax in the first place. For example, the fact that Ruby has blocks massively influences the typical design of its libraries, when compared to Python.
Think of an array such as int x[5][10]. Here x is a two-dimensional array. Indexing it just once, such as x[i], you get another array, a vector of 10 ints, which can be indexed further. So in a sense x[i] is a "curried" array! In general, given an N-dimensional array, indexing it K (less than N) times yields an array of N-K dimensions.
Now you can think of an array as a restricted function whose domain is 0..n x ... and range is some type. The same concept of "currying" applies to functions. That is, given a function of N arguments, if you provide K (less than N) arguments you get another function of N-K arguments.
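In JavaScript terms, the analogy looks something like this:

const x = [[1, 2, 3], [4, 5, 6]]; // a 2-dim "table"
const row = x[0];                 // index once: a 1-dim array remains
row[2];                           // 3

const f = i => j => x[i][j];      // the same table as a curried function
const g = f(0);                   // "index" once: a function remains
g(2);                             // 3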
It is strange that many programming languages allow currying of arrays but not of functions!
This is also, imo, a terrible example. I've almost never seen a serious application of 2D arrays, unless they were weirdly sparse, that actually used the 2D array notation... Almost always, it's flattened into a memory-efficient representation as a 1D array, with a lookup expression to convert x, y coordinates into an index.
Currying is essentially a trick to turn n-ary functions into unary functions so you can compose them. It also enables the creation of specific functions from generic ones by supplying arguments. Also, since functions can take functions as arguments, it is the ultimate dependency injection pattern, if you're familiar with the OOP lingo.
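A minimal sketch of that dependency-injection reading (the names are hypothetical):

// The generic function takes its dependency (a writer) first...
const makeLogger = write => message => write(`[log] ${message}`);

// ...and supplying that argument "injects" it, yielding a specific function.
const log = makeLogger(console.log);
log('started'); // [log] started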
I get the whole hole-in-the-middle functional dependency injection idea... The problem I frequently run into with this class of things is that people preach an unreasonable purity of thought with regard to the application of the technique, rather than just recognizing places it is useful and applying it there. Either one curries nothing, or one curries all the things. Monads are the fundamental fabric of the universe, or they are eldritch monoliths to worship, but not understand. Rarely is there the pragmatism applied to use these concepts where they make sense, and forego them for the simple where they do not.
I think that Clojure takes a sane approach to topics such as this one. I believe the "data first" approach tends to breed a pragmatic outlook when writing code and leaves behind all the proselytizing. Plus partial application is built into the language via `partial`, so there isn't any need to import any special-purpose libraries :)
It's just a function returning another function. Like if you made a function that took x as its argument and returned a function that took y as its argument and added that to x, and then you called it like add(2)(3). Or, for a real-world example, check out $interpolate in Angular 1.x.
I know all of these things... My brain just does not like the squishiness in most cases, or how silly and contrived most examples that I have seen are.
Here's the simplest example I can come up with, and it's something I do all the time.
I have an array of objects in my state. For this example we'll say they are Todos, and each one has three properties: id, name, and checked. I have a function `checkTodo` with the signature `checkTodo(id: string, event: InputEvent)` that I want to call if one of those Todos is checked.
So say I'm using React. I can use arrow functions like this:
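(Sketching the markup from memory; the Todo props are made up:)

{todos.map(todo => (
  <Todo
    key={todo.id}
    label={todo.name}
    checked={todo.checked}
    onCheck={event => this.checkTodo(todo.id, event)}
  />
))}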
That's okay, but having arrow functions like that in JSX (which you'll have several of in a complex application) can get a bit unwieldy. Currying gives us another option that can be cleaner and avoids having to declare functions inside our render. If we change our method signature to this: `checkTodo = (id: string) => (event: InputEvent)` then we can write our JSX like this:
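(Again with made-up props:)

{todos.map(todo => (
  <Todo
    key={todo.id}
    label={todo.name}
    checked={todo.checked}
    onCheck={this.checkTodo(todo.id)}
  />
))}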
It's a bit cleaner, and can make certain patterns a lot easier in the long run. For example, say we make a shared <Todo /> component like the one above which can be rendered both in a list, as above, or on its own. In that case, it would be bad for the Todo component to expect a method signature of id and event, as the id wouldn't be necessary when we're not working with an array.
Instead, currying means our Todo component can simply expect a function that takes in an InputEvent, and currying can move the other logic where it's needed.
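Concretely (the handler name here is hypothetical):

// Rendered on its own: any (event) => void handler fits the contract.
<Todo label="Buy milk" onCheck={this.toggleSingle} />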
Does that make sense? If you don't know React I could rewrite this using native DOM calls.
Well, in that case, they are of little more value than the equally contrived OOP examples we grew up with, with a base Fruit class, and Banana and Apple subclasses.
Stupid examples are in many cases worse than no examples at all.
Well I'm afraid I can't agree there. But like I said, the $interpolate service is a real-world example and the purpose seems clear enough (why evaluate the string each time?)
Edit: Some follow-up articles:
1. Using this scheme, you can actually have default parameters in currying: https://runkit.com/tolmasky/default-parameters-with-generic-...
2. An example of how to apply this to Babel syntax trees: http://runkit.com/tolmasky/generic-jsx-for-babel-javascript-...