It's pretty easy to reason about that particular example if you have a type system. In a simply typed language, "f f x" is just a type error, so it doesn't do anything. In the polymorphic lambda calculus, the typing forces f to be either the identity, in which case "f f x = x", or a constant function, in which case you can replace "f f" with that constant.
Most functional languages parse a b c d e f as a(b, c, d, e, f), regardless of what b, c, d, e, f are. Do you know any language where this is different?
OCaml parses a b c as ((a b) c). If the compiler can determine that a is a function taking 2 arguments, it will optimise the code so that it's effectively a(b, c). But in general that's not possible, especially when the compiler determines that a is a function with a single argument (in which case its return value must be another function, which is in turn called with c), or when a is a first-class function (e.g. passed as an argument).
My toy FP language did. :) It's perfectly possible to just parse a list of arguments and figure out currying in a later compiler stage. In my experience it even helps somewhat with producing nicer arity-specific error messages, which users might appreciate.
And while different from Algol-descended languages, I don't think that's particularly confusing. (Not that you were saying so, just continuing the conversation.) You can put together a confusing expression with it, but I can put together confusing things with Algol syntax without too much effort either. I've got the source code with my name on the blame to prove it.
GP's point is that while yes, we know there's no ambiguity since `f` has arity 1, in general you might not have the arity of any given function fresh in your head, and therefore can't tell (in Ruby) just from looking at `f f 1` whether it means a single invocation of an arity-2 function or two invocations of an arity-1 function.
xargs is not a counterexample. It is not a shell builtin; it is a program that takes a bunch of string arguments like every other program a shell would call.
xargs -0 -n1 bash -c 'mv "$1" "${1//.js/.ts}"' --
Everything to the right of xargs is a string argument passed to xargs.
I guess the difference is that nesting calls (commands) is much less common in the shell, and (closely related) commands don’t really have a return value.
Is it? In practice I find that my shell one-liners are orders of magnitude more complex than what I would dare to write in any other 'proper' language.
That's the joke. The number of terms doesn't change, and the last two have the same number of parens. The statements relate quantities of those things as though they're a problem, when in reality the "just right one" only changes the order slightly.
Hence the joke. It's one of those jokes that earns a loud sigh from me, rather than a chuckle.
> the nice thing about (f x) is that the parentheses group f with x
The drawback is that they are put on the same level, whereas in most people’s minds the function is a fundamentally different thing from the argument(s). The “f(x)” syntax reflects that asymmetry.
The function represents the operation or computation you want to perform. The arguments represent inputs or parameters for that operation or computation.
Of course, theoretically you could also view the function as a parameter of the computation and/or the arguments as specifying an operation (in particular if those are also functions), but for most concrete function invocations that's not the usual mental model. E.g. in "sin(x)" one usually has in mind to compute the sine, and x is the input for that computation. One doesn't think "I want to do x, and `sin` is the input of that operation I want to do". One also doesn't think "I want to do a computation, and `sin` and `x` are inputs for that computation". That's why you may mentally have a sine graph ranging over different x values, but you don't imagine an x graph ranging over different functions you could apply to x.
But even in high school, topics start to cover functional equations (calculus, e/ln). I'm not sure the <function> vs <value> distinction doesn't come from the mainstream imperative paradigms alone.
The distinction isn't between functions and values in general, it's between the function being called and the arguments passed to the function being called. The difference isn't in the things themselves, it's in the role that they play in the specific expression we're reading.
This argument is rather strange. Maybe for people who have never interacted with different -fix notations? Human language binds concepts with arguments in all kinds of directions. I'd be surprised if this were enough to annoy people.
Natural language isn't precise and a lot is inferred from context. Exact order often doesn't really matter.
In formal languages, however, you want to be as unambiguous and exact as possible, so it makes sense to use syntax and symbols to emphasize when elements differ in kind.
Incidentally, that’s also why we use syntax highlighting. One could, of course, use syntax highlighting instead of symbols to indicate the difference between function and arguments (between operation and operands), but that would interfere with the use of syntax highlighting for token categories (e.g. for literals of different types).
You're not supposed to look for ending parentheses in Lisp; getting the right number of them is more or less your editor's job, and they are maximally stacked together like this: )))), as much as the given nesting and indentation permit. Given some (let ..., the closing parenthesis could be the third one in some ))))) sequence; you rarely care which one. If it's not matched in that specific ))))) sequence where you expect, then that's a problem.
)))) is like a ground symbol in a schematic:
(+5V
(+10V
(-10V
(INPUT3 ...))))
----- local "ground" for all the above
(different circuit)
That doesn’t seem very user-friendly. ;) Or at least that’s always my perception when programming Lisp.
As a side note, I believe that different people fundamentally have different programming languages that objectively suit them best, due to differences in their respective psychology and way of thinking and perceiving. It’s interesting to discuss trade-offs, but in the end there is no single truth about which language is better overall — it depends on the task and on the person. There’s no “one size fits all”. What’s valuable is to understand why a certain syntax or language might work better for certain people.
It does, but there are still many different purposes. In JS:
( can mean: function call, part of an expression to be evaluated first, regex capture group
{ can mean: scope [of ... loop, if, lambda, function etc.], object definition, string interpolation code i.e. ${..}, class definition
There are probably other things I am not thinking of.
The one that trips me up in JS is that code like x.map(v => {...v, v2}) breaks, because the parser sees the { as the beginning of a block, not the object literal I intended.
The working solution is x.map(v => ({...v, v2}))
But x.map(v => v+1) is allowed.
I don't think the compiler could figure out what you meant, because what looks like an object literal is also a valid function body. For example, { myvar } can be parsed either as the object literal { myvar: myvar } or as a block containing the lone expression statement myvar (in which case the arrow function returns undefined).
I wish we had stats about sexps (we shall call it the kinthey scale). As a kid, what you said was the first thing my brain caught on to. It compresses the number of things I had to remember: it's ~always (idea arguments...). We can now discuss interesting problems.
I had the same feeling when I first started using a Lisp (which was Scheme, to work through SICP and the Abelson & Sussman MIT course). I was utterly entranced by the simplicity and uniformity. It was absolutely a factor in the way syntax works in my personal language (which I use for almost everything in my day to day). I really do agree that learning a Lisp can truly expand the way in which a dev views and thinks about code.
Brown University's PLT lab also vouched for this approach. Their textbook starts with a description of sexps as syntax, saying that's the only parsing theory covered there.
The negative reactions that a lot of people have toward Lisp's parentheses are not because of the function call syntax but because parentheses are used everywhere else in Lisp's syntax.
If your standard function call convention is just `f x`, but you also support precedence operators, then `f (x)` automatically becomes possible.
It reminds me of an old Lisp joke, though: