Hacker News

Cool site, I've never tried Clojure before. My first reaction though is that (+ 1 1) is a highly unconventional and possibly confusing way of writing 1+1, and I'm not sure why that design choice came about.



Using the same notation for function calls and math operators has many benefits that may not be immediately obvious, and which may (depending on preference) outweigh the unfamiliarity. For example:

No operator precedence rules to remember.

Use math operators in higher order functions:

    (reduce + [1 2 3])
    (comp + *)
Simpler parser in the language.

Use kebab-case variable names.

Use most symbols in variable names.

The last two are because of reduced ambiguity in the language. The benefits for readability can be huge. For example, I love question marks for predicates in Scheme.

    (vector? x)
    (filter [1 2 3 4 5] odd?)
I'm sure there are more benefits that I haven't thought of.
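For contrast, here's a rough Python sketch of the same higher-order usage (Python chosen just as a familiar infix language): because infix `+` isn't a first-class value, you have to reach for `operator.add` or a lambda.

```python
import functools
import operator

# In Python, infix + is not a value you can pass around; you need
# operator.add (or a lambda) for higher-order functions.
total = functools.reduce(operator.add, [1, 2, 3])  # -> 6

# Predicates also get no special naming help like Scheme's odd? --
# here it's an anonymous lambda instead of a named predicate.
odds = list(filter(lambda n: n % 2 == 1, [1, 2, 3, 4, 5]))  # -> [1, 3, 5]
```

In a Lisp, `+` and `odd?` are just ordinary function values, so they drop straight into `reduce` and `filter`.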


> Use math operators in higher order functions:

To be fair, there just needs to be syntax for passing an infix operator as a "normal" function, like parens in Haskell:

  (reduce + [1 2 3])
  foldl' (+) 0 [1, 2, 3]

And to allow `?` in names, it "just" must not be used anywhere else in the syntax.


In prefix notation (which has been around for a very long time) the operation comes first. Think of it as the "addition" function being applied to the arguments following it. This matches how nearly all programming languages are broken down into their abstract syntax trees while compiling.

As an added bonus, you now know Lisp. This is how all Lisp syntax works: `(function arg1 arg2 ...)`. That's literally it.


[flagged]


"Typical convention" is also an Appeal to Tradition. Different traditions though!

Incidentally, people used to be /really/ against Python because of its use of significant whitespace. Plenty of people bounced off it: they'd try to type something, get a mysterious syntax error, and it would turn out they hadn't typed the right number of spaces or tabs. It used to be the first thing people posted whenever anyone said anything about Python.

I think now people get introduced to IDEs at the same time as Python, or something happens at that early teaching moment that gets them over that hump. The same, in theory at least, happens when you play with a lisp long enough.


I still, to this day, think that significant whitespace in Python is an awful design choice. I avoid it for that reason.


Calling it unreadable is a stretch. It’s just using the same syntax for all functions, instead of having something special for arithmetic.


Prefix notation isn't unconventional and it has many benefits.

The main one being that if you want to add many items together, it's much easier, e.g. (+ 1 2 3 4 5 6).

Instead of doing an operation between two numbers, you are applying a function to a list of inputs. In the "+" function's case, you are adding them together. With this in mind, it's no different from most other programming languages: you would never put your function in the middle of your arguments/inputs.

In Python it's Foo(bar, bar) and not bar Foo() bar, which obviously doesn't make sense.
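The "function applied to a list of inputs" idea is easy to mirror in Python with a variadic function (a hypothetical helper, just to illustrate):

```python
# A rough sketch of Lisp's variadic +, mirroring (+ 1 2 3 4 5 6):
# the function takes any number of arguments and sums them.
def add(*args):
    return sum(args)

print(add(1, 2, 3, 4, 5, 6))  # prints 21
```

Prefix notation is just this call shape with the outer parens moved: `add(1, 2, 3)` vs `(add 1 2 3)`.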


On the contrary - I think it's much easier!

The real advantage of this notation is that it's much, much easier to stack calculations.

Example:

(/ (+ 34 68 12 9 20) 140)

You can imagine how the first part of that could come about:

(+ 34 68 12 9 20)

And then the second part (pseudocode):

(/ sum 140)

In Clojure it's easy to mash them together, or for example to paste the first calculation in the second.

(/ (+ 34 68 12 9 20) 140)

Want to go further? Easy: add another enclosing parens.

(* (/ (+ 34 68 12 9 20) 140) 1.5 2)

Note how we're stacking in a more human-readable order - a new calculation starts on the left hand side, the first thing we see as we read it.

Compare how verbose the alternative is:

((34 + 68 + 12 + 9 + 20) / 140) * 1.5 * 2
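For anyone who wants to convince themselves the two spellings denote the same computation, a quick check (Python used here only as a neutral calculator):

```python
import math

nums = [34, 68, 12, 9, 20]

# (* (/ (+ 34 68 12 9 20) 140) 1.5 2), written with sum() for the + step
prefix_style = (sum(nums) / 140) * 1.5 * 2

# ((34 + 68 + 12 + 9 + 20) / 140) * 1.5 * 2, the infix spelling
infix_style = ((34 + 68 + 12 + 9 + 20) / 140) * 1.5 * 2

assert math.isclose(prefix_style, infix_style)
```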


I'd take the verbosity over having to count the operators from the start of the line to figure out which ones apply to the "1.5 2" at the end.


The parentheses highlight in any decent editor, so it's always very obvious which operators apply to which operands.


This reduces the friction, but doesn't eliminate the needless back and forth. Breaking context locality is just bad.

The threading-macro example in another comment is way better.


There’s no needless back and forth. Everything in parentheses is a complete expression, like a formula. You have the operator (function) and the operands (args). Think of it as add instead of + and multiply instead of *. You get used to it very quickly and then it does not matter.


It's not about thinking differently, it's about operator location: in a long nested expression you simply don't know what operation closes three nesting levels from the end unless you look back at the opening paren three levels from the start.


> (* (/ (+ 34 68 12 9 20) 140) 1.5 2)

The problem with that is by the time I reach `20)` I have already forgotten what operation I'm in. I'd write it more like this:

    (* 
      (/ (+ 34 68 12 9 20) 
         140) 
      1.5 2)
actually I'd write this as

    (/ 
       (* 2 (/ 3 2) (+ 34 68 12 9 20))
       140)
> ((34 + 68 + 12 + 9 + 20) / 140) * 1.5 * 2

Why not (34 + 68 + 12 + 9 + 20) * 1.5 * 2 / 140?


A programming language should be first and foremost precise and readable, not concise. I can barely understand the clojure version but when I read the "traditional" one I don't even need to think.

If you work with clojure a lot, does it become natural?


I've worked with Clojure for years, and it becomes pretty normal, but I don't think it inherently gets better than the alternative, just on par... although it's really nice to never have to think about operator precedence, especially if you have to switch between languages.

Where it really shines though is when you use the threading macro and inline comments to make really complex math a breeze to review:

  (-> (+ 34 68 12 9 20) ;; sum all the numbers
      (/ 140)           ;; divide by 140 for some reason
      (* 2))            ;; multiply by two for its own reason


Yes, especially because the language is so consistent.

99% of the syntax is just (foo arg1 arg2).


I would say yes, as someone who got used to it a few years ago.


Use the thread macro to make this easier to think about:

(-> (+ 34 68 12 9 20) (/ 140))


It resembles command language: + is the command, and 1 and 1 are arguments.

  mv foo.txt bar.txt  # rename a file in the Unix shell
Except there are parentheses to delimit the command because commands can be nested in each other.

(Some Lisps have interactive modes where you can drop the outermost parentheses, allowing you to type like this:

  prompt> + 1 1
  2
but usually the parentheses are part of the formal syntax, whereas this is just a hack in the interactive listener.)

The design choice came about because the syntax started as an internal representation for a symbolic processing (formula manipulation) system being designed by John McCarthy. An internal representation with the operator first, followed by its arguments, was convenient because you don't have to parse around to identify them: you know immediately that the first position of the formula holds a symbol identifying its operator.

The internal form, and its written notation came to be used directly, while the symbol manipulation system came to be programmed in itself, so that its own "formulas" (source code) ended up in that form.


Lisp is one of the oldest programming languages, dating back to around 1960; it predates almost everything in use today and is actually very elegant. You can write incredibly powerful things with this pattern, and it was also really simple to build an interpreter for it with the hardware they had at their disposal at the time.


See also: 'Recursive Functions of Symbolic Expressions and Their Computation by Machine' (1960) [1]

[1] https://www-formal.stanford.edu/jmc/recursive.pdf


When McCarthy was first working on Lisp he intended to migrate to an infix syntax (called m-expressions), but abandoned the effort because everyone came to really like the current syntax (s-expressions) after they got used to it.

I know it's really hard to see how. (And honestly it's sometimes a little hard for me to see what the big deal is, but one of my first programming languages was a dialect of lisp so I can barely even remember not being comfortable with them.) But many people find that, once they get used to them, and especially if they learn to use an editor with a paredit mode, they can start to feel even easier to work with than infix syntax.


Basically you are writing the syntax tree directly. More on that here: https://en.wikipedia.org/wiki/Polish_notation


+ is a function, there are no operators in Clojure.

That allows for some nifty tricks like:

(reduce + [1 2 3]) ; returns 6


Another neat thing is that these functions are n-ary, and that includes 0 args. So (+) => 0 (additive identity) and (*) => 1 (multiplicative identity). Little things like this are missing from languages like Python and make functional programming worse than it should be.
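The Python gap is easy to demonstrate: `functools.reduce` has no notion of an operation's identity element, so the empty case needs the identity supplied by hand.

```python
import functools
import operator

# Unlike Clojure's (+) => 0 and (*) => 1, Python's reduce knows nothing
# about identities -- you must pass the initial value yourself.
assert functools.reduce(operator.add, [], 0) == 0  # additive identity, by hand
assert functools.reduce(operator.mul, [], 1) == 1  # multiplicative identity, by hand

# Without an initial value, the empty case blows up.
try:
    functools.reduce(operator.add, [])
except TypeError:
    print("reduce() of empty iterable with no initial value raises")
```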


Oh wow, this is truly great! Would you have a link to the code behind `(+)` and `(*)`?

(In my functional programming languages, `reduce` also requires passing an initial value, like `foldl (+) 0 [1, 2, 3]`. But surely it should get a monoid instead of two separate arguments for the operation and the starting value!)



Of course `+` and `*` are special, but defining such functions yourself is possible: https://clojure.org/guides/learn/functions#_multi_arity_func... https://clojure.org/guides/learn/functions#_variadic_functio...


Yeah, it always annoys me to have to pass an initial argument. The idea of identities is built into Lisps and seems conspicuously missing from other languages and I don't know why. In Python and JS people have to write stuff like `lambda x: x` over and over again. The identity function is surely special enough to get its own name.
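To make the point concrete, a small Python sketch (defining the identity function by hand, since Python's standard library doesn't name one):

```python
# Python ships no built-in named identity function, so codebases
# redefine it ad hoc, often as an inline lambda x: x.
def identity(x):
    return x

# A typical spot where it shows up: as a default key function.
assert sorted([3, 1, 2], key=identity) == [1, 2, 3]
```

By contrast, Haskell names it `id` and it's used everywhere.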


This does not involve an identity function?

It's just a textbook monoid: an operation on two elements which produces another (+ or *), with a "neutral" element provided (0 or 1).


I know this isn't the identity function, but it's the same idea and a feeling I get when using Lisps that the language is kind of "complete". Including the identity function and having built-in 0-ary functions where it makes sense are examples of this.


Your comment is a highly unconventional and possibly confusing way of writing... to a Chinese person ;)


Because (+ 1 2 3) makes 6. This said, doing any kind of maths in clj, especially division and comparisons, is a PITA - I’d go as far as to say that’s the worst part of the language. Otoh, (< 3 x 4 y) is pretty cool.
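Interestingly, Python's chained comparisons read much like that variadic `<` (x and y below are made-up example values):

```python
# Python's chained comparison, analogous to Clojure's (< 3 x 4 y):
# each adjacent pair is compared, and all must hold.
x, y = 3.5, 10
assert 3 < x < 4 < y  # 3 < 3.5, 3.5 < 4, and 4 < 10 all hold
```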


That’s prefix notation [0] and not unique to Clojure.

[0]: https://en.m.wikipedia.org/wiki/Polish_notation



Try 1 + 2 + 3 + 4 + 5 versus (+ 1 2 3 4 5) for one nice reason.


The way to think about it is that you're processing a list, where the first argument is an operation, and the subsequent arguments are numbers to add. Clojure, and other Lisp based languages use this structure (a list) for representing everything - both data and functions. It turns out to be a very simple yet endlessly flexible and extensible concept.
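That "operator first, then arguments, nested as lists" structure is simple enough that a toy evaluator fits in a few lines. Here's a sketch in Python over nested lists (the operator table and helper names are made up for illustration):

```python
import math

# A toy evaluator over nested lists, sketching the Lisp idea that code
# is data: the first element names the operation, the rest are arguments
# (which may themselves be nested expressions).
OPS = {
    "+": lambda *args: sum(args),
    "*": lambda *args: math.prod(args),
    "/": lambda a, b: a / b,
}

def evaluate(expr):
    if not isinstance(expr, list):
        return expr  # a bare number evaluates to itself
    op, *args = expr
    return OPS[op](*(evaluate(a) for a in args))

# The earlier example (* (/ (+ 34 68 12 9 20) 140) 1.5 2) as a tree:
tree = ["*", ["/", ["+", 34, 68, 12, 9, 20], 140], 1.5, 2]
print(evaluate(tree))
```

Note there's no precedence table anywhere: the nesting itself is the precedence.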


because lisp and consistency

(some-function-name arg1 arg2)



