Coconut – Pythonic functional programming language (coconut-lang.org)
259 points by jonbaer on June 23, 2016 | 130 comments



This is cool, but you can go pretty far w/ FP idioms in Python without requiring a DSL/transpiler:

- Toolz (http://toolz.readthedocs.io/en/latest/) provides the same primitives as Clojure (including partial, compose, juxt, etc.)

- Pysistence (https://pypi.python.org/pypi/pysistence/) provides immutable data structs

- For parallel map you can use https://pypi.python.org/pypi/python-pmap/1.0.2

- TCO is achievable w/ a decorator, without any AST manipulation (similar to Clojure's `loop/recur` construct)
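
A rough sketch of what such a trampoline decorator can look like, using nothing beyond the standard library (the names here are illustrative, not from any particular package):

    class _Recur:
        """Sentinel meaning 'call the function again with these arguments'."""
        def __init__(self, *args, **kwargs):
            self.args, self.kwargs = args, kwargs

    def tail_recursive(fn):
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            while isinstance(result, _Recur):   # bounce in a loop instead of recursing
                result = fn(*result.args, **result.kwargs)
            return result
        return wrapper

    @tail_recursive
    def countdown(n):
        return "done" if n == 0 else _Recur(n - 1)

    print(countdown(10 ** 6))  # 'done', with no RecursionError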

The only thing left is pattern matching and more powerful destructuring (Python already has limited support for it), I guess.


No, you can't, because lambdas.


I know (and bemoan) that lambdas can only have a single expression returned, but can't you just use a named function? (Not as nice, I agree, but doesn't put a hard limit on FP in python.)


And manually closure-convert everything too?


Python supports nested functions, so closure conversion isn't necessary.


Could you elaborate on that?


Python lambda is somewhat limited, compared to lambda in some other languages. Guido isn't overly fond of lambda or other functional building blocks (and even planned to remove it from Python, before there was a sufficient outpouring of support for keeping it). http://www.artima.com/weblogs/viewpost.jsp?thread=98196

As I understand it, the primary limitation is that Python lambda only works with a single expression (I think it also has some weirdness regarding scope when calling functions?). I haven't poked seriously at Python since around the time they were talking about removing it, so I probably have no idea what I'm talking about.

The idea is that generators and list comprehensions do the things most people do with lambdas in a more Pythonic way.


I think the scoping issues you're referring to were resolved a while ago (PEP 227) -- nowadays lambda scope rules are the same as named functions.


Python lambdas are explicitly artificially limited to a single expression. Python has statements which are not expressions, mostly control statements and looping constructs.


The syntax isn't the greatest, but you can get pretty far with Python's ternary expression. Generators in my experience remove the need for most loops. Where lambdas feel really subpar for me is the lack of 'let' and error-handling.
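
As a toy illustration of how far one expression can stretch (a conditional expression and a generator expression inside a lambda):

    label = lambda xs: ["even" if x % 2 == 0 else "odd" for x in xs]
    sum_positive_squares = lambda xs: sum(x * x for x in xs if x > 0)
    print(label([1, 2, 3]))                  # ['odd', 'even', 'odd']
    print(sum_positive_squares([-1, 2, 3]))  # 13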


While it IS a conscious decision to force lambdas to be a single expression, extending them to span multiple lines would clash with the fact that indentation is irrelevant inside a function call.


> The only thing left is pattern matching and more powerful destructuring (Python already has limited support for it), I guess.

Coconut does both of those!


Pattern matching

Does it mimic the Erlang behaviour?


Reminds me a lot of Elixir actually. I'd really prefer a purely functional language with LISP syntax. Does anyone know of any? I find it easier to understand than Haskell.


There's also Shen: http://shenlanguage.org/


Is Clojure close enough? Or perhaps LFE (Lisp-flavored Erlang)?


> ...joining the over 30,000 people already using Coconut...

This triggered my BS detector.

It turns out 30,000 is the sum of all downloads on PyPI since the very first version was released. But there are only 534 downloads on average for each release since version 0.2.2, excluding the blip for 0.3.2. So there is no way there are on the order of 30k users when downloads are roughly 1.8% of that for each release.

This matters because exaggerated claims reduce trustworthiness in a project and are off-putting to potential users.


I should have been more careful with the wording and specified downloads instead of people. I also assumed the statistic was accurate and bots weren't an issue, which is probably wrong. Regardless, it's been removed.


Did they find a way to remove bots from the download statistics? My very obscure image library had over 2 million downloads despite an estimated 10 to 20 users, which would make 30,000 effectively equal zero.


Since, just before that, they wrote that all valid Python is valid Coconut, I thought it would be a joke about every Python user being a Coconut user.


That's actually how I read it, until I saw the comment above and reread it.


Are you saying there are only around 30k Python users? :)


Nah, when I got to the number, I realised that it wasn't going to be a joke like I thought.


While the sentiment resonates with me, I just wish Python would add better syntax for functional programming. Having written a lot of JavaScript lately, I wish Python's built-in functional tools supported something cleaner, kinda like Underscore/Lodash.


Guido is against it, so I doubt it will ever happen.

https://news.ycombinator.com/item?id=9973301


Good good. One more reason to pry 2.7 out of Guido's hands and into 2.8.


Seems to me that something like Coconut, which implements that in a way that accepts existing Python but adds cleaner functional syntax, is one way of making the case for it in future Python.


Specifically?


Not really a fan of the lambda syntax in Python. Comparison:

JavaScript:

  let arr = [1, 2, 3]
  let sumOfSquares = arr.map(n => n * n).reduce((a, b) => a + b) // 14
Python:

  arr = [1, 2, 3]
  sum_of_squares = reduce(lambda a, b: a + b, map(lambda n: n * n, arr)) # 14


Is there a more Pythonic way to do it? Lambdas are cool but usually not the first place you go in Python. I would think something like (my best guess, not a Python pro)...

    sum_of_squares = sum([x*x for x in arr])
Which I think is easier to read than either example post above.

Of course you will point out that this is less powerful than full map and reduce... but meh, there are pros and cons to both styles.


I would write it like this, to avoid constructing the intermediate list:

    sum_of_squares = sum(x*x for x in arr)
This makes use of https://www.python.org/dev/peps/pep-0289/


Thanks for the link, this is good to know.


Worth noting that map() can be parallelized whereas a list comprehension can't necessarily (since it is an explicit loop). The multiprocessing module allows trivial map parallelization, but can't work on list comprehensions.

It's more than just stylistic.
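
For example, a minimal sketch with multiprocessing (the function and data are just placeholders):

    from multiprocessing import Pool

    def square(x):          # must be a module-level function so it can be pickled
        return x * x

    if __name__ == "__main__":
        with Pool(4) as pool:
            print(pool.map(square, range(10)))  # work is spread across 4 processes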


So I have coded everything from dumb web servers (tm), to high performance trading engines (tm). I have toyed with doing the list in parallel thing... and used it in a toy GUI tool or two I wrote... but never really found it that useful in the real world. If you actually want high performance, doing a parallel map is not going to be fast enough. If you are a dumb web server, it's a waste of overhead 99% of the time.

But hey, if you want to use map when you actually need to do a parallel map, cool. But seems very very uncommon. ~ 1 in 10,000 maps I write.


I don't think this is the case: list comprehensions can be expressed as syntactic sugar over list functions; that's how they work in Scala, for example.

http://docs.scala-lang.org/tutorials/FAQ/yield.html


map() can only be parallelized if the function has no side effects. If there are no side effects, list comprehensions can be parallelized just as well.


That example works only because the function sum is already defined in Python. If you wanted to do something less common than summing up elements you would have to either use reduce or implement a for loop.


In Python 3, reduce was intentionally moved into the functools library because it was argued that its two biggest use cases by far were sum and product, which were both added as builtins. In my experience, this has very much been the case. Reduce is still there if you need it, and isn't any more verbose. The only thing that is a little bit more gross about this example is the lambda syntax; I would argue that even that is a moot point, however, since Python supports first-class functions, so you can always just write your complicated reduce function out and then plug it in.


True, but I've used python a lot, and I've used reduce maybe...twice? (well, twice that I can find on my github at least)


I just counted the number of reduce I used in my current python project (6k lines). reduce comes up 32 times. And by comparison, map is used 159 times and filter 125 times - for some reason I tend to use list comprehensions less than I should.


I also think you don't use map a ton once you get the list comprehension syntax down... it's mostly a map with less cruft.


Curious how often reduce is used with something other than operator.add?


One occurrence was to calculate the GCD of a list of polynomials.

In fact I had "reduce" appearing in the names of some of my variables so I used it less than 32 times, about 20 times in that project.


I see nothing particularly inspiring in the examples posted on http://stackoverflow.com/questions/15995/useful-code-which-u...

Could you show your reduce calls?


They are very similar to this one from your link: http://stackoverflow.com/questions/15995/useful-code-which-u...


Well, or write a static function somewhere that you call. Sum is used a lot, so it's handy that it's already written somewhere (vs having to do a lambda a, b: a + b thing).


That seems like an argument against lambda functions in general - why use lambdas when you can define a static function for every case? Well, the answer in my opinion is because it makes code more readable if you can define a simple lambda function instead of having to name every single function in the code base.


Well if you are going to reuse the function, name it. If it is a 1 time thing, use a lambda.


Sounds great in theory. Problem is if you need a lambda that isn't a single expression, you then have to name it. Welcome to the conversation.


What's the advantage of list comprehension over lambdas (assuming the lambda syntax is decently lightweight)?

I feel like I come down hard on the side of lambdas, but I've never really spent enough time in a language with list comprehension, so there's a good chance I'm missing something.


how can you come down hard on the side of one when you've never experienced the other?

I'm from a non-list-comprehension background too, but recently started working a lot in a large python codebase, and have found the dict/list comprehensions to be beautiful. I'm a huge fan. It's a shame lambda syntax is not the best and it's generally crippled, but comprehensions are a great 80/20 compromise for handling most cases very cleanly.
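
For example (nothing fancy, just the two forms in question):

    words = ["spam", "ham", "eggs"]
    lengths = {w: len(w) for w in words}               # dict comprehension
    shouty = [w.upper() for w in words if "a" in w]    # list comprehension with a filter
    print(lengths["spam"], shouty)                     # 4 ['SPAM', 'HAM']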


I find it a lot easier to read, part of which is that I'm used to the Scala way of sequence.map(function). When I see the Python one I can't remember if the function comes first or the array.


I'm not positive, but I think it saves the need to create a new execution frame for each lambda call, since the whole loop executes in the single frame used by the comprehension.

In theory I suppose the VM could have a map() implementation which opportunistically extracts the code from a lambda and inlines it when possible, but I doubt CPython does that. OTOH, I'd be surprised if PyPy doesn't do something like that.


Since Python 3, both generator expressions and list comprehensions create a new stack frame. [1] (2nd to last paragraph)

[1] http://python-history.blogspot.com/2010/06/from-list-compreh...


I'm not meaning when the comprehension is invoked, but during each iteration of the loop within the comprehension.

When doing something like `map(lambda x: 2+x, range(100))`, there will be 101 frames created: the outer frame, and 100 for each invocation of the lambda.

Whereas `[2+x for x in range(100)]` will only create 2: one for the outer frame, and one for the comprehension.


I think it's just less to type really, and it's considered the more standard way to do it.


For simple mathematical operations you can import them as functions:

    from functools import reduce  # reduce is a builtin in Python 2 but lives in functools in 3
    from operator import mul, add
    arr = [1, 2, 3]
    sum_of_squares = reduce(add, map(mul, arr, arr))


It's even more concise in Clojure:

    (defn sum-of-squares [a] (reduce + (map #(* % %) a)))
    (sum-of-squares [1, 2, 3]) ; => 14


I think lambda syntax can be a bit cumbersome, but that aside what I really miss is a clean syntax for chaining functional operations. So often I find myself thinking about data in terms of 'pipelines'. i.e. in JS:

  _.chain(values)
   .map(() => {})
   .flatten()
   .compact()
   .uniq()
   .value()
vs Python where doing the same thing becomes either a nested mess of function calls or comprehensions or a for loop.


But that's a function of API, not the language itself. Django (and most ORMs, I believe) support that kind of behavior:

    MyTable.objects.
        filter(some_row__gt=5).
        exclude(other_row='q').
        order_by('other_row')
The Python iterable APIs have decided to use nesting rather than chaining, but you can still have an underscore-like API: https://github.com/serkanyersen/underscore.py

The bigger problem remains: lambda functions are hideous in Python. map() will forever be ugly if you try to use it in the same way it is used in most functional languages.


This sort of API is hard to implement in Python though, because there's no formal notion of interfaces, so you cannot extend all iterables generically. So you need to use free functions (which don't read well when chained) or a wrapper object (ick).


Elixir and F# have my favorite syntax for that:

    values |> map(&({})) |> flatten |> compact |> uniq
Although, the closure syntax is a little clunky before you get used to it.


I've been thinking that it might be nice to use chaining (though I didn't know it had a name) in ordinary mathematical notation too, writing "x f g" instead of "g(f(x))".


You don't like using pytoolz? Pseudocode:

    result = pytoolz.pipe(values, map, flatten, compact, uniq, value)
    # or
    func = pytoolz.compose(value, uniq, compact, flatten, map)
    results = func(values)


Looks interesting. How do you tell map what function to use?


You probably need `compose(foo, partial(map, fn), bar)`.


Yeah, you do this or use curried version:

    pytoolz.curried.map(fn)
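
Concretely, assuming toolz is installed (the import name is toolz; this is just a sketch of the two styles):

    from functools import partial
    from toolz import compose
    from toolz.curried import map as cmap

    double_all = compose(list, partial(map, lambda x: 2 * x))    # partial(...) style
    double_all_curried = compose(list, cmap(lambda x: 2 * x))    # curried style
    print(double_all([1, 2, 3]))          # [2, 4, 6]
    print(double_all_curried([1, 2, 3]))  # [2, 4, 6]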


I think a big thing for many is the lambda: syntax, as well as the lack of full anonymous functions.


The latter can't really happen given Python is a statements- and indentations-based language. You'd need some really weird meta-magical syntax which really isn't going to happen in Python. Although you can cheat by fucking around with decorators e.g.

    def postfix(fn, *args):
        return lambda arg: fn(arg, *args)

    @postfix(map, range(5))
    def result(i):
        return i * i

    print result
    # [0, 1, 4, 9, 16]
(`postfix` is necessary because `map` takes its argument positionally so it's not possible to pass in the sequence with functools.partial)


> The latter can't really happen given Python is a statements- and indentations-based language.

Yeah, though I suppose you could hack around that and get nearly-full functionality in lambdas if you built a library that either wrapped non-expression statements in functions or provided equivalent functions. There are obviously some statements that there aren't good solutions for in that direction.

OTOH, using named functions is in many cases more readable -- in the context of what is otherwise a normal Python codebase -- than the kind of lambdas that you can't easily write in Python. But I like the Coconut approach of providing a more concise syntax for the kind of lambdas Python already supports.


I agree that typing out the word lambda is annoying, but you can use them as fully anonymous functions.


> you can use them as fully anonymous functions.

A lambda can only contain a single expression, by "full anonymous function" I'm guessing hexane360 means multiple statements. You can't put a for loop or a context manager in a lambda for instance.


You can nest lambdas to get the equivalent of several expressions. I once wrote a Runge-Kutta example on Rosetta Code showing this:

http://rosettacode.org/wiki/Runge-Kutta_method#using_lambda

It does not look as bad as one might expect, though the nesting of parentheses makes things messy.


You can but you still can't get statements in there.


You can hack together multiple expressions by chaining them with `and`: http://sigusr2.net/one-line-echo-server-using-let-python.htm...


That still doesn't get you statements.

You can't get context managers or exception handling (although you can raise exceptions) into lambdas, I've tried.


Well you might be able to if you add a bunch of named function combinators wrapping these, but definitely not with only lambdas, unless you define your combinators using `ast`, which I think would let you define statements via expressions.


That's an awful lot of effort to go to to avoid naming a function.


Well sure, you could do something like

    def apply_ctx(ctx, func, *func_args, **func_kwargs):
        with ctx as __ctx:
            return func(*func_args, ctx=__ctx, **func_kwargs)
but you can't do that itself as a lambda. And I consider modifying the ast cheating :P


> but you can't do that itself as a lambda. And I consider modifying the ast cheating :P

No disagreement, really depends whether you're a "rules" or "spirit" kind of person though.


> Supports pipeline-style programming

>

> "hello, world!" |> print

Please follow the Elixir convention and call this by its proper name: illuminati pyramid operator.


This is an impressive effort, but one major problem is that the Python runtime lacks efficient support for tail calls. I see that there is an annotation for self-recursion, but this is only a special case. Scala has a similar problem with the JVM. Still, it does permit a nice functional syntax on top of what is perhaps lower-level imperative code.


That is a problem but it's not like Python is "efficient" or performant in other areas that don't call an underlying C library.


Your function in the tail recursion optimization example is actually not tail recursive. Factorial needs an accumulator parameter to get that tail call.
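
Something along these lines, for comparison (plain Python, just to show the shape):

    def factorial(n, acc=1):
        if n <= 1:
            return acc                    # nothing left to do after the recursive call
        return factorial(n - 1, acc * n)  # the recursive call is now in tail position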


Maybe the decorator rewrites the AST. I've seen this before.


It does, when the function is actually TCO-optimizable; the example is not. In the docs, however, it's correct: http://coconut.readthedocs.io/en/master/DOCS.html#recursive


Good catch! I'll fix it right away.


The example algebraic data type isn't an algebraic data type at all. It's a single type with a single type constructor. How is this different from a regular class?


It looks like data declarations get translated to immutable subclasses of collections.namedtuple in such a way that you can match on the resulting values. So it's really less like an algebraic data type and more like one of Scala's case classes.
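
Roughly the kind of thing a Coconut data declaration might compile down to (an illustrative sketch, not the actual generated code):

    from collections import namedtuple

    class Point(namedtuple("Point", ["x", "y"])):
        __slots__ = ()            # immutable, lightweight instances

    p = Point(1, 2)
    print(p, p.x, p.y)            # Point(x=1, y=2) 1 2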


When I used namedtuple that way it tended to blow up on me, because namedtuples implement 'dunder' methods to act as sequences, which is not what I expect for some random algebraic datatype. This makes bugs manifest as very head-scratching behavior. Nowadays I use my own struct type builder with only the equivalent of 'derives Show' and such.


Interesting. Looks more polished than mochi[1] and hask[2].

[1] https://github.com/i2y/mochi

[2] https://github.com/billpmurphy/hask


Well, so far nobody's complained about Python 3 support yet :)

Seriously now, this looks pretty neat, and even includes Jupyter support.

I suspect it manipulates the AST like Hy, and I can't see any reason why it shouldn't work on PyPy.


It works with Python 3 just fine. The 'coconut-lang' and 'coconut-lang-git' packages on the AUR both use py3.


  $ coconut
  Coconut Interpreter:
  (type "exit()" or press Ctrl-D to end)
  >>> match [head] + tail in [0, 1, 2, 3]:
      print(head, tail)
      
  CoconutParseError: parsing failed (line 2)
      print(head, tail)


What are all the $ for? They seem ugly.


Quoting from the tutorial:

    Second, the partial application. Think of partial application
    as lazy function calling, and $ as the lazy-ify operator,
    where lazy just means “don’t evaluate this until you need to”.
    In Coconut, if a function call is prefixed by a $, like in this
    example, instead of actually performing the function call, a
    new function is returned with the given arguments already 
    provided to it, so that when it is then called, it will be
    called with both the partially-applied arguments and the new
    arguments, in that order. In this case, reduce$((*))
    is equivalent to (*args, **kwargs) -> reduce((*), *args, **kwargs).
https://coconut.readthedocs.io/en/master/HELP.html (about a third of the way down, just search for "$" and look near the first occurrences)

I agree though, it makes the code look very... noisy.
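
In plain Python terms, that reduce$((*)) is roughly a functools.partial (a sketch of the equivalence, not Coconut's actual output):

    from functools import partial, reduce
    from operator import mul

    product = partial(reduce, mul)    # roughly what reduce$((*)) builds
    print(product([1, 2, 3, 4]))      # 24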


I'm all for syntactic sugar, but my experience with teaching programming is that Python is easy to pick up because of the lack of weird operators. Decorators are an obvious example where many struggle; and this looks like another.

It will take someone cleverer than me to come up with an alternative solution; I suspect they chose "$" because it isn't an operator in Python and you have to do a lot of partial application, so the logic was probably "short is better". But having something more intuitive would be great (more pythonic?).


Python has lots of weird operators.

List/dict comprehensions, list slicing, * and ** in arguments. Comma for tuples (which makes sense to me, but adds a lot of magic, especially since, in my experience, beginners often assume that the parentheses are what make tuples). There are many more examples, but those are the ones that immediately spring to mind.

Reading actual production Python code is not any easier than reading many other languages, and it's certainly far from "executable pseudocode".


That's fair: *args and **kwargs do feel like a hack, although denoting "zero or more" as * has a bit of tradition. I'd have to disagree with list/dict comprehension, pretty natural considering e.g. JavaScript/JSON also use [] / {} for each, respectively (obviously completely subjective).

I also get that there's only so many symbols to pick from. Would have been cool if it was *, which is already used for arguments.

I think my larger point is that even the small decisions when making a language matter. There isn't really right/wrong or better/worse, but IMO Python operators feel more cohesive/uniform than Ruby or Perl. Also, describing how operator usage feels is hard.



Looks like the $ is necessary for partial function application


How does this compare with Hy?


Hy doesn't push FP idioms AFAIK. It's a Clojure-flavored Lisp in Python's clothing.


Oh, I should read more :) thanks!


I'd suggest fixing the front page, which claims ADT support which seems untrue, and the tail recursive example isn't tail recursive at all. Errors like that make this too easy to dismiss.


Have you ever looked at Perl 6, by any chance? We have most of the features you list, except for instance Tail Call Optimization, but we'd love to have someone implement it!


I think the purpose of Coconut is not to give the world new functional programming features, but to give you "real" functional programming with access to Python's vast library ecosystem.


Is there a Pythonic declarative programming language?


Yep! https://sites.google.com/site/pydatalog/

(Technically it's not a separate programming language, rather a lot of Python magic.)



Reminds me of OCaml, a nice language that deserves more exposure on HN.


Is coconut strongly typed? Is it closer to Haskell or to Lisp / ML?


You probably mean statically typed.


Coconut is a superset of Python, and Python is dynamically typed, so Coconut also has to be.


Rise of the LISP.


What's "pythonic" exactly?



Doesn't really help in this context. What's the difference between "pythonic" and "idiomatic"?

Does Coconut being pythonic mean it's written in idiomatic Python (then why is it a different language?) or does it mean it's written in idiomatic Coconut?

I don't think they thought this through.


They just mean "this language looks in general like Python." You're overthinking it.

It's like saying that JavaScript looks C-ish or Java-ish. It definitely doesn't look exactly the same; if you see e.g.

    var result = [].slice.call(vals, 0);
then you can be highly certain that that's JS and not Java or C. But the idea of having blocks of code enclosed in curly braces, which are formed out of statements usually delimited by semicolons (except for special forms like if-statements and for-loops which don't need to be followed with a terminal semicolon), etc. is very much a C-style thing.


Pythonic is just short for idiomatic Python. It's a different language presumably because it introduces some new language constructs that are not compatible with Python. Perhaps they're saying it's 'Pythonic' because of some loosely defined notion that the new constructs feel and behave in the same way that Python does, but that's just speculation.


Python has a very strong set of conventions and beliefs about what 'good Python' looks like and what is in the spirit of the language. It's a common term in the Python community.


Hipsterbabble.


Name issues. E.g. Google: Coconut-Lang cookbook ...


> There should be one-- and preferably only one --obvious way to do it.

https://www.python.org/dev/peps/pep-0020/


I've mentioned this before and I'm still curious to hear more.

> Barry Warsaw, one of the core Python developers, once said that it frustrated him that "The Zen of Python" (PEP 20) is used as a style guide for Python code, since it was originally written as a poem about Python's internal design. [0]

I did a quick search for more background but didn't find anything.

[0]: https://github.com/amontalenti/elements-of-python-style#a-li...


Over the years I have come to dislike that phrase more and more. I think it hindered Python from growing.

Also, Python 2 and Python 3 give me at least two ways to print :)


I've never understood why that phrase applies to Python when the language still has the map, filter, reduce, etc. functional functions. Generators and comprehensions completely removed the need for those, yet the functions remain even in Python 3.

On the subject of printing, there are actual reasons for having this duplicate functionality. The main way to print is of course the print function in three, or the print statement in two. The print function initially lacked the ability to flush the buffer: you had to call sys.stdout.flush(), whereas now print includes a boolean "flush" keyword argument. The second way to print is to call sys.stdout.write(), which does not automatically append a newline. The print function covers that with the "end" keyword argument; you can pass it an empty string in place of the newline.

So for most use cases, print() is just fine. Sometimes you want finer grained control, and for that you would use the sys module.
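
A quick illustration of the knobs described above:

    import sys

    print("no newline here", end="")     # suppress the trailing newline
    print(" ...flushed", flush=True)     # flush the buffer immediately (Python 3.3+)
    sys.stdout.write("raw write\n")      # lower-level: nothing appended, no flush keyword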


Python 3 at least changes the semantics of map and filter so that they are generators.

Lazy sequences are the sort of thing that you don't care about until you need to process a ten-million-line file (or whatever) and suddenly find that your program is slowing down for pointless memory allocations up-front -- then they become unbelievably important.


Yes, you are absolutely correct. In our production code, which is currently running 2.7, we make extensive use of the itertools versions of those functions (imap, ifilter, etc.) for this reason. The itertools functions behave exactly like the Python 3 builtins, as iterators. The memory footprint is minimal, and they are just overall faster.
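
For example, in 2.7 (a sketch; the Python 3 builtins behave like these):

    # Python 2.7: lazy, iterator-based equivalents of the map/filter builtins
    from itertools import imap, ifilter

    squares_of_evens = imap(lambda x: x * x,
                            ifilter(lambda x: x % 2 == 0, xrange(10 ** 7)))
    print(sum(squares_of_evens))  # consumed lazily, no ten-million-element list in memory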


> Generators and comprehensions completely removed the need for those, yet the functions remain even in Python 3.

Hmm, I had never thought about it like that. Are there any articles, etc. you recommend about the differences or the advantages of one method over the other? It'd be nice to look into this more.


I don't think you'll find one. Guido wanted to remove them [1], but people wanted them to stay for some reason.

There is no difference between `map(func, values)` and `[func(x) for x in values]`, except for character count.

[1] http://www.artima.com/weblogs/viewpost.jsp?thread=98196



