I know (and bemoan) that lambdas can only have a single expression returned, but can't you just use a named function? (Not as nice, I agree, but doesn't put a hard limit on FP in python.)
Python lambda is somewhat limited, compared to lambda in some other languages. Guido isn't overly fond of lambda or other functional building blocks (and even planned to remove it from Python, before there was a sufficient outpouring of support for keeping it). http://www.artima.com/weblogs/viewpost.jsp?thread=98196
As I understand it, the primary limitation is that Python lambda only works with a single expression (I think it also has some weirdness regarding scope when calling functions?). I haven't poked seriously at Python since around about the time they were talking about removing it, so I probably have no idea what I'm talking about.
The idea is that generators and list comprehensions do the things most people do with lambdas in a more Pythonic way.
Python lambdas are explicitly artificially limited to a single expression. Python has statements which are not expressions, mostly control statements and looping constructs.
The syntax isn't the greatest, but you can get pretty far with Python's ternary expression. Generators in my experience remove the need for most loops. Where lambdas feel really subpar for me is the lack of 'let' and error-handling.
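For instance, a couple of tiny illustrative snippets (hypothetical names, just to show the shape):

    # Conditional logic inside a lambda via the ternary expression:
    sign = lambda x: -1 if x < 0 else (1 if x > 0 else 0)

    # A generator expression standing in for an explicit search loop:
    first_negative = next((x for x in [3, 1, -4, 1, -5] if x < 0), None)  # -4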
While it IS a conscious decision to force lambdas to be a single expression, extending them to span multiple statements would also clash with the fact that indentation is irrelevant inside a function call.
Reminds me a lot of Elixir actually. I'd really prefer a purely functional language with LISP syntax. Does anyone know of any? I find it easier to understand than Haskell..
> ...joining the over 30,000 people already using Coconut...
This triggered my BS detector.
It turns out 30,000 is the sum of all downloads on PyPI since the very first version was released. But there are only 534 downloads on average for each release since version 0.2.2, excluding the blip for 0.3.2. So there is no way there are on the order of 30k users when downloads are roughly 1.8% of that for each release.
This matters because exaggerated claims reduce trustworthiness in a project and are off-putting to potential users.
I should have been more careful with the wording and specified downloads instead of people. I also assumed the statistic was accurate and bots weren't an issue, which is probably wrong. Regardless, it's been removed.
Did they find a way to remove bots from the download statistics? My very obscure image library had over 2 million downloads despite an estimated 10 to 20 users, which would make 30,000 effectively equal zero.
Since they just before that wrote that all valid Python is valid Coconut, I thought that it would be a joke about every Python user being a Coconut user.
While I resonate with the sentiment, I just wish Python would add better syntax for functional programming. Having written a lot of JavaScript lately, I wish Python's built-in functional tools supported something cleaner, kinda like Underscore/Lodash.
Seems to me something like Coconut, which implements that in a way that accepts existing Python but adds cleaner functional syntax, is one way of making the case for that in future Python.
Is there a more Pythonic way to do it? Lambdas are cool but usually not the first place you go in Python. I would think something like (my best guess, not a Python pro)...
sum_of_squares = sum([x*x for x in arr])
Which I think is easier to read than either example post above.
Of course you will point out that this is less powerful than full map and reduce.. but meh... pros and cons to both styles
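For comparison, a rough sketch of the same computation in both styles, using plain Python and functools.reduce:

    from functools import reduce
    import operator

    arr = [1, 2, 3, 4]

    # Comprehension style (a generator expression even avoids the intermediate list):
    sum_of_squares = sum(x * x for x in arr)

    # map/reduce style:
    sum_of_squares_fp = reduce(operator.add, map(lambda x: x * x, arr), 0)

    assert sum_of_squares == sum_of_squares_fp == 30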
Worth noting that map() can be parallelized whereas a list comprehension can't necessarily (since it is an explicit loop). The multiprocessing module allows trivial map parallelization, but can't work on list comprehensions.
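A minimal sketch of that with the multiprocessing module; note the function has to be a picklable module-level def rather than a lambda:

    from multiprocessing import Pool

    def square(x):
        return x * x

    if __name__ == "__main__":
        with Pool(4) as pool:
            # Same shape as the builtin map(), but spread across 4 worker processes.
            results = pool.map(square, range(10))
        print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]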
So I have coded everything from dumb web servers (tm), to high performance trading engines (tm). I have toyed with doing the list in parallel thing... and used it in a toy GUI tool or two I wrote... but never really found it that useful in the real world. If you actually want high performance, doing a parallel map is not going to be fast enough. If you are a dumb web server, it's a waste of overhead 99% of the time.
But hey, if you want to use map when you actually need to do a parallel map, cool. But seems very very uncommon. ~ 1 in 10,000 maps I write.
That example works only because the function sum is already defined in Python. If you wanted to do something less common than summing up elements you would have to either use reduce or implement a for loop.
In Python 3, reduce was intentionally moved into the functools library because it was argued that its two biggest use cases by far were sum and product, which were both added as builtins. In my experience, this has very much been the case. Reduce is still there if you need it, and isn't any more verbose. The only thing that is a little bit more gross about this example is the lambda syntax; I would argue that even that is a moot point, however, since Python supports first-class functions, so you can always just write your complicated reduce function out and then plug it in.
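For instance, a small sketch of writing the reduction function out and plugging it in (hypothetical merge helper):

    from functools import reduce

    def merge(acc, overrides):
        """Fold one dict of settings into the accumulator, later values winning."""
        acc.update(overrides)
        return acc

    configs = [{"a": 1}, {"b": 2}, {"a": 3}]
    print(reduce(merge, configs, {}))  # {'a': 3, 'b': 2}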
I just counted the number of times I used reduce in my current Python project (6k lines): reduce comes up 32 times. By comparison, map is used 159 times and filter 125 times - for some reason I tend to use list comprehensions less than I should.
That seems like an argument against lambda functions in general - why use lambdas when you can define a static function for every case? Well, the answer in my opinion is because it makes code more readable if you can define a simple lambda function instead of having to name every single function in the code base.
What's the advantage of list comprehension over lambdas (assuming the lambda syntax is decently lightweight)?
I feel like I come down hard on the side of lambdas, but I've never really spent enough time in a language with list comprehension, so there's a good chance I'm missing something.
how can you come down hard on the side of one when you've never experienced the other?
I'm from a non-list-comprehension background too, but recently started working a lot in a large python codebase, and have found the dict/list comprehensions to be beautiful. I'm a huge fan. It's a shame lambda syntax is not the best and it's generally crippled, but comprehensions are a great 80/20 compromise for handling most cases very cleanly.
I find it a lot easier to read, part of which is that I'm used to the Scala way of sequence dot map function. When I see the python one I can't remember if the function comes first or the array.
I'm not positive, but I think it saves the need to create a new execution frame for each lambda call, since the whole loop executes in single frame used by the comprehension.
In theory I suppose the VM could have a map() implementation which opportunistically extracts the code from a lambda and inlines them when possible; but doubt CPython does that. OTOH, I'd be surprised if PyPy doesn't do something like that.
I'm not meaning when the comprehension is invoked, but during each iteration of the loop within the comprehension.
When doing something like `map(lambda x: 2+x, range(100))`, there will be 101 frames created: the outer frame, plus one for each of the 100 invocations of the lambda.
Whereas `[2+x for x in range(100)]` will only create 2: one for the outer frame, and one for the comprehension.
I think lambda syntax can be a bit cumbersome, but that aside, what I really miss is a clean syntax for chaining functional operations. So often I find myself thinking about data in terms of 'pipelines', e.g. chained array methods in JS.
The bigger problem remains: lambda functions are hideous in Python. map() will forever be ugly if you try to use it in the same way it is used in most functional languages.
This sort of API is hard to implement in Python though, because there's no formal notion of interfaces, so you cannot extend all iterables generically. So you need to use free functions (which don't read well when chained) or a wrapper object (ick).
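A rough sketch of the wrapper-object route (a hypothetical Seq class, not any particular library):

    from functools import reduce

    class Seq:
        """Thin wrapper around an iterable, just to get method chaining."""
        def __init__(self, items):
            self._items = items

        def map(self, fn):
            return Seq(fn(x) for x in self._items)

        def filter(self, pred):
            return Seq(x for x in self._items if pred(x))

        def fold(self, fn, initial):
            return reduce(fn, self._items, initial)

        def to_list(self):
            return list(self._items)

    result = Seq(range(10)).filter(lambda x: x % 2 == 0).map(lambda x: x * x).to_list()
    print(result)  # [0, 4, 16, 36, 64]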
I've been thinking that it might be nice to use chaining (though I didn't know it had a name) in ordinary mathematical notation too, writing "x f g" instead of "g(f(x))".
The latter can't really happen given Python is a statements- and indentations-based language. You'd need some really weird meta-magical syntax which really isn't going to happen in Python. Although you can cheat by fucking around with decorators e.g.
    def postfix(fn, *args):
        return lambda arg: fn(arg, *args)

    @postfix(map, range(5))
    def result(i):
        return i * i

    print result
    # [0, 1, 4, 9, 16]
(`postfix` is necessary because `map` takes its argument positionally so it's not possible to pass in the sequence with functools.partial)
> The latter can't really happen given Python is a statements- and indentations-based language.
Yeah, though I suppose you could hack around that and get nearly-full functionality in lambdas if you built a library that either wrapped non-expression statements in functions or provided equivalent functions. There are obviously some statements that there aren't good solutions for in that direction.
OTOH, using named functions is in many cases more readable -- in the context of what is otherwise a normal Python codebase -- than the kind of lambdas that you can't easily write in Python. But I like the Coconut approach of providing a more concise syntax for the kind of lambdas Python already supports.
A lambda can only contain a single expression, by "full anonymous function" I'm guessing hexane360 means multiple statements. You can't put a for loop or a context manager in a lambda for instance.
Well you might be able to if you add a bunch of named function combinators wrapping these, but definitely not with only lambdas, unless you define your combinators using `ast`, which I think would let you define statements via expressions.
This is an impressive effort, but one major problem is that the Python runtime lacks efficient support for tail calls. I see that there is an annotation for self-recursion, but this is only a special case. Scala has a similar problem with the JVM. Still, it does permit a nice functional syntax on top of otherwise lower-level imperative code.
Your function in the tail recursion optimization example is actually not tail recursive. Factorial needs an accumulator parameter to get that tail call.
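A sketch of the accumulator version in plain Python (no actual TCO happens here, of course; the point is just the shape of the call):

    def factorial(n, acc=1):
        # The recursive call is the very last thing evaluated, so a TCO-capable
        # compiler could reuse the current stack frame instead of growing it.
        if n <= 1:
            return acc
        return factorial(n - 1, acc * n)

    print(factorial(5))  # 120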
The example algebraic data type isn't an algebraic data type at all. It's a single type with a single type constructor. How is this different from a regular class?
It looks like data declarations get translated to immutable subclasses of collections.namedtuple in such a way that you can match on the resulting values. So it's really less like an algebraic data type and more like one of Scala's case classes.
When I used namedtuple that way it tended to blow up on me, because namedtuples implement 'dunder' methods to act as sequences, which is not what I expect for some random algebraic datatype. This makes bugs manifest as very head-scratching behavior. Nowadays I use my own struct type builder with only the equivalent of 'derives Show' and such.
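A small illustration of the surprise, assuming (as described above) that data declarations compile down to namedtuple subclasses:

    from collections import namedtuple

    # Stand-in for what a `data Point(x, y)` declaration might compile down to.
    class Point(namedtuple("Point", ["x", "y"])):
        __slots__ = ()

    p = Point(1, 2)
    # The tuple "dunder" methods come along for free, which can surprise you:
    print(len(p))    # 2
    print(p[0])      # 1 - indexing works like a tuple
    x, y = p         # so does unpacking
    print(p + (3,))  # (1, 2, 3) - even concatenation, which silently drops the type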
> Second, the partial application. Think of partial application as lazy function calling, and $ as the lazy-ify operator, where lazy just means "don't evaluate this until you need to". In Coconut, if a function call is prefixed by a $, like in this example, instead of actually performing the function call, a new function is returned with the given arguments already provided to it, so that when it is then called, it will be called with both the partially-applied arguments and the new arguments, in that order. In this case, reduce$((*)) is equivalent to (*args, **kwargs) -> reduce((*), *args, **kwargs).
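In plain Python terms, that is roughly what functools.partial does; a rough equivalent sketch (Coconut's (*) operator-function corresponds to operator.mul):

    from functools import partial, reduce
    import operator

    # Conceptually what reduce$((*)) produces: reduce with its first argument pre-filled.
    product = partial(reduce, operator.mul)

    print(product([1, 2, 3, 4]))  # 24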
I'm all for syntactic sugar, but my experience with teaching programming is that Python is easy to pick up because of the lack of weird operators. Decorators are an obvious example where many struggle; and this looks like another.
It will take someone cleverer than me to come up with an alternative solution; I suspect they chose "$" because it isn't an operator in Python and you have to do a lot of partial application, so the logic was probably "short is better". But having something more intuitive would be great (more pythonic?).
List/dict comprehensions, list slicing, * and ** in arguments. Commas for tuples (which makes sense to me, but adds a lot of magic, especially since, in my experience, beginners often assume that the parentheses are what make tuples). There are many more examples, but those are the ones that immediately spring to mind.
Reading actual production Python code is not any easier than in many other languages, and it's certainly far from "executable pseudocode".
That's fair - *args and **kwargs do feel like a hack, although denoting "zero or more" as * has a bit of tradition. I'd have to disagree on list/dict comprehensions, though - they're pretty natural considering that e.g. JavaScript/JSON also use [] and {} for each, respectively (obviously completely subjective).
I also get that there's only so many symbols to pick from. Would have been cool if it was *, which is already used for arguments.
I think my larger point is that even the small decisions when making a language matter. There isn't really right/wrong or better/worse, but IMO Python operators feel more cohesive/uniform than Ruby or Perl. Also, describing how operator usage feels is hard.
I'd suggest fixing the front page, which claims ADT support that seems untrue and has a tail recursion example that isn't tail recursive at all. Errors like that make this too easy to dismiss.
Have you ever looked at Perl 6, by any chance? We have most of the features you list, except for instance Tail Call Optimization, but we'd love to have someone implementing it!
I think the purpose of Coconut is not to give the world new functional programming features, but to give you "real" functional programming with access to Python's vast library ecosystem.
Doesn't really help in this context. What's the difference between "pythonic" and "idiomatic"?
Does Coconut being pythonic mean it's written in idiomatic Python (then why is it a different language?) or does it mean it's written in idiomatic Coconut?
They just mean "this language looks in general like Python." You're overthinking it.
It's like saying that JavaScript looks C-ish or Java-ish. It definitely doesn't look exactly the same; if you see e.g.
var result = [].slice.call(vals, 0);
then you can be highly certain that that's JS and not Java or C. But the idea of having blocks of code enclosed in curly braces, which are formed out of statements usually delimited by semicolons (except for special forms like if-statements and for-loops which don't need to be followed with a terminal semicolon), etc. is very much a C-style thing.
Pythonic is just short for idiomatic Python. It's a different language presumably because it introduces some new language constructs that are not compatible with Python. Perhaps they're saying it's 'Pythonic' because of some loosely defined notion that the new constructs feel and behave in the same way that Python does, but that's just speculation.
Python has a very strong set of conventions, beliefs about what 'good python' looks like/what is in the spirit of the language. It's a common term in the python community.
I've mentioned this before and I'm still curious to hear more.
> Barry Warsaw, one of the core Python developers, once said that it frustrated him that "The Zen of Python" (PEP 20) is used as a style guide for Python code, since it was originally written as a poem about Python's internal design. [0]
I did a quick search for more background but didn't find anything.
I've never understood why that phrase applies to Python when the language still has the map, filter, reduce, etc. functional functions. Generators and comprehensions completely removed the need for those, yet the functions remain even in Python 3.
On the subject of printing, there are actual reasons for having this duplicate functionality. The main way to print is of course the print function in Python 3, or the print statement in Python 2. The print function originally lacked the ability to flush the buffer: you had to call sys.stdout.flush(), whereas now it takes a boolean "flush" keyword argument. The second way to print is to call sys.stdout.write(), which does not automatically append a newline. The print function covers that case with its "end" keyword argument: pass it an empty string in place of the newline.
So for most use cases, print() is just fine. Sometimes you want finer grained control, and for that you would use the sys module.
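For reference, a small Python 3 example of both approaches:

    import sys

    # print() with keyword arguments: suppress the newline and force a flush.
    print("progress: 50%", end="", flush=True)

    # The lower-level equivalent via the sys module:
    sys.stdout.write("progress: 50%")
    sys.stdout.flush()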
Python 3 at least changes the semantics of map and filter so that they are generators.
Lazy sequences are the sort of thing that you don't care about until you need to process a ten-million-line file (or whatever) and suddenly find that your program is slowing down for pointless memory allocations up-front -- then they become unbelievably important.
Yes, you are absolutely correct. In our production code, which is currently running 2.7, we make extensive use of the itertools versions of those functions (imap, ifilter, etc.) for this reason. The itertools functions behave exactly like the Python 3 builtins, as iterators. The memory footprint is minimal, and they are just overall faster.
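A quick sketch of the difference (hypothetical file name):

    from itertools import islice

    with open("huge_file.txt") as f:           # hypothetical file
        # Eager: [len(line) for line in f] builds the whole list up front.
        # Lazy: map() in Python 3 (itertools.imap in Python 2) yields one value at a time.
        lengths = map(len, f)
        first_ten = list(islice(lengths, 10))  # only reads as much of the file as needed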
> Generators and comprehensions completely removed the need for those, yet the functions remain even in Python 3.
Hmm, I had never thought about it like that.
Are there any articles, etc. you recommend about the differences or the advantages of one method over the other? It'd be nice to look into this more.
- Toolz (http://toolz.readthedocs.io/en/latest/) provides the same primitives from Clojure (including partial functions, compose, juxt, etc)
- Pysistence (https://pypi.python.org/pypi/pysistence/) provides immutable data structs
- For parallel map you can use https://pypi.python.org/pypi/python-pmap/1.0.2
- TCO is achievable w/ a decorator regardless of AST manipulation (like Clojure's `loop/recur` construct) - a rough sketch is below
The only thing left is pattern matching and more powerful destructuring (Python already has limited support for it), I guess.
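For the TCO point above, a rough sketch of a trampoline-style decorator (a hypothetical tail_recursive helper, not any specific library's API), mimicking Clojure's explicit recur:

    import functools

    class _TailCall:
        """Sentinel carrying the arguments for the next bounce of the trampoline."""
        def __init__(self, args, kwargs):
            self.args, self.kwargs = args, kwargs

    def tail_recursive(fn):
        """Run fn in a loop instead of letting it recurse on the call stack."""
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            while isinstance(result, _TailCall):
                result = fn(*result.args, **result.kwargs)
            return result
        # Recurse by returning factorial.recur(...) instead of calling factorial(...).
        wrapper.recur = lambda *args, **kwargs: _TailCall(args, kwargs)
        return wrapper

    @tail_recursive
    def factorial(n, acc=1):
        if n <= 1:
            return acc
        return factorial.recur(n - 1, acc * n)  # like Clojure's recur

    print(factorial(5000) > 0)  # True, with no RecursionError despite 5000 "recursive" steps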