While definitely a great proof of:
1. The author knowing a lot of the language's functional features
(permutations, mapping, zipping, filters)
2. How powerful functional programming is when solving problems
Arggh, every single time someone posts an incredibly cool small piece of code that solves a difficult problem, comments like this appear.
This whole HN attitude of pristine code is killing me. This is a piece of code in a gist that solves the problem in three lines; it is clearly not intended for production use in a large code base.
Please stop searching for something wrong and enjoy what is presented in all of its glory.
The difference here is that when you read the title you think: "wow, Scala's features allow solving interesting problems in a few steps, let's see". And then you realize that it's a cheat. This code is literally squeezed into those 4 lines, full of less common idioms, and quite obscure.
It should be called a "0.5 kB Scala contest" rather than "solving X with Scala", to avoid misunderstanding.
Ironically, the linked article, "Writing good code: how to reduce the cognitive load of your code", itself has a high cognitive load. It's grammatically irksome. The opening sentence has no verb: "Low bug count, good performance, easy modification." Further, it uses poor non-examples of the points the author is trying to make. For example, Yoda conditionals are quirky, but do not increase cognitive load. They were also called out as a good habit in Code Complete, the very book the author cites as the long form of what he's writing about.
OTOH, the gist has a low cognitive load provided one understands functional notation. I can look at his four lines of code and see that this is a correct solution, because each atom of the function can be reasoned about separately and then composed into the whole. The same is much more difficult with the looping solutions that have been presented as peers to this comment. Those solutions have a higher cognitive load because proving correctness requires a full trace of the algorithm, while keeping all of the variables in working memory as you go.
tl;dr: The OP presented a clear and clever solution. I'll allow it. ;)
This is honestly pretty clear to understand - there's hardly anything there other than those general functional features you listed.
I think there's a difference between the cognitive load of the solution and the cognitive load of the implementation. The cognitive load of the general solution for queens is somewhat high, but that's why it's a well-known puzzle. But this implementation is practically a description of the problem itself. It's literally selecting (filter) from all queen arrangements (permutations) where the queens aren't conflicting (described via map and zipWithIndex).
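Concretely, the shape of it (a reconstruction from that description, not the gist's exact code) is roughly:

// index = row, value = column, so a permutation already rules out
// row and column clashes; the filter only has to check the two diagonals
def queens(n: Int) =
  (0 until n).permutations.filter { p =>
    p.zipWithIndex.map { case (c, r) => c + r }.distinct.size == n &&
    p.zipWithIndex.map { case (c, r) => c - r }.distinct.size == n
  }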
Such tight structures might be difficult to parse while the patterns used are still relatively unknown. However, once they are applied more widely, read more often, and more familiar, you might start to think it's the longer version (which unavoidably relies on ambiguous naming) that is unreadable.
Put another way: 15-line solutions cannot break into the higher-order pattern-recognition space, because those solutions rely on context-specific naming.
Naming is one of the most difficult parts of software engineering, so we should try to rely on contextual patterns in order to minimize naming altogether.
Indeed, one thing I initially liked about Prolog was that you can't deeply nest expressions due to the way evaluation works. Simple examples looked nice and explicit. But, you quickly run into the problem of having to name 15 intermediate stages of a harder computation, and suddenly all your names are L3, L4, L5, ... which makes the code hard to understand again. So, complexity is complexity in the end, and you can pick your poison.
The same issue arises when factoring Forth code into tiny "words" (one of the advantages of a concatenative language): you have to name sequences of operations that don't have an obvious problem-space concept behind them.
Here's < 15 lines, clear, but not functional programming :)
#!/usr/local/bin/perl
# borrowed from Nick Holloway <alfie@dcs.warwick.ac.uk>, 27th May 1993.
my $size = 8;
my @cols = (0 .. $size - 1);

sub nextq {
    (print("@_\n"), return) if @_ == $size;
    foreach my $col (@cols) {
        next if grep($_ == $col, @_);            # column already taken
        my $up = @_;                             # rows back to each earlier queen
        next if grep($_ == $col - $up--, @_);    # one diagonal
        my $down = @_;
        next if grep($_ == $col + $down--, @_);  # other diagonal
        nextq(@_, $col);
    }
}
nextq();
Not sure if it's slower or more resource intensive than the Scala solution. I'm sure it could be golfed down to be smaller than the Scala counterpart. It has fewer characters already, even with the comments intact.
No. Golf is a game where the winner has the lowest score. It's reused as the term for a programming challenge to implement an algorithm with the fewest number of characters. See https://en.wikipedia.org/wiki/Perl#Perl_pastimes .
This is no more indicative of Perl than the Obfuscated C contest is indicative of C.
Not at all, but it is representative of the best of Perl culture: smart people who enjoy playing with unexpected corners of a complex system, while remembering the difference between "work" and "play." See Ton Hospel's insane Roman numeral calculator: http://perlmonks.org/?node_id=594299
Perl being big on "there's more than one way to do it...", semicolons are easy to discard:
sub nextq {
    my $sz = 8;
    (print("@_\n"), return) if @_ == $sz;
    foreach my $col (0 .. $sz - 1) {
        my $row;
        if (grep($_ == $col,          @_)) {next} else {$row = @_}
        if (grep($_ == $col - $row--, @_)) {next} else {$row = @_}
        if (grep($_ == $col + $row--, @_)) {next} else {$row = @_}
        nextq(@_, $col);
    }
}
nextq();
The `next`/`else` ugliness is due to not being allowed to use `next` inside a one-line conditional. There are probably cleaner ways of doing the above, though. But that was only a quick hack.
sub nextq {
    my $sz = 8;
    (print("@_\n"), return) if @_ == $sz;
    o: foreach my $col (0 .. $sz - 1) {
        my ($u, $d) = (scalar(@_), scalar(@_));
        for (@_) { next o if ($_ == $col or $_ == $col - $u-- or $_ == $col + $d--) }
        nextq(@_, $col);
    }
}
nextq();
Personally I'd consider 1- and 2-character variable names a form of obfuscation, e.g. what's the point of having a "pseudo-constant" for the size if it's named in such a way that it isn't immediately obvious that it refers to the size?
Also, this is (I think; I'm not 100% up on Scala) insanely inefficient. Usually when people discuss the n-queens problem, they do a backtracking search, stopping as soon as some queen conflicts with an earlier queen.
You can of course then go on and get increasingly clever, but I think you should at least aim for backtracking.
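For reference, a backtracking version in Scala stays compact too; a rough sketch (my own, not from the gist):

// place queens row by row, abandoning a branch as soon as a conflict appears
def queens(n: Int): List[List[Int]] = {
  // `placed` holds the columns of queens in earlier rows, most recent first
  def safe(col: Int, placed: List[Int]): Boolean =
    placed.zipWithIndex.forall { case (c, i) =>
      c != col && math.abs(c - col) != i + 1  // column check, then diagonal check
    }
  def search(row: Int, placed: List[Int]): List[List[Int]] =
    if (row == n) List(placed.reverse)
    else (0 until n).toList.filter(safe(_, placed))
      .flatMap(col => search(row + 1, col :: placed))
  search(0, Nil)
}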
Those are the bread and butter of functional programming, like the for loop, break, and continue in imperative programming. There are really no advanced features here; not even a single monad!
For someone completely unfamiliar with FP, the cognitive load might be high. However, even basic familiarity with FP constructs is enough to understand and appreciate the code.
Maybe terse Scala solutions compare to longer C solutions the way terse math equations compare to longer prose? People who are not used to Greek symbols read equations at the same speed and in the same mindset as English text, and quickly get confused. Reading them requires a different mode.
I would always prefer a map/fold to a for-loop. Removing the "i" and its bookkeeping reduces cognitive load. However, a Python list comprehension often becomes too big an expression if you do too much filtering. Extracting the filter into its own function and giving it a good name compartmentalizes the complexity, though. All in all, it is quite subjective.
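For example (sketched in Scala to match the thread; the names are mine), extracting the queens filter gives the tricky part a home while the pipeline stays short:

// hypothetical refactoring: the predicate gets a name, the pipeline stays one line
def noDiagonalClash(p: Seq[Int]): Boolean =
  p.zipWithIndex.map { case (c, r) => c + r }.distinct.size == p.size &&
  p.zipWithIndex.map { case (c, r) => c - r }.distinct.size == p.size

def queens(n: Int) = (0 until n).permutations.filter(noDiagonalClash)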
If you are used to permutations, mapping, zipping, and filters, the code becomes quite readable. I'm pretty sure there is C code that confuses a Haskell programmer but would be considered "clear and readable" by the average C programmer.
This has nothing to do with functional programming. Here's an iterative Python solution that's almost a direct translation but uses explicit iteration and avoids `filter`/`map`/etc.
from itertools import permutations

def n_queens(n):
    for p in permutations(range(n)):
        # chained comparison: both diagonal sets must contain n distinct values
        if len({c + i for i, c in enumerate(p)}) == len({c - i for i, c in enumerate(p)}) == n:
            yield p
Python 3's `yield from` lets you write that in one statement:
def n_queens(n):
    yield from (p for p in permutations(range(n))
                if len({c + i for i, c in enumerate(p)}) ==
                   len({c - i for i, c in enumerate(p)}) == n)
It also illustrates how much neater list comprehensions can be in Python than map/filter, since Python's lambda functions are fairly verbose (even though there's a straightforward correspondence between them).
Python's iterators are mutation-based, so not really functional. `yield` is based on pausing and resuming function execution, so not really functional. `permutations` is just a function that wraps a list (or an iterator coercible to one, which mutates that iterator) and returns a (mutable) iterator.
A comprehension desugars to a `for` loop: with `yield` if it's a lazy comprehension, `append` if it's a list comprehension, or `add` if it's a set comprehension, which of course involves mutation.
Basically nothing here is mutation-free. That, at least IMO, means it's not functional.
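(The Scala side of this comparison lands differently, for what it's worth: a Scala for-comprehension desugars to method calls rather than a mutating loop.)

// the comprehension and its desugaring both produce List(2, 6)
val xs = List(1, -2, 3)
val a = for (x <- xs if x > 0) yield x * 2
val b = xs.withFilter(_ > 0).map(_ * 2)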
You seem to be confusing interface and implementation. Lazy lists and for comprehensions appeared in functional programming languages first; and that Python code looks to have borrowed them.
Addition appeared in imperative languages first. Does that mean addition is imperative?
It's great that functional languages were first to catch onto these ideas, but that doesn't mean that anything else that uses them becomes functional. The interface isn't even functional, since in code like
foo = iter((1, 2, 3))
{x for x in foo}
`foo` is mutated. No "true" functional language would accept that ;).
Addition, as implemented by Python, is functional. There are no side effects. A fully imperative encoding of addition would be e.g. the x86 add instructions. John Backus gave us "functional" arithmetic with Fortran, but realised all values should be treated this way, not just numbers.
I take your point about the iterators being mutated and the loss of referential transparency. That's certainly not functional programming.
> Addition, as implemented by Python, is functional.
Sure, but Python wasn't the first imperative programming language. (An aside: that's not strictly true anyway; the error thrown when you do `1 + []` is a side effect.)
There is no well-defined definition of functional programming. Since almost every mainstream imperative language supports higher-order functions, I would suggest that criterion is inadequate. A better definition is programming without side effects.
Not in Haskell and other pure languages. Pure functional programming is used to construct larger programs from smaller ones, which produce output only when finally executed by the runtime.
It is not easy describing pure functional programming in two sentences. I encourage you to read up on it yourself. Again, IO is not a side-effect in pure languages such as Haskell. And again, the absence of side-effects best defines functional programming, not any particular feature, which imperative languages can and do steal.
I think a more reasonable comparison is 4 lines of code you have to think hard to decipher but is correct, vs. 40 lines of code that has a bunch of bugs and edge cases and is just as much gibberish, but 10x more code to decipher. Shorter solutions often encode the author's insight from thinking about the problem, while longer solutions are more of a history of the path the author took while hammering on it, without any particular insight, until it worked well enough.
It is the collections API, which has always been very well documented, as opposed to some scary "functional features". Nothing too advanced, and absolutely no hacks.
Permutations, mapping, zipping and filtering are all completely idiomatic Scala.
There is next to no cognitive load in this 4-line solution: a brute-force attack on every permutation. I would say that anyone who finds there to be an excessive cognitive load is someone not familiar with Scala or functional programming.
The 8-queens puzzle in 1 line of Python (2.x), with pretty printing:
_=[__import__('sys').stdout.write("\n".join('.' * i + 'Q' + '.' * (8-i-1) for i in vec) + "\n===\n") for vec in __import__('itertools').permutations(xrange(8)) if 8 == len(set(vec[i]+i for i in xrange(8))) == len(set(vec[i]-i for i in xrange(8)))]
I tried to make that a little more readable, and converted it to Py3 -- this version produces the same output under Python 3 as your one-liner does under Python 2:
#!/usr/bin/env python3
from itertools import permutations as p

for vec in p(range(8)):
    if 8 == len(set(vec[i]+i for i in range(8))) \
         == len(set(vec[i]-i for i in range(8))):
        print("\n".join('.' * i + 'Q' + '.' * (8-i-1) for i in vec),
              end="\n===\n")
The string construction could/should probably also be refactored a bit, but at least this version I have a hope of following the logic... :-)
Ah! That's similar to the solution I used in 1980 to solve this problem for a Byte Magazine contest. Except I trimmed the search tree by aborting the permutation recursion each time the condition failed in a previous step.
It took 21 minutes to find and print all the solutions, if I remember right. But that was in Basic on a machine that ran at 10MHz.
Here's my Python 3 version (4-line nQueens function):
from itertools import permutations

def nQueens(n):
    return (p for p in permutations(range(n))
            if (len(set(c+d for c,d in enumerate(p))) == n and
                len(set(c-d for c,d in enumerate(p))) == n))

for num, solution in enumerate(nQueens(8)):
    print("Solution #", num+1)
    print("\n".join(" ".join("Q" if i == col else "-" for i in range(8))
                    for col in solution))
Each position is a row number, and the integer there is which column the queen is in (or vice versa, swapping rows and columns).
Or print how many solutions:
   $queens
92 8
`$` gives the 'shape' of the array: 92 rows (solutions) by 8 columns (the 8-queens problem).
I always return to J, because I enjoy mathematics and functional programming. Like math, you work with symbols you learn to think in, coding in an interpreted window somewhat like working in a Lisp REPL. Ken Iverson created the APL language in 1960, won an ACM Turing Award, and gave the lecture 'Notation as a Tool of Thought', which sums it up better than I could ever hope to express it [3].
J was a collaboration between him, Arthur Whitney (of finance's k language and kdb fame), and Roger Hui, in an attempt to improve on APL and to do away with needing a special character keyboard. It is under the GPLv3 license today. J uses the ASCII character set, and is very functional and composable.
Somehow J is shorter? I need someone who speaks both J and APL to tell me if they are the same.
Never learned J, but I always return to APL. I guess J is to APL as the Latin alphabet is to Chinese characters. You don't need a special keyboard for APL, just like you don't need a special keyboard for Chinese. Sometimes I really don't understand why they started J. A native English speaker's mentality? We never needed a special keyboard for diacritics, let alone East Asian languages. With an input method for APL characters, J does not even save keystrokes.
I was wrong. The two one-liners for J and APL are different. The original paper [0] gives the original APL implementation of the above J one-liner.
I'm not going to delve deep into the algorithm, but the following quote from the paper fascinates me.
    It is voracious for work space. For concreteness, consider the case n←8.
    The intermediate expression p[;c] has shape (!8),(2!8),2, requiring
    9031704 bytes on IBM-based APL systems.
I am not a J guru, but that is just the concise implementation; there are others in the references previously cited. J is an interpreted language, and when doing DB work it memory-maps files for speed. I know this is not the same thing, but I will look into it now, since I haven't done any J tracing unless a program blows up.
I also use Dyalog APL, and I don't have a special keyboard. The symbols are MORE like mathematical symbols to me. It's just that it is easier to share code in J's plain ASCII text: some systems still don't show APL characters correctly, or the user doesn't know how to set them up just yet. I was also trying to study K and Kona, so J was the better fit.
Is it possible to programmatically transliterate APL code to J? It would be great, either interactively or in scripts, to try out the J interpreter, so that my code
1+2-3×4÷5
could be converted to
1+2-3*4%5
automatically. I guess only Dyalog APL supports function trains, and J has more modern features. It would be interesting to use J as a backend. I really wish GNU APL would support function trains soon.
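At the character level that's nearly a table lookup; here is a toy sketch in Scala (glyph subset assumed; a real APL-to-J translator would need to parse trains, operators, and so on):

// maps only the arithmetic glyphs above; most APL symbols become
// multi-character J spellings, hence the Char -> String mapping
val glyphs = Map('×' -> "*", '÷' -> "%")
def toJ(apl: String): String =
  apl.map(c => glyphs.getOrElse(c, c.toString)).mkString

println(toJ("1+2-3×4÷5"))  // prints 1+2-3*4%5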
I haven't run across anything like that in my J studies, but J has evolved away from base APL, so I think a direct transliteration might be possible, though it would miss some niftier ways of doing things in J. I am merely a student of J and APL, though fascinated every day by both of them. I usually go back and forth between Dyalog APL and J depending on the task, or on whatever read sets me off on discovery.
There is no need to search in N-queens, so any code doing search is poor and wasteful.
I am not going to spoil the fun by giving you the details here, just the hint: start on the right square, depending on whether N is even or odd. Then place successive queens in a knight jump pattern.
It is fascinating to me that whoever invented chess long ago may have been aware of these subtle complementary relationships between the different types of moves.
Well, here's a fairly compact Haskell function that should be reasonably efficient (no fancy stuff, mind):
queens n = q n n [[]] where
  q 0 m ss = ss
  q n m ss = q (n-1) m (concatMap (\s -> map (:s) (filter (f 1 s) [0..m-1])) ss)
  -- f checks a candidate column a against the queens already placed,
  -- where k is the row distance back to each earlier queen
  f _ [] a = True
  f k (b:s) a = a /= b && a /= b+k && a /= b-k && f (k+1) s a

main = print $ queens 8
And a variant that recurses over the remaining columns, so the explicit column check disappears:

import Data.List

queens n = q [] [0..n-1] where
  q s [] = [s]
  q s as = concatMap (\a -> q (a:s) (delete a as)) (filter (f 1 s) as)
  -- only the diagonals need checking: `delete` already rules out column clashes
  f _ [] a = True
  f k (b:s) a = a /= b+k && a /= b-k && f (k+1) s a

main = print $ length (queens 8)
Undoubtedly functional; the question is whether it is improved, e.g. by using folds instead of explicit recursion, or by replacing concatMap with monadic join.
I think many may take your response as a bit off-the-cuff, but I share this sentiment. You could even go so far as to write the longest C program you could dream of to solve it, then simply feed it into
tr -d '\n'
and call it one line! Again, many will find this pedantic and unsatisfactory, but programmers should really be more precise about what they mean by lines of code.
What I think most people are really referring to is information content. To achieve the same program in a shorter length, the rest of the information has to lie somewhere else (in the Scala implementation, or in nqeensolver.h). We can increase the density of the information needed to specify a program, but we cannot decrease the total amount.
I actually just wrote something about this concept yesterday:
I find this type of code an anti-pattern of what good code should be. This solution has a high "cognitive load", imho: http://chrismm.com/blog/how-to-reduce-the-cognitive-load-of-...
I'd much rather see a 15-20 line solution that's clear and readable than a 4-line solution that I have to reverse engineer.