Develop the three great virtues of a programmer: laziness, impatience, and hubris (raganwald.com)
319 points by ingve on April 17, 2016 | 72 comments



"To distinguish between things that have the same interfaces, but also have semantic or other contractual differences, we need types."

I don't see that "types" solve this problem, as there are far more varieties of function than lazy and eager, which themselves are not always consistent. Perhaps Eiffel is a better guide for the sort of pre-condition / post-condition assumptions of a function, if you want to go down this route.

https://en.wikipedia.org/wiki/Eiffel_(programming_language)#...


Contracts that are enforced at compile time are types, whether you call them that or not.


Some types are not (and cannot be) enforced at compile time [0]. In fact what makes something a type is not something internal to the type, but its participation in a larger type system or type theory, so in that sense, almost any abstract model of computation could give rise to types, as long as it was sound.

[0]: https://en.wikipedia.org/wiki/Dependent_type


Wait what? The whole point of dependently-typed languages is that programs you write in them are fully typechecked at compile time.


In a dependently typed language, types may rely upon arbitrary terms that are computed at runtime, and yet you are correct that dependently typed programs are checked statically. The way this is achieved is that, if a property can only be checked at runtime, the burden of proof for that property is moved from the typechecker to the program.

The program must verify the property at runtime and create a witness to the proof, which it can then pass to functions to say that it has done the necessary work.

So I guess the answer is yes and no. I would say the whole point of dependently-typed languages is that we can statically determine that somebody is maintaining correctness, be it at compile time or at run time.
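
A loose JavaScript analogy of the witness idea above (a sketch with made-up names; a real dependently-typed language would check the hand-off statically):

    // Verify the property at runtime; on success, return a "witness" object
    // that downstream functions accept instead of re-checking the property.
    function proveNonEmpty(xs) {
      if (xs.length === 0) return null;   // proof obligation failed
      return { nonEmpty: true, xs };      // the witness
    }

    // head requires the witness, so it never needs to handle the empty case.
    function head(witness) {
      return witness.xs[0];
    }

    const proof = proveNonEmpty([10, 20, 30]);
    if (proof !== null) {
      console.log(head(proof)); //=> 10
    }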


Contracts are types.

If they are enforced at compile time, they are static.


I think that the problem here is actually in signaling a wrong interface.

If you in fact want to receive an iterable, then you should not signal that you expect a list.

The keyword yield is in fact a nice language-integrated mechanism for creating an iterable interface.

Languages that lack this have to resort to much more verbose mechanisms.
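
For example, a minimal sketch (not from the article):

    // yield turns an ordinary-looking function into a factory for iterables:
    // callers can for...of over the result, and values are produced on demand.
    function* evenNumbersUpTo(limit) {
      for (let n = 0; n <= limit; n += 2) {
        yield n;
      }
    }

    for (const n of evenNumbersUpTo(6)) {
      console.log(n); // 0, 2, 4, 6
    }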


It's great to see generators presented in an engaging and approachable way. I like the layering of examples and comparisons between eager and lazy functions. I don't find generators an easy concept for most people to grasp, but the payoff is substantial since they have so many useful applications - not just in JavaScript.


Lazy evaluation is only a win if you're never going to use the result.

Some systems can figure out by themselves when lazy evaluation is useful. Consider this SQL:

    SELECT * FROM tab ORDER BY score DESC LIMIT 1;
Assume there's no index for "score". The brute-force approach is to sort "tab" by "score", then take the first element. This is O(N log N) and requires generating a temporary file. Most SQL implementations are smarter than that, and will make one linear pass, for O(N) time. Some will do that for small LIMIT values greater than 1.
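
Roughly, that linear pass amounts to something like this (a sketch, not from the parent comment):

    // One pass over the table, keeping only the best row seen so far:
    // O(N) time, O(1) extra space, no temporary sort file.
    function topByScore(rows) {
      let best = null;
      for (const row of rows) {
        if (best === null || row.score > best.score) {
          best = row;
        }
      }
      return best;
    }

    topByScore([{ id: 1, score: 7 }, { id: 2, score: 42 }, { id: 3, score: 3 }]);
    //=> { id: 2, score: 42 }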


The example you gave is not of lazy evaluation. An "ORDER BY" that's truly lazy, followed by a LIMIT, would behave like Quickselect [1], meaning that the complexity would be O(n), but not as a special optimization and definitely not for the reasons you mention. Sorting in Haskell behaves like this.

When dealing with functional programming (e.g. pure functions, immutable data-structures, referential transparency), sooner or later you need to deal with recursive algorithms and data-structures. Usually tail-recursive, so they take constant stack space, but recursive nonetheless. And when dealing with recursive algorithms or data-structures, you end up wanting laziness, because otherwise you cannot express the algorithms that you want to express.

You mentioned LIMIT. This operation can be expressed in terms of "foldRight", a really, really useful operation for functional programming. But here's the catch: if it's not lazy, then it's not going to work for infinite streams (and it should), it will need O(n) memory and depending on implementation, it will probably blow up your stack. My current language is Scala. And in Scala the "foldRight" implemented on the standard collections is totally useless. Which is a pity really.

Going back to your original assertion, laziness isn't really about preventing a result from being evaluated, although sometimes that's a useful side-effect. No, lazy evaluation is about being able to short-circuit the iteration ;-)
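
To make the foldRight point concrete, here is a JavaScript sketch (mine, not from the thread) of a lazy right fold over a generator; the combining function gets a thunk for the rest of the fold, so it can stop early even on an infinite stream:

    // foldRight where "the rest of the fold" is passed as a thunk.
    function foldRight(iterable, f, z) {
      const iterator = iterable[Symbol.iterator]();
      const step = () => {
        const { done, value } = iterator.next();
        return done ? z : f(value, step);
      };
      return step();
    }

    // An infinite stream of natural numbers.
    function* naturals() {
      for (let n = 0; ; n++) yield n;
    }

    // LIMIT expressed via foldRight: stops pulling from the stream after k items.
    function limit(iterable, k) {
      return foldRight(iterable, (x, rest) => (k-- > 0 ? [x, ...rest()] : []), []);
    }

    console.log(limit(naturals(), 5)); //=> [0, 1, 2, 3, 4]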

[1] https://en.wikipedia.org/wiki/Quickselect


> When dealing with functional programming (e.g. pure functions, immutable data-structures, referential transparency), sooner or later you need to deal with recursive algorithms and data-structures. Usually tail-recursive, so they take constant stack space, but recursive nonetheless. And when dealing with recursive algorithms or data-structures, you end up wanting laziness, because otherwise you cannot express the algorithms that you want to express.

This might be tangential to your point, but tail recursion doesn't really work with laziness, e.g. foldl in Haskell takes linear space.


If foldLeft is strict, like in Scala, then it needs constant stack space, but then you can't short-circuit it so you need to traverse the whole list to return a result, which might lead to O(n) space depending on what you want to get out of it. And by tail recursion I'm not necessarily referring to the actual call-stack. Maybe I'm a little loose with the terms here.

Tail-recursive algorithms are those that can use constant memory (stack or heap) and which, in a language like Scala, can be translated into a loop or a trampoline, whereas actually recursive algorithms are those that really need some sort of stack to work, one that grows in direct proportion to the input size. Usage of a stack is the definition of recursion from algorithms books, like Cormen et al. For example, a depth-first traversal of a balanced tree will really, really need an O(log2 n) stack, regardless of whether the evaluation is strict or lazy or whatever language tricks you can pull.

Does that make any sense? :-)
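
A sketch of the loop/trampoline translation mentioned above (mine, not from the thread): each tail call is returned as a thunk, and a driver loop keeps the stack flat.

    // Keep bouncing until the computation returns a non-function value.
    function trampoline(fn) {
      return (...args) => {
        let result = fn(...args);
        while (typeof result === 'function') {
          result = result();
        }
        return result;
      };
    }

    // Tail-recursive sum, expressed so each "recursive call" is a returned thunk.
    const sumTo = trampoline(function go(n, acc = 0) {
      return n === 0 ? acc : () => go(n - 1, acc + n);
    });

    sumTo(1000000); //=> 500000500000, with constant stack space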


Lazy evaluation might also help in cases where you can parallelize some work. Even if you know you will need it, the fact that you can defer it can give you a number of optimization opportunities.

Also, if others would like to learn more about the ORDER BY...LIMIT optimization, it's called a top-K sort.


The algorithm discussed in this post isn't actually the Sieve of Eratosthenes; it's an inefficient algorithm with performance worse than trial division. See https://www.cs.hmc.edu/~oneill/papers/Sieve-JFP.pdf


The paper discusses the fundamental difference between the Sieve of Eratosthenes and Trial Division, then goes on to discuss a number of ways that the Sieve of Eratosthenes can be implemented in a pure functional style with greater and greater optimizations.

Since I wanted to talk about laziness and not priority queues, I eschewed trying to write the fastest possible implementation in favour of the simplest implementation that is still recognizable as “cross off every nth number.”

If you feel that the implementation is inefficient, I agree. But if you feel it isn’t actually the Sieve of Eratosthenes in some fundamental way, please explain the difference in a little more detail, it would be instructive to share with HN and me.


The Sieve of Eratosthenes is an algorithm that computes all the primes up to n in O(n log log n). It does this by eliminating composite numbers in a clever way: instead of checking when it reaches the number if it's divisible by any primes seen so far, it crosses off multiples of a prime in advance as soon as it sees that prime.
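
For contrast, a minimal array-based sketch of that crossing-off (not the article's code):

    // Sieve of Eratosthenes: when a prime p is found, cross off p*p, p*p + p, ...
    // in advance, instead of testing each number for divisibility.
    function primesUpTo(n) {
      const isComposite = new Array(n + 1).fill(false);
      const primes = [];
      for (let candidate = 2; candidate <= n; candidate++) {
        if (!isComposite[candidate]) {
          primes.push(candidate);
          for (let multiple = candidate * candidate; multiple <= n; multiple += candidate) {
            isComposite[multiple] = true;   // crossed off before we ever "visit" it
          }
        }
      }
      return primes;
    }

    primesUpTo(30); //=> [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]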

Your code maintains a nested "stack" of iterators: the outer iterator is a nullEveryNth() that consumes from a nullEveryNth()...that eventually consumes from a range(). There's one for each prime encountered so far. Each iterator passes through a number if it's not divisible by the corresponding prime. That's trial division.

You should be able to observe that your lazy implementation is asymptotically slower than the eager implementation (not just slower by a constant factor, but the running time grows more quickly).

The paper by O'Neill is a good read (although a little dense), you should check it out! https://www.cs.hmc.edu/~oneill/papers/Sieve-JFP.pdf

N.B. I think you added skipFirst to nullEveryNth since publishing this post, but it doesn't help get the algorithm down to the O(n log n log log n) performance that O'Neill describes.


I’m familiar with the O’Neill paper, and although my memory is hazy, I recall writing something along those lines in Scheme in college when we were learning about its own particular flavour of laziness (IIRC it involved lists where the head was a value and the tail was a thunk).

The thing is... I was aiming for the most literal translation of the human-readable instructions, which conveniently set things up to introduce the bug of using an eager version of `compact`, which segued into discussing the problem of mixing eager and lazy code in a language that uses the exact same interface for both.

The code I gave mimics what happens when a child is taught the method. They look at the list of numbers and count one TWO (cross off) one TWO (cross off)... one two THREE (cross off) one two THREE (cross off)... one two three four FIVE (cross off) one two three four FIVE (cross off) and so on.

It’s quite right and proper to advance from there to talk about how you can do multiplication in a faster way to derive FIVE, TEN, FIFTEEN, TWENTY and so on, and to line up the grid of numbers in such a way that you have faster access to an arbitrary number than traversing the list sequentially, and then you have the performance discussed.

But all of that would encumber the story I was telling.

As for the skipFirst, I added that when I decided to cite O’Neill’s paper, as it made for a literal translation of the introductory explanation she gives of the algorithm.

---

So now, I’m going to write a ‘proper’ version in JavaScript. I look forward to your feedback.


As someone who's also concerned with precision, I'd say your sieve is about 25% as bad as an O(n^2) quicksort. That's not horrible; whether you think it's okay is up to you.


When writing about X, quite often the algorithm that communicates “X” most succinctly ignores considerations “Y” and “Z.”

Sometimes, Y and Z would just get in the way. But then again, sometimes their omission distracts the reader and provokes a lot of bikeshedding.

There is no easy answer, and I often get this balance wrong.

This is, I think, what makes some writers so brilliant: They find a way to present “X” in a clear and strong way without making an absolute hash of “Y” and “Z.”

Such writing is a treasure.


9 comments in this thread as of the time I'm posting this, and only 4 of them got past the title.


I guess you could say they were lazy and too impatient to read the article; but had enough hubris to comment.


Relevant: http://www.amazon.com/gp/product/B0049U444U

(Disclaimer: haven't read past the free preview)


love it!


News aggregators don't play well with tongue-in-cheek article titles. Coming via HN or Reddit or whatever, we aren't familiar with the post author and have nothing to go on except the title, which we must take at face value.


If you want to see how this idea is taken to way more sophisticated levels, check out D's ranges and algorithms. This article only covers the equivalent of input iterators / ranges. You can also find in D sophisticated ways to deal with the last part of the article, regarding how to ascertain the different capabilities of your range/type, in ways that go beyond the traditional type system and OOP concepts.

edit: (also, D's lazy keyword, which performs the transformation described in the article automatically)


It also wouldn't be a complete discussion of the subject without at least mentioning Haskell, in which everything is lazy by default:

    numbers = [0,1,2,3,4,5,6,7,8,9]
    take 5 numbers
    -- [0,1,2,3,4]

    numbers = [0..]
    take 5 numbers
    -- [0,1,2,3,4]

    evenNumbers = map (* 2) numbers
    take 5 evenNumbers
    -- [0,2,4,6,8]

    last evenNumbers
    -- <<infinite loop>>
I used simple list stuff in this illustration, but everything is lazy. I/O most notably (and sometimes problematically). Nothing runs until it's forced[0], and none of this requires any special annotation or plumbing.

----

[0] Yes, I know. But it's subtle, and we're in public here. https://wiki.haskell.org/Lazy_vs._non-strict


Haskell's laziness can also be a notorious source of "space leaks": an algorithm that appears O(1) in space, such as summing up a list of numbers, can actually use O(n) space by accumulating unevaluated invocations of (+) in memory. With more complex data structures, the proportion of expected memory usage to actual memory usage can get even worse.

In larger Haskell programs, I've found that to be the most challenging issue to debug: "why does my program use way too much memory?"
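
A rough JavaScript analogy of that thunk build-up (a sketch, not from the parent comment): each addition is deferred as a closure, so nothing is actually added until the sum is finally forced.

    // Deferring every (+) as a thunk: the chain of closures grows linearly with
    // the input, much like a lazy fold building up unevaluated applications.
    let sum = () => 0;
    for (let i = 1; i <= 1000; i++) {
      const previous = sum;
      sum = () => previous() + i;   // retained, not evaluated
    }
    console.log(sum()); //=> 500500, and forcing it nests 1000 calls deep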


Practically, the way to go here is: make your data structures strict (with "!") and make your control structures lazy.

So, don't create a list of numbers if you intend to sum it, use a non-lazy data structure.

The trick is that Haskell's common default structures are lazy.


Not 100% IT, but: if you have not yet read Larry Wall's "Diligence, Patience, and Humility" [0], I strongly advise you to. It's just... great.

[0] http://www.oreilly.com/openbook/opensources/book/larry.html


That's great, thanks for posting the link. Larry Wall has a satisfying writing style, the little touches of humour are very well judged.


saved, bookmarked, maybe will be printed :D

disclaimer: Perl fan ha


Just read it - quite good!


Laziness is a win for two reasons. The article talks about the first win: the code can skip evaluating items that are never reached. The more important win IMHO is in time-sensitive areas such as UI/UX and heuristics.

For example, with UI/UX and lazy loading, you can use lazy initialization to get your UI on the screen faster, by deferring a bunch of content initialization until after the initial render.

For example, with heuristics and lazy calculations, you can use techniques such as successive approximation, caching of recent similar results, eventual consistency, and partial filling, so you can still provide decent information to the user quickly.
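
A minimal sketch of that kind of lazy initialization (the usage names are hypothetical, not from the article):

    // A memoized lazy initializer: the expensive setup runs once, on first use.
    function lazy(init) {
      let value;
      let initialized = false;
      return () => {
        if (!initialized) {
          value = init();
          initialized = true;
        }
        return value;
      };
    }

    // Hypothetical usage: paint the shell immediately, build heavy content later.
    const heavyContent = lazy(() => buildExpensiveContent());
    // renderShell();    // UI appears right away
    // heavyContent();   // only initialized when actually needed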


This can cause problems while appearing to satisfy the desire for responsiveness. For example, on the iPhone: recent messages or calls may update in the middle of a touch, so the location of the input (thumb) actually initiates a call or brings up a thread not intended by the original display.

Showing what had been, and then updating the action of a display location, creates a UX of "shifting sands".

Often annoyingly occurs with popup ads, but in an iOS app itself is inexcusable.


I first used lazy sequences in Scheme for a similar purpose, to find successive approximations. I recall lazy sequences like 0, 1, .5, .25, .75, ... and diagonalizations for these purposes.


I found the laziness part covered in the article, but couldn't find anything about impatience or hubris. Am I just reading it wrong, or is the title a bit misleading?


The article is talking about lazy evaluation, not lazy programmers. And thus it's not talking about impatience or hubris either, because it's not talking about that quote at all, just using it because it contains the word 'lazy' in a programming context.

Lazy evaluation is probably better named "deferred evaluation", and is more analogous to procrastination, not laziness.


The title is a weak joke -- it's a famous quotation that has one word that is a homonym of the topic of the article.


It's a quote from Larry Wall's Programming Perl.


> If JavaScript was lazy, it would not evaluate 2+3 in the expression ifThen(1 === 0, 2 + 3)

It depends. Haskell is lazy, but I think it would still constant fold that expression.


JavaScript compilers _ought_ to do that too. The example of generating a billion numbers is more relevant, but I didn’t want to use that until I was ready to introduce a lazy version of it.


If Haskell were lazy, then it wouldn't evaluate 2+3, but Haskell is non-strict, rather than lazy, so it can evaluate it, or not.


Haskell is lazy, but the compiler can be smart enough to know that 2+3 is semantically equal to 5 in its model of computation. This is without violating the lazy nature of the language. So it can (and will, I believe, though I haven't checked) fold any such expressions to their constant value when compiling, for performance reasons.


Lazy evaluation is what drew me into Haskell a few years ago, when someone showed me how you could use it to beautifully read a file as if it were in-memory, but done entirely as a stream. I thought that was amazing and I was sold.



I should have mentioned that in my initial comment; lazy IO is beautiful and OK for a simple application, but it's smarter to use a streaming library like Pipes or Conduit for anything big.


Doesn't using function* make laziness part of the type system, thereby allowing us to detect when we're mixing lazy and eager in functions in the same way we might detect mixing integer and floating-point math in variables?


Really enjoyed reading this.

The funny thing about laziness is that you have to work hard to reach a state of meaningful and productive laziness, one that isn't cutting corners and that strikes the right balance between risk and technical debt.


Quick poll: who prefers laziness by default? Who prefers strictness by default?


I'm a big fan of taking advantage of laziness and think it's drastically underrated. It's incredibly expressive and it's what really makes Haskell more than just an ML variant.

I gave a slightly rough talk about "Thinking with Laziness"[1] which tries to capture how expressive it can be. I'm a big fan and sad that so many people are willing to categorically dismiss it as a wart in Haskell.

[1]: https://begriffs.com/posts/2015-06-17-thinking-with-laziness...


I prefer strict by default, despite liking fancy typed languages. It's easy to build lazy on top of strict (as long as functions are values), but impossible to build strict on top of lazy. Consistency and reasonability are important, and the runtime performance of Haskell doesn't have either. I currently use Scala and have high hopes for Idris.


I prefer the laziness of PHP and Python. JavaScript threw me off with its eagerness. Took a while getting used to.


In what ways are PHP and Python more lazy than JS?


I can't speak for PHP, but in Python (version 3 at least), a lot of the built-in functions that return iterators, like `map` and `range`, are lazy.


Also, in Python 2.7 (and I assume Python 3 as well), conditionals have lazy evaluation. If you have the statement:

    if False and (fibonacci(1000)/fibonacci(1000)):
        print "Nope"
It will immediately evaluate to False and print "Nope" rather than checking the (fibonacci(1000)/fibonacci(1000)) portion of the conditional.

EDIT: Side note, is there a shortcut for displaying code snippets within HN posts? Extra returns (what I usually use for clarification by formatting) do not display very well.


Javascript will short-circuit in && and the ternary operator, too. But that's not really the same as lazy evaluation, which both is more general and happens in a different context; ES6 generators, for example, are lazy, which means that any sequence function in which one is used is lazy as well.
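
For instance (a sketch, not from the thread), the mapping function below only runs for elements a downstream consumer actually asks for:

    // A generator-based map is lazy: nothing is computed until iteration.
    function* lazyMap(fn, iterable) {
      for (const element of iterable) {
        yield fn(element);
      }
    }

    const doubled = lazyMap(n => { console.log('computing', n); return n * 2; },
                            [1, 2, 3, 4, 5]);

    for (const value of doubled) {
      if (value >= 4) break;   // only "computing 1" and "computing 2" are logged
    }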

Indent your code snippets with a minimum of four spaces and they will be rendered as preformatted, monospaced text. Vertically adjacent lines thus indented will be rendered as a single block.


That's not necessarily "lazy" in the normal programming context, it's specifically short-circuit evaluation in Python. Although as Wikipedia points out: "In languages that use lazy evaluation by default (like Haskell), all functions are effectively "short-circuit", and special short-circuit operators are unnecessary."


PHP too.

    if (isset($foo) && $foo == 14)

If $foo is not set, the second boolean expression is not evaluated (short-circuit evaluation).

In fact, checking isset() followed by another boolean expression is very idiomatic in PHP.


I think there is a bug in that code. It will not print Nope.


You are right. I meant it will immediately evaluate to False and not print "Nope".


Just curious re: the findWith() example, how would you find the last number in a list that is greater than the min while still being lazy?


You wouldn't, because you can't be sure you've found the last value satisfying the predicate until you reach the end of the list. To do it lazily, you'd have to reverse the list first, but that's O(n), so not practical with very long lists.
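
A sketch of the non-lazy alternative (mine, not the article's): a single pass that tracks the latest match, which still has to consume the entire iterable before the answer is known.

    // Cannot short-circuit: any later element might also satisfy the predicate.
    function findLastWith(predicate, iterable) {
      let last;   // stays undefined if nothing matches
      for (const element of iterable) {
        if (predicate(element)) last = element;
      }
      return last;
    }

    findLastWith(x => x > 0, [3, -1, 5, 2, -4]); //=> 2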


Really nice site for small screens!


is it possible that

for (const element of list) {

should actually be

for (let element of list) {

?

the former one raises "SyntaxError: invalid for/in left-hand side"


The secret to good UX design is to imagine yourself as the user and imagine yourself raging against the hassle of having to do this.


If a user is having to write code just to use your interface, there's very likely a serious issue with your UX that far outweighs whether the developer felt it was worthwhile to use laziness to increase performance or not.


Even better is to talk to users. They may be enraged by different things than you or me.


I like to include a bit of "watch someone use your product".

And I mean doing so with their permission, and at your request, but without any prior instruction, and you're not allowed to intervene or supply hints.


Laziness is the mother of necessity...


Even better!

     ifThen(1 === 0, 5)
No need to calculate 2 + 3!


False. You did the arithmetic instead of the computer; this is more work.

and if instead of addition it was a function call...


Oh but you can become even lazier:

  function compare(list, val){
    return list.some(el => el === val);
  }
Read the documentation (preferably from MDN [1]) for Array methods. You'll take your "laziness" to the next level: .filter(), .some(), .every(), .reduce(), .concat() [vs .push()], etc.

[1] https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...



