> When dealing with functional programming (e.g. pure functions, immutable data-structures, referential transparency), sooner or later you need to deal with recursive algorithms and data-structures. Usually tail-recursive, so they take constant stack space, but recursive nonetheless. And when dealing with recursive algorithms or data-structures, you end up wanting laziness, because otherwise you cannot express the algorithms that you want to express.
This might be tangential to your point, but tail recursion doesn't really work with laziness, e.g. foldl in Haskell takes linear space.
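To see why a non-strict left fold accumulates space, here is a minimal Scala sketch that mimics Haskell's `foldl` by passing the accumulator as an explicit thunk (the name `foldlLazy` is hypothetical, just for illustration):

```scala
// Non-strict left fold: the accumulator is a thunk, so each step
// wraps the previous thunk instead of evaluating it. After n steps
// we hold a chain of n nested closures: O(n) space, which is
// exactly what Haskell's foldl does with unevaluated thunks.
def foldlLazy[A, B](xs: List[A], z: () => B)(f: (() => B, A) => B): B =
  xs match {
    case Nil    => z() // forcing the result unwinds the whole chain
    case h :: t => foldlLazy(t, () => f(z, h))(f)
  }

val n = foldlLazy(List(1, 2, 3), () => 0)((acc, x) => acc() + x)
// n == 6, but only after unwinding three nested thunks
```

The recursion itself is in tail position, so no call stack builds up while folding; the linear space shows up in the heap as the chain of closures, and the stack cost reappears when the final thunk is forced.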
If foldLeft is strict, like in Scala, then it only needs constant stack space, but you can't short-circuit it, so you have to traverse the whole list to return a result, which can cost O(n) space depending on what you want to get out of it. And by tail recursion I'm not necessarily referring to the actual call stack; maybe I'm being a little loose with the terms here.
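The strict-vs-short-circuiting trade-off can be sketched in Scala like this (the helper `foldRightLazy` and the `contains*` names are hypothetical, written with explicit thunks to stand in for lazy evaluation):

```scala
// Strict foldLeft must walk the whole list: even after `found`
// becomes true, every remaining element is still visited.
def containsStrict(xs: List[Int], target: Int): Boolean =
  xs.foldLeft(false)((found, x) => found || x == target)

// A lazy right fold can stop early: the tail of the fold is a
// thunk, only forced when the current element does not match.
def foldRightLazy[A, B](xs: LazyList[A], z: B)(f: (A, () => B) => B): B =
  if (xs.isEmpty) z
  else f(xs.head, () => foldRightLazy(xs.tail, z)(f))

def containsLazy(xs: LazyList[Int], target: Int): Boolean =
  foldRightLazy(xs, false)((x, rest) => x == target || rest())

// Terminates even on an infinite stream, because it short-circuits
// after inspecting 1, 2, 3:
containsLazy(LazyList.from(1), 3)
```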
Tail-recursive algorithms are those that can run in constant memory (stack or heap) and that, in a language like Scala, can be translated into a loop or a trampoline, whereas actually recursive algorithms are those that genuinely need some sort of stack, one that grows in proportion to the input size. Using a stack is the definition of recursion in algorithms textbooks like Cormen et al. For example, a depth-first traversal of a balanced tree really, really needs an O(log2 n) stack, regardless of whether evaluation is strict or lazy or of whatever language tricks you can pull.
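A minimal Scala sketch of that last point: a depth-first tree sum can be made tail-recursive, but the stack doesn't disappear, it just moves into an explicit `todo` list on the heap (the `Tree` type and `sum` are illustrative, not from the thread):

```scala
sealed trait Tree
final case class Leaf(value: Int) extends Tree
final case class Node(left: Tree, right: Tree) extends Tree

// Depth-first sum rewritten in tail-recursive form. The call stack
// is gone (@tailrec compiles to a loop), but the algorithm still
// needs a stack: the `todo` list plays that role, and for a
// balanced tree it grows to O(log2 n) entries, regardless of
// whether evaluation is strict or lazy.
def sum(tree: Tree): Int = {
  @annotation.tailrec
  def loop(todo: List[Tree], acc: Int): Int = todo match {
    case Nil                => acc
    case Leaf(v) :: rest    => loop(rest, acc + v)
    case Node(l, r) :: rest => loop(l :: r :: rest, acc)
  }
  loop(List(tree), 0)
}
```

So "tail-recursive" here describes the shape of the code, not the memory profile: the traversal is still intrinsically recursive in the algorithms-textbook sense, because the stack it needs grows with the tree's depth.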