I've never understood this. Unless you're writing a library that you plan to publish, or already have actual cases where you need a more general solution, why spend time trying to generalise code instead of moving on to the next task?
My tentative answer is this: someone who uses Haskell appreciates elegant solutions (a.k.a. mathematical/functional ones) and is inclined to write things 'properly' once. They may also imagine that the functions they write will not only solve the current problem but be useful to others, and to themselves in other programs ... and thus they go down the generalization and elegance rabbit hole.
Of course, all of this is purely speculation on my part.
In short: the typical Haskeller is a perfectionist.
If I let it, my perfectionist self would ditch every language but Haskell without blinking. No other mainstream language gives you more control and purity. For a perfectionist this is opium.
Meanwhile, if you look at pseudo-code written by actual mathematicians or logicians, it's almost always imperative, full of side-effects and global variables. Sometimes they even use GOTO!
One reason is that a more general type admits fewer possible implementations, and so the function that you do write is more likely to be correct.
The function `intMap :: (Int -> Int) -> [Int] -> [Int]` can do all sorts of crazy things that are not map. The function `map :: (a -> b) -> [a] -> [b]` can do far fewer crazy things, and just from looking at the type you can say that any `b` in the result list _must_ have come from applying the function to some `a` in the input list.
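To make that concrete, here's a minimal sketch (the implementations are mine, not from the comment; `myMap` is just a local name to avoid clashing with Prelude's `map`):

```haskell
-- At the monomorphic type, a "crazy" implementation typechecks even
-- though it ignores f entirely and invents values out of thin air.
intMap :: (Int -> Int) -> [Int] -> [Int]
intMap _ xs = 42 : reverse xs

-- At the polymorphic type, the only way to produce a `b` (bottom aside)
-- is to apply f to an `a` taken from the input list.
myMap :: (a -> b) -> [a] -> [b]
myMap _ []       = []
myMap f (x : xs) = f x : myMap f xs
```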
> you can say that any `b` in the result list _must_ have come from applying the function to some `a` in the input list.
Morally correct... but consider the function `\f xs -> [undefined]`, which can be typed as `(a -> b) -> [a] -> [b]`. (Obviously it could be given other types as well.)
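For concreteness, here's that counterexample written out as a top-level definition (the name `bogusMap` is mine):

```haskell
-- This inhabits map's type, yet the single element of the result is
-- bottom and did not come from any `a` in the input list.
bogusMap :: (a -> b) -> [a] -> [b]
bogusMap _ _ = [undefined]
```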
When discussing Haskell and theorems about its types it's common to simply ignore non-termination; if we don't ignore non-termination there's basically nothing we can say about Haskell programs at all.
Interested readers should check out Agda and Theorems for free!
Yeah, that's a better example (I think that one, or a similar one, is discussed in Theorems for free!), but the OP's wording weasels its way out of that being a problem.
> any `b` in the result list _must_ have come from applying the function to some `a` in the input list.
Since there exists no a in [], the quoted statement holds! I find that really beautiful :)
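As I read this exchange, the case being discussed is a function that always returns the empty list; a sketch (the name `emptyMap` is mine):

```haskell
-- Also inhabits map's type; the result has no elements, so the claim
-- "every b in the result came from some a" holds vacuously.
emptyMap :: (a -> b) -> [a] -> [b]
emptyMap _ _ = []
```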
My take is it's because there are some really great benefits to implementing things more precisely (which usually means "more general" in this sense), and Haskell is more amenable to it than most.
There are many cases where it's worth it, so much so that it's worth at least considering whether a more generic solution is better.
I think the problem is that it's hard to predict how deep a rabbit hole like this gets - so you think it's just a few minutes extra work, but it ends up completely derailing the project.
If you spend some fraction of each task reflecting on how you could've written it 'better', then over time you'll learn to write more of your code 'better' from the start. (You could say making it more general is not always better, and that's true. But it is a win often enough to make it a skill worth cultivating.)
What (non-abstract) client would want you to do this on their project? Sure, they'll happily hire people who spent (wasted?) their own time doing this, but they'd categorize you as an unprofessional time-waster if you did it on their dime, and would probably fire you after too many strolls in the 'abstract' realm.
Good thing my employers were more enlightened. This is a matter for negotiation like anything else, and depending on timescales may be in the employer's narrow interest.