I find it fascinating that while we mostly understand each other, it seems like we still disagree on what sort of knowledge is useful. There is a weird gap here.
It seems like this is starting with something I basically knew? The visitor pattern in an object-oriented language is used for the same things that a sum type and pattern-matching are used for in a functional language.
And then re-explaining it using a lot of mathematical jargon, which makes it considerably more obscure than the more hand-wavy explanation. And then saying "isn't that useful?"
But I was happy just informally knowing that they were equivalent, and I would be happier explaining this to someone else informally, using examples of the same code written in Java (say) and Haskell. I don't feel like the mathematical jargon helped? I guess it's sort of neat that you can explain it that way.
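For what it's worth, here's roughly the informal side-by-side I have in mind, sketched in Java with the Haskell analogue in a comment. This is just an illustrative toy (the `Expr`/`Lit`/`Neg` names and the whole example are made up, not from the post being discussed):

```java
// A tiny expression "sum type", encoded the OO way: one interface,
// one class per constructor, and a visitor that plays the role of
// a pattern-match.
interface Expr { <R> R accept(Visitor<R> v); }

interface Visitor<R> {
    R visitLit(Lit lit);
    R visitNeg(Neg neg);
}

final class Lit implements Expr {
    final int value;
    Lit(int value) { this.value = value; }
    public <R> R accept(Visitor<R> v) { return v.visitLit(this); }
}

final class Neg implements Expr {
    final Expr inner;
    Neg(Expr inner) { this.inner = inner; }
    public <R> R accept(Visitor<R> v) { return v.visitNeg(this); }
}

class VisitorDemo {
    // Each visitX method is one case arm. Haskell analogue:
    //   data Expr = Lit Int | Neg Expr
    //   eval (Lit n) = n
    //   eval (Neg e) = negate (eval e)
    static int eval(Expr e) {
        return e.accept(new Visitor<Integer>() {
            public Integer visitLit(Lit lit) { return lit.value; }
            public Integer visitNeg(Neg neg) { return -eval(neg.inner); }
        });
    }

    public static void main(String[] args) {
        System.out.println(eval(new Neg(new Lit(42)))); // prints -42
    }
}
```

Putting the two versions next to each other like this is the whole explanation, as far as I'm concerned: one case arm per visit method, no category theory required.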
(Thanks for continuing to engage on this. It's a fun conversation :) )
Yes, I suspect the blog post is more interesting if you already have a theoretical background, and are interested in the various ways that theory plays out in applications. I certainly doubt the post was written with the HN audience in mind in particular.
For me, theory is like a compression algorithm for knowledge. The more I can interrelate things, the less I have to duplicate the commonalities, and the more I can focus on remembering (and judging based on) the differences. So I get a lot out of this kind of thing.
(Personally, I use these kinds of transforms all the time -- especially "defunctionalize the continuation", which is lovely for mechanically making a simple recursive algorithm into an iterative one more suitable for something like Java. The theoretical background makes me more comfortable going between different representations and keeping track of precisely what is changing and what is being held fixed.)
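To make that concrete, here's a sketch of the transform on a toy example (my own names and code, assuming the standard derivation: CPS-convert the recursion, then replace each continuation lambda with a data constructor and the call stack with an explicit stack of those constructors):

```java
import java.util.ArrayDeque;
import java.util.Deque;

final class Tree {
    final int value;
    final Tree left, right;
    Tree(int value, Tree left, Tree right) {
        this.value = value; this.left = left; this.right = right;
    }
}

// Each frame records "what remains to be done" -- the two lambda shapes
// from the CPS version become two data constructors.
interface Kont {}
final class SumRight implements Kont {   // left subtree done; sum the right next
    final Tree node;
    SumRight(Tree node) { this.node = node; }
}
final class Combine implements Kont {    // both subtrees done; combine results
    final Tree node; final int leftSum;
    Combine(Tree node, int leftSum) { this.node = node; this.leftSum = leftSum; }
}

class DefunctDemo {
    // The original recursive version, for contrast:
    static int sumRec(Tree t) {
        return t == null ? 0 : t.value + sumRec(t.left) + sumRec(t.right);
    }

    // Iterative version: the implicit call stack becomes an explicit Deque<Kont>.
    static int sum(Tree root) {
        Deque<Kont> stack = new ArrayDeque<>();
        Tree t = root;
        int result = 0;
        boolean descending = true;
        while (true) {
            if (descending) {
                if (t == null) { result = 0; descending = false; }
                else { stack.push(new SumRight(t)); t = t.left; }
            } else if (stack.isEmpty()) {
                return result;
            } else {
                Kont k = stack.pop();
                if (k instanceof SumRight) {
                    SumRight f = (SumRight) k;
                    stack.push(new Combine(f.node, result));
                    t = f.node.right;
                    descending = true;
                } else {
                    Combine f = (Combine) k;
                    result = f.node.value + f.leftSum + result;
                }
            }
        }
    }

    public static void main(String[] args) {
        Tree t = new Tree(1, new Tree(2, null, null), new Tree(3, null, null));
        System.out.println(sumRec(t)); // prints 6
        System.out.println(sum(t));    // prints 6
    }
}
```

The nice part is that every step is mechanical, so I know the iterative version computes the same thing without re-deriving it from scratch. (And note the `Kont` interface is itself a little sum type, encoded visitor-style, which is the connection back to the original topic.)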
I guess it's more about fluency in changing code than fluency in reading or writing code. Dynamics rather than statics. I think I can appreciate that if you're just looking at Visitor or sum types in isolation, it's not worth getting into the weeds over. But it's comforting to me to know that if I need to change various parts of a system, I know exactly what and where my degrees of freedom are. I can redefine the module boundaries one step at a time, and relatively smoothly and slowly shift the mass of the codebase around.
I agree that the "defunctionalize the continuation" refactoring is neat and non-obvious.
My feeling about refactorings is that you can use a tool that does it automatically (preferred, when available) or you can do it by hand (when not). I'm not sure I need to be thinking abstractly in math to do the hand-refactoring though. If I know where I'm starting from and where I want to end up, then I can informally pattern-match. But I guess mine is a more intuitive approach, and it sounds like you prefer to think about it a different way.
Refactoring in small mechanical steps is generally less risky, though good tests can also reduce the risk.
Possibly one difference is that in Haskell, you're close to thinking in math already.