The people in the horrible, horrible world of enterprisey OO programming call this Aspect-Oriented Programming. It's often accomplished with postcompilers or source transformations and whatnot, because the languages used aren't Haskell, but it boils down to roughly the same thing.
AOP never really caught on because the advantages (keep logging and error handling separate from the business logic) were too small for the extra mess it added. Notably: difficult debugging, plus really weird COMEFROM-style behavior[0] if some well-meaning colleague added a little too much magic to the "aspects" (e.g. the automatically mixed-in logging code). The function suddenly does stuff that you never programmed. Good luck hunting that one down!
I strongly suspect this style of programming has similar downsides. Anyone tried it in large projects?
I have some monads from when I was less experienced that are screaming to be turned into that. I'm keeping them on my to-do list for now, but it would be a gain. (But I have a logger to write, and will probably adapt it to something like this.)
The main downside is complexity; it gets hard to reason about fast. As types depend on more things and grow bigger, error messages become hard to read, parentheses abound, and people really start switching off. Type synonyms help here, but are not a panacea. (A similar problem applies to your "difficult debugging" complaint.)
The COMEFROM style is mainstream in FP, so people are used to it and expect this kind of thing. That means your functions had better support doing stuff you never told them to do, because they will; you know it, and you'll adapt accordingly. It's normally not a problem - note the "normally" there.
Note that there's a bit of a pun here. "Finally" tagless isn't really any less initial than free monads are. The final encoding would give us something codata-like, not just something described via a typeclass. That said, both are still "initial" as in "initial algebra".
"Initial" usually means "the initial algebra over a functor". This means that for some functor f the algebra `i : f (Initial f) -> Initial f` is initial in the category of f algebras. This means that for any X and `g : f X -> X` there's a universal function `universal g : Initial f -> X` such that `g . fmap (universal g) = universal g . i` at type `f (Initial f) -> X`. Or, we have a function
universal : (f x -> x) -> (Initial f -> x)
and we can actually define Initial in this way
universal' : Initial f -> (f x -> x) -> x
newtype Initial f = Initial { universal' :: forall x . (f x -> x) -> x }
In "strict Haskell" where we can act only finitely, we construct values of `Initial f` only by slapping finitely many layers of `f` on top of one another. For instance, when `f` is `data ConsF x = NilF | ConsF Int x` we can make a list [1, 2, 3] like
Initial $ \join ->
  let nil      = join NilF
      cons a x = join (ConsF a x)
  in  cons 1 (cons 2 (cons 3 nil))
In other words, Initial things are described by their construction.
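Dually, consuming one of these values just means handing it an algebra. A minimal sketch (`toList` is my name, not something from the thread):

-- fold an Initial value by supplying an algebra ConsF [Int] -> [Int];
-- the Church encoding applies it for us
toList :: Initial ConsF -> [Int]
toList (Initial run) = run alg
  where
    alg NilF         = []
    alg (ConsF n ns) = n : ns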
Clearly, this relates directly to initial objects constructed via, e.g., Free, since it does roughly the same thing. Free emphasizes the notion of layering things atop one another. In Lazy Haskell we can still use this layering to construct non-initial objects (more to come below) but if Free were transported to Strict Haskell it would clearly only construct initial things.
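To make the correspondence concrete, here's the same [1, 2, 3] written against a standard Free definition (a sketch; `list123` is my name):

data Free f a = Pure a | Free (f (Free f a))

-- [1, 2, 3] as literal layers of ConsF, one Free wrapper per layer
list123 :: Free ConsF a
list123 = Free (ConsF 1 (Free (ConsF 2 (Free (ConsF 3 (Free NilF))))))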
---
So what about Finally Tagless?
class List l where
  nil  :: l
  cons :: Int -> l -> l
We're still going to be constructing values of `List l => l` by applying finitely many layers! If anything, we're even more tied to this process now.
cons 1 (cons 2 (cons 3 nil)) :: List l => l
If we swap this out for explicit dictionary passing we can see that we're missing an argument like
data ListD l = ListD { nil :: l, cons :: Int -> l -> l }
\d -> cons d 1 (cons d 2 (cons d 3 (nil d))) :: ListD l -> l
and also that `ListD l` is equivalent to `ConsF l -> l`. It's really the same thing as the free method and is again operating initially.
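To see the interpretation in action, pick a concrete `l` (a sketch; `IntList` is my name):

newtype IntList = IntList [Int]

instance List IntList where
  nil                 = IntList []
  cons n (IntList ns) = IntList (n : ns)

-- cons 1 (cons 2 (cons 3 nil)) :: IntList  evaluates to  IntList [1, 2, 3]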
---
So what does it take to make something "final"? A final coalgebra of `f` would be an object `i : Final f -> f (Final f)` such that for any X and coalgebra `g : X -> f X` we have `universal g : X -> Final f` such that `i . universal g = fmap (universal g) . g` at type `X -> f (Final f)`. Or,
universal : (x -> f x) -> x -> Final f
which again can be used to define `Final f`
data Final f where
  Final :: (x, x -> f x) -> Final f
No longer can we define things by how they are constructed; now we must define them by how they are viewed. This opens the door to new kinds of structures, even in "Strict Haskell"
natsFrom :: Int -> Final ConsF
natsFrom n0 = Final (n0, \n -> ConsF n (n + 1))
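To observe it we can only ever unfold finitely many steps; a sketch (`takeF` is my name):

-- run the step function k times, collecting the emitted Ints
takeF :: Int -> Final ConsF -> [Int]
takeF 0 _                 = []
takeF k (Final (s, step)) =
  case step s of
    NilF       -> []
    ConsF n s' -> n : takeF (k - 1) (Final (s', step))

-- takeF 3 (natsFrom 0) == [0, 1, 2]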
I like the ideas of the article. An abstraction over IO and being able to distinguish between inherently serial operations and parallel ones on a different level are good ideas.
I do not think this article exemplifies these benefits, though. Instead, it spends most of its time implementing a toy DSL. I want to see what Free can do that IO cannot (functionally or stylistically).
What is really cool about this approach is that, unlike the counterpart design patterns you might find in an imperative OO language (the Interpreter and Command patterns), in a strongly, statically typed functional language we can rely on the guarantees the type system gives us. Abstractions such as the Free Applicative allow for optimizations that would be merely convention in a less type-enforcing system.
While ideal from a theoretical perspective, as a practical matter, can you imagine renaming a 10 GB file by first creating a copy of it, and then deleting the old version?!?
That kind of inefficiency is not practical for most real world programs!
To solve this problem, we can write an optimizing interpreter, which is a special kind of interpreter that can detect patterns and substitute them with semantically equivalent but faster alternatives.
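As a sketch of the idea (a simplified list-of-instructions stand-in for the article's free structure; the constructors are hypothetical):

data FileOp
  = CopyFile   FilePath FilePath
  | DeleteFile FilePath
  | RenameFile FilePath FilePath
  deriving Show

-- detect "copy a b; delete a" and rewrite it to the semantically
-- equivalent but much faster "rename a b"
optimize :: [FileOp] -> [FileOp]
optimize (CopyFile a b : DeleteFile a' : rest)
  | a == a' = RenameFile a b : optimize rest
optimize (op : rest) = op : optimize rest
optimize []          = []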
HtDP changed the way I look at programming. But today I'm a diehard OOP fanatic. I want code that reflects the way I think, not code I have to work to understand.
I did not know but am not surprised to find that people write procedural Haskell. Procedural is how we all start out thinking about things. You do A, then B, to reach goal C. It takes time and experience and deliberate practice to improve upon that way of solving problems.
Functional moves the code in the direction of math. OOP moves it in the direction of the domain. Math is harder to understand than domain logic, so you will inevitably hack together an object system on top of your FP in order to implement the domain logic.
Both styles have properties that work better for certain domain concepts. What's nice about modern programming languages is that they build in primitives so you can use whichever style fits the concept you're fleshing out.
But trying to do everything functionally is ultimately counter-productive, in my opinion. It's the wrong format to declare high-level domain logic in, because high-level domain logic is that which is closest to human thought, not the underlying math.
Any FP 'architecture' will pretty much be object oriented. Math may treat state as ugly cruft, but humans need to group information close to where it's needed and in ways that make sense, and that means state. Don't let the quest for mathematical beauty get in the way of solving your problem.
I think your marriages of OOP to domain logic and FP to math are incredibly off base. You can most definitely express domain logic in FP languages. The difference between FP and OOP, to me, is declarative and immutable versus imperative and mutable. You could also have declarative OOP, but in practice I've found most OOP code tends to be objects wrapping up procedures. But saying you can't express a domain properly in FP is laughable.
I would discuss the points you're making but your tone has me assuming that any attempt to do so would devolve into a fruitless argument. I'll just say that there's more than one way to look at the differences between programming paradigms.
Most bugs I make are about state being out of sync - in fact every bug that is not because I misunderstood a problem is because state is out of sync.
In OOP programs, every object usually has its own state, and it is updated all the time. If I can describe the world as one big hash-map, and my whole program can be while(true){writeToScreen(render(appState))}, then I'm very happy.
Haskell, which you seem to dislike, has a lot of syntax and some new concepts you need to pick up to get going. I recommend you try out something that is extremely simple, like Clojure.
In Clojure there are not many things you need to learn before being dangerous. The data structures are lists '(), vectors [], hash-maps {} and sets #{}. There are functions (fn [] ) and macros, but you don't need them.
When coding functionally you can be sure you don't fuck something up by changing some state somewhere. And you don't need to mock data. And you don't need to do ORM. It's freaking great.
But I don't need Haskell or Clojure or anything like that to solve my problems. Ruby works just fine for me. I have lists, vectors, hash-maps and sets in Ruby. I can data pipeline simply and idiomatically. I don't work with math-heavy domains, so I don't need heavy math (type system) to write my programs.
When I have bugs in my code it's because I misunderstood the problem domain. State is missing somewhere, state that was unaccounted for and so is running loose in the code. The process of fixing the bug also illuminates what I was missing about the domain. Working on a program is the process of tightening the code around the domain.
> If I can describe the world as one big hash-map, and my whole program can be while(true){writeToScreen(render(appState))}, then I'm very happy.
I want my code to be flexible, and I only want to represent concepts once. I want to be able to interact with it on the command line, on the web, as an API. To do this I need to manage all the different ways other systems can get at the domain logic because otherwise I'm reimplementing parts of the system inside other systems. My program needs to be self-contained.
The best way I've seen to write and manage this kind of flexible code is with dynamic typing and OOP. Dynamic typing isn't a necessity, but it is a big help. OOP just makes everything sane.
"I don't work with math-heavy domains, so I don't need heavy math (type system) to write my programs."
I don't think you know what you're missing.
Type systems can be used to enforce pretty arbitrary things. Representation of data can be handled fairly well automatically, so types describing representation are only marginally useful (mostly where we care about interchange or care a lot about performance). However, if you make your types domain relevant, they can help you with domain relevant things. This can be simple - "I don't want to pass a bid price where I expected an ask price" - or it can be surprisingly sophisticated; I was able to ensure that certain actions were only taken on the correct threads, checked at compile time, in C - this helped me tremendously in refactoring when I discovered some piece of logic needed to live somewhere else. I really can't imagine doing that work without type checks, and it wasn't the slightest bit "math heavy".
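The bid/ask case sketched in Haskell rather than C (the example is the commenter's, the code is mine):

newtype Bid = Bid Double
newtype Ask = Ask Double

spread :: Bid -> Ask -> Double
spread (Bid b) (Ask a) = a - b

-- spread (Ask 10.0) (Bid 9.5) is rejected at compile time,
-- even though both types have the same representation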
Can you explain what you mean by OOP? I personally think FP is a misnomer; as an architectural style it would more properly be called "algebra-oriented" programming (where algebra means general algebra and category theory). You basically design the API based on some properties (equations) between objects.
If that's what you end up with, it's fine, but I don't think it should be called OOP, since this is not how OOP was understood in almost any OOP language (in particular, interfaces won't let you express equational relationships).
What I mean by OOP is representing domain concepts as objects with state. The objects merge interfaces (methods) with state. The objects and their relationships are the primary focus and the code is structured around them.
FP takes a data-first approach, making program flow the primary focus. State is, as much as possible, turned into data and otherwise squeezed out of the picture.
FP programmers go to lengths to eliminate state that I would consider excessive, only to recreate that state later when they have to start reasoning about the whole system.
As a diehard Rubyist, I find Ruby's semantics to be perfectly suited for any task of abstraction you want to throw at it. Just throw the errant code into a module. Turn it into a class when you start to need state.
I'll take an example from my current side project. I'm integrating the Pocket API into my app. I don't have time to rigorously implement a Pocket API consumer, but what I can do is implement the one API call that I need right now.
Instead of taking the time to do everything right the first time, I can simply make a single file with a module and call methods off of that module. As I implement more calls, I'll have a better idea of what kinds of state I'm managing, so I can refactor the code appropriately when the time is due.
It starts off procedural, code that just does things one after the other. It might acquire functional flavors as I work out the data pipeline. But eventually it will turn into proper objects with classes that can be passed around to whatever needs them. Things that would look like major architectural changes from the data pipeline point of view, are simple when you think about them as objects. Just implement the required interface and pass the instances to who needs them.
Just wondering, if you want code that reflects the way you think then how do you think about things and models that inherently don't map into objects?
I'm kind of the opposite: I rarely use objects (as in methods welded to holders of state) because they only fit in certain domains but when they do they are indispensable.
Hmm, interesting question. I guess I don't see how there are a great many things that couldn't be called objects. Particularly anything you can build a data structure around can be thought of as an object. So if your code has data to process, that data can be instantiated into objects.
But in this scenario, I would simply leave the code in an uninstantiated class, called a module in Ruby. I'd organize it as best I can with methods and perhaps submodules. My classes evolve organically as my application grows. Many of my classes start off as modules before I figure out what their state should look like.
I program like this all the time in Haskell. Much of the latter bits can be replaced by so-called "mtl-style" typeclasses (used similarly to Oleg's Finally Tagless idea). I gave a brief talk on this at last year's LambdaConf but didn't really give it sufficient space, I suppose. I think the whole Prism/Inject machinery can be completely eliminated this way.
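A minimal sketch of the mtl style (the class and names are mine): the effect becomes a typeclass constraint, and instances choose the interpretation.

class Monad m => MonadLog m where
  logLine :: String -> m ()

-- program against the interface, not a concrete monad
greet :: MonadLog m => String -> m ()
greet name = logLine ("hello, " ++ name)

-- one interpretation among many: real IO
instance MonadLog IO where
  logLine = putStrLn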
I am not sure you can get rid of the IO monad. It's how Haskell programs interact with the outside world, and I think IO is a primitive that you can't build from other things.
But in metaphorical (architectural) sense, yeah, you can probably get rid of it and use different monadic datatypes for different parts of the program that access different pieces in the world.
However, I am not clear how you then combine/serialize those different monads. It seems eventually you will get something like IO monad again. I am not sure but I think there needs to be some master monad going on in all Haskell programs that (potentially) serializes all the actions to the outside world.
There has to be an RTS interface. Right now this is "there exists a value called main in the Main module of type IO () which the RTS magically understands". This is annoying because it moves so much of IO's reasoning into "RTS Magic".
There are better models of IO (some discussion here: http://comonad.com/reader/2011/free-monads-for-less-3/) which essentially boil down to "a bidirectional communication channel with a Real Machine", and these can provide a foothold for better semantics. Similar ideas drove the first "IO" based on input and output streams (though these newer models handle synchronization the way we need).
There's also room for other ideas of what an RTS could be. For instance, we might embed 99% of Haskell into a real-time RTS and have real-world interaction be governed by something like FRP.
Yeah, but that system sucked. You essentially sent commands to an external interpreter to do I/O. Simon Peyton Jones calls it "embarrassing" in Coders at Work.
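For reference, that stream-based system looked roughly like this (a simplified sketch of the Haskell 1.x types):

data Request  = ReadChan String | AppendChan String String
data Response = Success | Str String | Failure IOError

-- a program was a lazy function from the RTS's responses
-- to the requests it wanted performed
type Dialogue = [Response] -> [Request]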
Downvoted, and yet - is it not wise to start with a headline that is clear enough to attract a broad audience? If the article really only wants to attract a smaller slice, that's fine, but I asked a valid question (in my opinion).
Seconded. My first thought after reading the title was that it might be about a post IEEE754 floating point standard, or about a new way to implement floating point calculations in hardware...
I have no problem with reposts; I have a problem with titles that use abbreviations nobody outside the domain can possibly know, and then use the same unexplained abbreviation in the repost.
The normal thing to do would be to understand that not everybody knows what "FP" is (Free Pascal? floating point?) and to use the full term in the "corrected" repost. Perhaps the original post didn't get any attention because people don't care about floating point.
Abbreviations are useful when you've mentioned a term 100 times and spelling it out gets annoying, but you don't mention FP in a title 100 times, do you? There should be a ban on abbreviations in HN titles unless they are well known (CPU, RAM, MB, ...).
A title should carry enough information that if you stopped a random stranger on the street, they could understand or guess what it is about. The first thing a rubber duck (https://en.wikipedia.org/wiki/Rubber_duck_debugging) would ask is: "what is FP?" - and that means the title is wrong and deserves zero comments.
Edit: another thing - a title is written only once but read hundreds of thousands of times; you are literally wasting man-hours here.
[0] https://en.wikipedia.org/wiki/COMEFROM