Modern Functional Programming: The Onion Architecture (degoes.net)
283 points by Roxxik on Sept 29, 2016 | 95 comments



Gary Bernhardt describes a similar architecture, which he calls "Functional Core, Imperative Shell", in his Boundaries talk[1].

"Purely functional code makes some things easier to understand: because values don't change, you can call functions and know that only their return value matters—they don't change anything outside themselves. But this makes many real-world applications difficult: how do you write to a database, or to the screen?"

"This design has many nice side effects. For example, testing the functional pieces is very easy, and it often naturally allows isolated testing with no test doubles. It also leads to an imperative shell with few conditionals, making reasoning about the program's state over time much easier."

[1] https://www.destroyallsoftware.com/talks/boundaries

[2] https://www.destroyallsoftware.com/screencasts/catalog/funct...


At the lowest levels, the pattern can be reversed: you provide a nice functional shell around a little bit of imperative core (e.g. implementing "map" with a loop).
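
For instance, in Haskell (a minimal sketch; mapLoop is a made-up name, kept monomorphic to stay short):

    import Control.Monad (forM_)
    import Control.Monad.ST (ST, runST)
    import Data.Array.ST (STUArray, getElems, newListArray, readArray, writeArray)
    
    -- Functional shell around an imperative core: callers see a pure
    -- function, but internally we loop over a mutable array in ST.
    mapLoop :: (Int -> Int) -> [Int] -> [Int]
    mapLoop f xs = runST $ do
      let n = length xs
      arr <- newListArray (1, n) xs :: ST s (STUArray s Int Int)
      forM_ [1 .. n] $ \i -> do
        v <- readArray arr i
        writeArray arr i (f v)
      getElems arr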


You are referring to, for example: http://clojure.org/reference/transients

Combining the two ideas:

Transient imperative logic in the core (5%), Functional mantle (90%), Side-effecting imperative crust (5%).


Yes, because some purely functional approaches cannot beat imperative ones when it comes to resource usage.


Could you give a concrete example?


Most in-place algorithms. E.g. quicksort

You can do merge sort in Haskell asymptotically as well as C, but not quicksort (because you can't mutate things in place).

I am of course omitting things like ST which do give you this sort of ability in Haskell, but I doubt that's what the OP meant by "purely functional".


ST is exactly the same sort of thing junke was talking about, except it also uses the type system to ensure the imperative core doesn't leak into the outside world.


I think the grandparent comment boils down to saying "there's no known persistent data structure with O(1) random access for read and write". Whether you need such a structure (for a cache, histogram, frame buffer, etc.) is up to you.

Some functional languages allow transient data structures (via something like ST or uniqueness types), so you can match the performance of any imperative algorithm at the cost of ugly code.
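
In Haskell, for instance, the histogram case mentioned above looks something like this (a sketch; histogram is a made-up name). The writes are constant-time, in-place mutations, but callers only ever see a pure value:

    import Control.Monad (forM_)
    import Data.Array.ST (runSTUArray, newArray, readArray, writeArray)
    import Data.Array.Unboxed (UArray)
    
    -- Assumes every sample is in [0, nBuckets). The STUArray is mutated
    -- in place inside ST and frozen into a pure UArray at the end.
    histogram :: Int -> [Int] -> UArray Int Int
    histogram nBuckets samples = runSTUArray $ do
      arr <- newArray (0, nBuckets - 1) 0
      forM_ samples $ \x -> do
        n <- readArray arr x
        writeArray arr x (n + 1)
      return arr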


I was about to cite Okasaki, but I found a detailed answer over there:

http://stackoverflow.com/a/1990580

> Note also that all of this discusses only asymptotic running times. Many techniques for implementing purely functional data structures give you a certain amount of constant factor slowdown, due to extra bookkeeping necessary for them to work, and implementation details of the language in question. The benefits of purely functional data structures may outweigh these constant factor slowdowns, so you will generally need to make trade-offs based on the problem in question.


Hash tables are a big example. You can implement pure associative arrays, and they have some nice benefits, but the best functional implementations still have much poorer asymptotic performance than a basic hash table.

Pure data structures also tend to include a lot of extra pointers. That can create a lot of overhead. A pure list of 32-bit integers will need either 4 or 8 bytes of overhead (depending on whether you're running 32 or 64 bit) for every item stored. A resizable array just needs whatever empty space it preallocates, plus the occasional memcpy when it needs to add capacity. Also locality of reference and all that fun stuff.


Another example: caching of results, transparent to the user of your function. If your function is otherwise pure, you can simply hash its parameters for a key into your result index. Examples where I've used this: caching of regex results, and replacing file reads with modification-date polls, reading from cache if unchanged.
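
A rough sketch of such a wrapper in Haskell (memoize is a hypothetical helper; the cache lives in an IORef, so the wrapped function ends up in IO):

    import Data.IORef (modifyIORef', newIORef, readIORef)
    import qualified Data.Map.Strict as Map
    
    -- Wrap a pure function with a transparent result cache. Callers still
    -- supply only the original argument; the cache is an internal detail.
    memoize :: Ord k => (k -> v) -> IO (k -> IO v)
    memoize f = do
      cache <- newIORef Map.empty
      return $ \k -> do
        m <- readIORef cache
        case Map.lookup k m of
          Just v  -> return v
          Nothing -> do
            let v = f k
            modifyIORef' cache (Map.insert k v)
            return v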


I was disappointed to learn that Quicksort implemented functionally is almost always much much slower than if implemented procedurally, to the point that functional langs use other sorts such as mergesort. What makes it extra annoying is that Quicksort implemented functionally is so damn elegant!

http://stackoverflow.com/questions/7717691/why-is-the-minima...
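
For reference, the minimal Haskell definition the linked question discusses is roughly:

    -- Elegant, but it allocates two fresh lists at every level of
    -- recursion instead of partitioning in place.
    quicksort :: Ord a => [a] -> [a]
    quicksort []     = []
    quicksort (p:xs) =
      quicksort [x | x <- xs, x < p] ++ [p] ++ quicksort [x | x <- xs, x >= p]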


Neural networks are a good example where mutable structures are the only realistic choice. You're dealing with fully connected networks of millions of nodes, each of which needs to be updated multiple times for each layer at every pass.


The main thing being that side effects still aren't happening in the core, so the core is still referentially transparent.


mmmmmmmmmmmm. Functional sandwich.


This kind of abstraction layer is really challenging though unless you're totally aware of the extent of your imperative code's side effects.


Isn't this already happening, though, at a low level (i.e. ASM)?


Yes, but the domain of your computer memory is surprisingly disjoint from the domain of your business logic, and that's really what matters here.

Consider a counter-example: calling a functional method that takes an object and creates a copy with a new field updated (a classic pattern for introducing immutability to a mutable environment). What if, internally, the constructor makes a log call or increments a shared resource tally?

Not unreasonable, but in a functional context an update now has weird side effects that create misleading results.


CakeML, a Standard ML subset, can compile to ASM with a mathematical proof of correctness. You know your ASM will be what you expected it to be. There are also tools like QuickCheck, QuickSpec, and SPARK that can automate lots of analysis. Tons of work like this in compilers and static analysis, with smart people always improving them.

Good chance your business logic will not be handled that well. So, good to structure it in a way to facilitate easy analysis or optimization by tools that currently exist or are in development. You get long-term benefits.


But the abstraction there is a lot more battle-hardened than your business logic at $COMPANY. Not that compiler bugs never happen, but it's comparatively quite rare.


I understand the value of referential transparency and how it makes "certain" things easy, but saying that it automatically makes testing functional code easy is a myth. Sometimes it does, sometimes it doesn't.

If you want to stick to referential transparency, you can't use dependency injection: you have to pass all the parameters the function needs. None of these can be implicit or belong to a field on the class since that would mean side effects. The `Reader` monad is not dependency injection, it's dependency passing and it comes with a lot of unpleasant effects on your code.

And because of that, functional code is often very tedious to test. Actually, in my experience, there is a clear tension between code that's referentially transparent and code that's easily testable. In practice, you have to pick one, you can't have both.


> And because of that, functional code is often very tedious to test. Actually, in my experience, there is a clear tension between code that's referentially transparent and code that's easily testable. In practice, you have to pick one, you can't have both.

In all my years of software development, I've never encountered a referentially transparent function that was even remotely hard to test, let alone harder than one with environmental baggage. In fact, being referentially transparent opens you up to new kinds of powerful testing strategies that are nearly impossible if the function isn't, like QuickCheck. (I can't recommend QuickCheck highly enough; it's worth the little learning curve 100x over.)
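
For example, a trivial property test (prop_reverseInvolutive is just an illustration): because reverse is pure, there is no environment to stub, and any generated input will do:

    import Test.QuickCheck (quickCheck)
    
    -- QuickCheck generates hundreds of random inputs and checks the law.
    prop_reverseInvolutive :: [Int] -> Bool
    prop_reverseInvolutive xs = reverse (reverse xs) == xs
    
    main :: IO ()
    main = quickCheck prop_reverseInvolutive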


First of all, passing parameters in functions is "dependency injection". And what you're describing is a really good thing.

Let's be honest: most dependency injection frameworks and techniques are about hiding junk under the rug. But they fix the symptoms, not the disease. You see, if you find yourself having components with too many dependencies, feeling pain on initialization, the problem is that you have too many dependencies, which actually means you have too much tight coupling, not that it is hard to initialize them. At this point you should embrace that pain and treat the actual disease.

Also, functional programming naturally leads to building descriptions of what you want, in a declarative way. So instead of depending directly on services that trigger side effects, like a DB component that does inserts or something doing HTTP requests or whatever, you build your application to trigger events that will eventually be linked to those side-effect-triggering services.

There are multiple ways of doing this. For example you could have a channel / queue of messages, with listeners waiting for events on that queue. And TFA actually speaks about the Free monad. Well the Free monad is about separating the business logic from the needed side-effects, the idea being to describe your business logic in a pure way and then build an interpreter that will go over the resulting signals and trigger whatever effects you want. There's no dependency injection needed anymore, because you achieve decoupling.
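
To make that concrete, here is a minimal sketch using the free package (StoreF, put, get and the stubbed interpreter are all made-up names):

    {-# LANGUAGE DeriveFunctor #-}
    import Control.Monad.Free (Free, foldFree, liftF)
    
    -- The instruction set: the business logic only *describes* effects.
    data StoreF next
      = Put String String next
      | Get String (Maybe String -> next)
      deriving Functor
    
    put :: String -> String -> Free StoreF ()
    put k v = liftF (Put k v ())
    
    get :: String -> Free StoreF (Maybe String)
    get k = liftF (Get k id)
    
    -- Pure business logic: just a value, no side effects performed yet.
    program :: Free StoreF (Maybe String)
    program = do
      put "user:1" "alice"
      get "user:1"
    
    -- One interpreter per environment; a test interpreter could run the
    -- same program against an in-memory map (the lookup is stubbed here).
    runIO :: StoreF a -> IO a
    runIO (Put k v next) = putStrLn ("PUT " ++ k ++ " = " ++ v) >> return next
    runIO (Get _ next)   = return (next (Just "alice"))
    
    main :: IO ()
    main = foldFree runIO program >>= print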

> And because of that, functional code is often very tedious to test.

That hasn't been my experience at all, quite the contrary: we've had really good results, and we're doing such a good job of pushing the side effects to the edge that we no longer care to unit-test side-effecting code. And yes, I believe you've had that experience, but I think it happens often with people new to FP who try to shoehorn their old experience into the new paradigm.

E.g. do you need a component that needs to take input from a database? No, it doesn't have to depend on your database component at all. Do you need a component that has to insert stuff into the database? No, it doesn't have to depend on your database component at all. Etc.


> First of all, passing parameters in functions is "dependency injection". And what you're describing is a really good thing.

It's only injection if the parameter is passed automatically by a framework. Otherwise, it's parameter passing.

And it's only a good thing if you value referential transparency over ease of testing and encapsulation. Not everybody does (and personally, sometimes I do and sometimes I don't).


Dude you're mixing up terms. Quoting from https://en.wikipedia.org/wiki/Dependency_injection : "A dependency is an object that can be used (a service). An injection is the passing of a dependency to a dependent object (a client) that would use it". Basically if A depends on B, but A does not initialize B, but is instead receiving it as a parameter from somewhere, then that's dependency injection.

> it's only a good thing if you value referential transparency over ease of testing and encapsulation

I get the feeling that you're mixing up terms again, as you cannot have ease of testing or good encapsulation without referential transparency.


"passing parameters in functions is 'dependency injection'"

No, it's not.


You're agreeing with me.

I was saying that passing parameters to functions is "dependency passing", not "dependency injection".


When you do dependency injection well, you don't inject every object; that would be horrible/impossible. What you may have noticed is that there are two kinds of objects, ones that you inject and ones that you don't. The ones you don't are things like numbers, strings and maps/sets; things that you treat as values. The other objects do things, I tend to call them services.

In order to do a straight-forward conversion to functional programming, I suggest leaving the values as they are and each service becomes a free monad transformer. So, instead of having a logger, you have a logging monad transformer that has a log instruction. Instead of having a database, you have a database monad transformer that has a query instruction, etc.

You are then free (no pun intended) to replace the interpreters of these free monads during testing with whatever mock implementation you please and the result is a more principled dependency injection inspired style.

Actually, I would constrain the monad type via type classes, rather than using free monads, but the approaches are equivalent.


I agree there is a dichotomy between objects you inject and objects you don't but I think your characterization is incorrect: what decides if an object needs to be injected is not tied to its type but to its role. Sometimes, I inject integers or strings or other primitive types. Other times, I pass them explicitly.

The decision is made based on whether that object is a runtime object (i.e. decided by the user or some other factor that cannot be known when the app starts) or a dependency that's decided early and won't change through the life of the app.

Either way, this aspect is independent of the point I was making above and which is that functional code is not inherently easier to test than procedural code.


> Either way, this aspect is independent of the point I was making above and which is that functional code is not inherently easier to test than procedural code.

That's true, you can certainly just write procedural code in functional languages and there's no benefit. However, you also have the ability to structure code in a way that is testable and is actually more structured than the equivalent OOP style. By which I mean: the operations on the dependencies are more constrained (since they can't be replaced or duplicated, etc).


> I understand the value of referential transparency and how it makes "certain" things easy, but saying that it automatically makes testing functional code easy is a myth. Sometimes it does, sometimes it doesn't.

> And because of that, functional code is often very tedious to test.

Your argument rests on a fundamentally wrong assumption. Expressions in functional programs do not have to be (and indeed are almost never) referentially transparent. Just consider global or module-level immutable variables. Those function names? Also not referentially transparent. This goes all the way back to free variables in the lambda calculus: https://en.wikipedia.org/wiki/Lambda_calculus#Free_variables

Further, dependency injection is a completely idiotic and broken pattern and IMO the worst thing to come out of object oriented programming. Once you have dynamic scoping (surprise! also not referentially transparent) everything that DI does (and much more) becomes trivial.


My own opinions on the matter aside, I don't fully understand why anyone who likes dynamic scoping would dislike dependency injection.


Because of things like: https://github.com/google/guice

4k LOC of "lightweight" garbage for... variable lookups?


To put that into perspective, the Squeak interpreter for the gold standard of OOP languages, Smalltalk, was about 3,951 LOC of Smalltalk for the logic to handle the language and 1,681 LOC of C for the OS interface. A "lightweight" scheme for dependency injection took more LOC to express than a whole Smalltalk interpreter. And to hack around bad OOP or tooling in the first place.

This might not mean anything. It just jumped out in my brain for some reason.


So your issue is with the implementation? I can understand that, I guess. Personally I don't like either dynamic scoping or DI; it had just never occurred to me that someone might prefer one to the other.


The reader monad removes 90% of the syntactic overhead of dependency passing. That's the point.

If you want dependency injection as you've defined it, you can use (if we're talking about Haskell) typeclasses or, by extension, implicit parameters, to do dependency injection in the way you like.

It's still much safer and easier to reason about than Java-style dynamic dependency injection.


Actually, `Reader` adds a lot of boilerplate that's not present with traditional @Inject injection:

- All your functions now need to return a Reader[C,A] instead of just A

- You need to pass all the parameters explicitly in each method signature as opposed to passing just the ones that don't need to be injected.


There is a very small amount of boilerplate. And I would argue strongly that it's a good thing; it indicates to the reader of the code that an object's behavior reads from some initialized value, or equivalently that its behavior depends on some initial value which remains fixed through the computation. The reader monad gives you a simple language to express this common pattern, as well as the ability to easily set the behavior.

    -- This function will always return the same thing given the same input
    function1 :: Int -> String
    
    -- This function depends on reading some configuration which 
    -- needs to be provided upstream
    function2 :: Int -> Reader Configuration String
You don't need to pass parameters explicitly in each signature; indeed this is exactly what the reader monad obviates: the details of what is being read are not expressed inside the function (until the point that they are actually used). This is hardly an onerous burden, in my opinion. And if typing `Reader X Y` is too annoying, you can just make a type alias.


What if `function2` needs to log something? Without dependency injection, you need to pass that logger to the function. With dependency injection, that logger is available without having to pollute the method signature with an implementation detail.


If `function2` needs to log something, then it should have a type signature which reflects that.

Logging is a side-effect. Logging requires configuration to be passed in; it means having access to some file descriptor or other object to interact with, it could potentially fail to connect, or cause a computation to hang, or cause a service to trigger, or make a disk run out of space, etc. If a function wants to log something it's not a simple reader anymore but something more complex. The fact that in Haskell this is reflected in the type signature of the function is again a good thing. It's not "polluting" the method signature; it's putting more information in the method signature. Not letting you hide side effects in a computation that appears to have no externalities is a strength of Haskell, not a weakness.


> If `function2` needs to log something, then it should have a type signature which reflects that.

Yes if you value referential transparency.

No if you value encapsulation.

The fact that `function2` is logging stuff is an implementation detail that callers shouldn't care about. They should certainly not be forced to pass that function a logger.

What if that function decides that on top of logging, it wants to store stuff in a database. Should all callers suddenly find some kind of database to pass to that function too?


> The fact that `function2` is logging stuff is an implementation detail that callers shouldn't care about.

On the contrary, I think it definitely matters. If a function is going to log something, I want to know about it. Those logs could cause me problems (e.g. polluting my stdout or attempting to write to a file they don't have permissions on), or I might want to control where those logs go, what the log level is, what the format is, et cetera. This is absolutely something I want to know about.

> What if that function decides that on top of logging, it wants to store stuff in a database. Should all callers suddenly find some kind of database to pass to that function too?

Yes, a thousand times yes. Why would I want a function to be storing stuff in a database without my knowledge? If a function is going to write to a database, it's all the more important that the caller is aware of that. How can I access whatever it stores? How do I know what database it's writing to? How can I be sure that database is properly initialized and/or torn down? How do I know whether the function is threadsafe? How do I know it's a secure connection? Et cetera.

If you want to write a function which does "arbitrary side effects", easy: just write all of your code in the IO monad.

    -- It reverses a string... and who knows what else!
    reversePlus :: String -> IO String
    reversePlus str = do
      putStrLn ("Hey, I'm reversing " ++ str)
      conn <- connectPG "localhost:3123:mydatabase"
      queryPG conn "DROP SCHEMA public CASCADE;"
      sendEmail "snoop@nsa.gov" "hey guys what's up"
      return $ reverse str
Of course, I don't recommend this...


> > The fact that `function2` is logging stuff is an implementation detail that callers shouldn't care about.

> On the contrary, I think it definitely matters. If a function is going to log something, I want to know about it.

You are missing the forest for the trees.

First of all, why you'd care that a function you're calling is logging stuff is a bit beyond me but fine. Think of something else. Maybe the function is calling memcache, or storing stuff in the database, or sending a UDP packet to a message bus, or is querying the location service. Surely you can agree that there are things this function does that you don't care about if all you need is an Account given a user id, right?

These things you don't care about are called implementation details. Callers shouldn't know about them, therefore they shouldn't have to pass them in parameters.

That's what dependency injection (injection, not passing) does for you. It lets you call

    val account = getAccount(userId)
instead of

    val account = getAccount(userId, logger, memCache, db, messageBus)
The first example is using dependency injection and correctly hides the implementation details of `getAccount` while not being referentially transparent.

The second example is referentially transparent but exposes all kinds of private implementation details, making the callers' life very difficult, if not impossible (how are they supposed to come up with a messageBus when all they have is a user id?).


I agree on the benefits of the first example but is that really dependency injection? Or is it just an abstraction layer?

E.g., what if getAccount was hard-coded to initialize all the other dependencies it needed on the fly for each call?

If that's still considered DI then it's a much looser term than I understood it to be.


A small example.

    {-# LANGUAGE GeneralizedNewtypeDeriving #-}
    import Prelude hiding (log)
    import Control.Monad.IO.Class (MonadIO, liftIO)
    import Control.Monad.Reader (MonadReader, ReaderT)
    
    class (MonadIO m) => HasLogging m where
      log :: String -> m ()
    
    data AppConfig = AppConfig { stuff :: Int }
    
    newtype MyApp a = MyApp { runApp :: ReaderT AppConfig IO a }
      deriving (Functor, Applicative, Monad, MonadIO, MonadReader AppConfig)
    
    instance HasLogging MyApp where
      log s = liftIO (putStrLn s)
    
    function2 :: Int -> MyApp String
    function2 x = do
      log "hey guys I'm logging"
      return (show x)
    
    -- or without specifying the base monad, yay abstraction
    
    function2' x = do
      log "heyooo logging here"
      return (show x)
    
    -- Haskell will infer this type:
    -- function2' :: (HasLogging m, Show a) => a -> m String


Given that you'll notice whether or not function2 does any logging, it's definitely not an implementation detail.


Not sure I agree with you on that, but monads handle your concern nicely. For example, I can write some code that does some database operations, and by parameterizing the code over the type of database actions, my code doesn't care if it's calling a "real" database action or a "fake" one for testing or whatever. That is, the code is completely agnostic as to the implementation details, but we still have full visibility and static checking when we actually run the database code, because we have to specify which database implementation we want to use. Boom, statically verified dependency injection.


"Implementation detail"

We are in strong disagreement about what constitutes an "implementation detail".

But also, you can just use a monad transformer stack and add whatever side-effectful operations you want into it, use it as needed. Boom, dependency injection. And more control over what your functions actually do is there when you need it.


You might be interested in reflection/implicit configurations: https://hackage.haskell.org/package/reflection

This (ab)uses Haskell's type class mechanism to essentially implement dependency injection directly. The implementation looks a bit dirty, but this is a feature that more modern approaches to generic programming can handle natively (e.g., http://homepages.inf.ed.ac.uk/wadler/papers/implicits/implic... ).

In particular, there is nothing shady about the semantics of implicitly passing configuration values/dependencies. Your functions are still referentially transparent if you treat the implicit dependencies as additional parameters (which is what they are, no matter how you implement it).


I like the 'weakly pure' concept in D. A function like

  pure int frignate(database db, const config cfg);
cannot change anything except the database object (and anything reachable from it). Can not mutate the environment. Can not mutate the config object parameter. This is finer control than pure functional programming and safer than imperative/object-oriented programming.


You can achieve this level of control using monad transformers or extensible effects -- this is a standard technique in Haskell.

    frignate :: (MonadDB m) => Config -> m Int
    frignate cfg = do
      db <- getDB
      ...
      return 1
And it is composable, so if a function calls a function that uses one of the managed resources then the requirement propagates upward.

And you can swap in non-IO based instances for testing, or whatever else you want.
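
For instance, a pure test instance might look like this (a sketch, assuming a hypothetical MonadDB class along the lines of the snippet above; Database and TestM are made-up names):

    {-# LANGUAGE GeneralizedNewtypeDeriving #-}
    import Control.Monad.State (State, evalState, get)
    
    data Database = Database { userCount :: Int }  -- stand-in type
    
    class Monad m => MonadDB m where
      getDB :: m Database
    
    -- Test instance: the "database" is just pure state, no IO anywhere.
    newtype TestM a = TestM (State Database a)
      deriving (Functor, Applicative, Monad)
    
    instance MonadDB TestM where
      getDB = TestM get
    
    runTestM :: Database -> TestM a -> a
    runTestM db (TestM m) = evalState m db
    
    -- frignate would then run unchanged:
    --   runTestM (Database 0) (frignate cfg)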


I'm not sure how D's purity system works, but that doesn't seem very pure to me. Mutating the database object is a globally visible effect, after all.


The guarantee it's making is that it's not going to manipulate program state that is outside the scope of "db". That is a pretty big deal for a systems developer since, as I discovered while working with D, many standard library functions that you wouldn't have given a second thought about happen to do impure things like set a processor flag. We aren't even talking about your own program.

From an abstracted viewpoint, it's not great, since a whole database covers potentially a lot of scope, and you may not want to care about the details of your floating point calculations in hardware, but in a concrete sense this is totally correct!


I'd like to use the occasion to say that I'm really waiting for the video of his talk "Ideology" given at Strange Loop 2015.


If it helps at all, here's a transcript of that talk: https://github.com/strangeloop/StrangeLoop2015/blob/master/t...


Same, but for whatever he talked about at PyCon 2015.


I'm enjoying watching the Boundaries talk, especially when he converts the serial code to concurrent code using actors.


This is one of the best talks I've ever seen, highly recommend anything by Gary Bernhardt


I was quite disappointed the author chose not to cite Bernhardt's work, as this is pretty clearly derivative of that talk and other work in the community around this design.

I'm certain I've heard Hickey talk about it a few years ago as well. Trying to remember where.


I don't think it's derivative. He mentions the onion architecture, which is an older concept. I really liked the way Gary presented the idea, but he wasn't the originator.


> Free monads permit unlimited introspection and transformation of the structure of your program; Free monads allow minimal specification of each semantic layer, since performance can be optimized via analysis and transformation.

That is not true and this overselling of the Free monad is hurting the concept.

The Free monad is nothing more than the flatMap/bind operation, specified as a data-structure, much like how a binary-search tree describes binary search. And this means an imposed ordering of operations and loss of information due to computations being suspended by means of functions.

You see, if the statement I'm disagreeing with were true, then you'd be able to build something like .NET LINQ on top of Free. But you can't.


Free monads can be inspected "up to the first lambda", which is always at least one operation in, and it's what enables things like purely functional mocking. However, other free structures, such as free applicatives, can be completely introspected:

https://www.youtube.com/watch?v=H28QqxO7Ihc

The point is that free monads allow introspection up to the information-theoretic limit (obviously you can't inspect a program whose structure depends on a runtime value), while transformers do not allow any introspection at all.


Maybe John is attributing more magic to the common free monad patterns than is actually there? Certainly I had stars in my eyes when I first read:

https://people.cs.kuleuven.be/~tom.schrijvers/Research/paper...

> then you'd be able to build something like .NET LINQ on top of Free. But you can't.

You sort of can. You could build an interpreter of Linq commands and then have an executor interpret that. Which I suppose could be argued is making Linq. :)


Maybe this is a bit trite, but isn't the ordering given in Free simply the order of the code? At what stage does the loss of information happen?


Yeah, the free monad structures involve lots of lambdas. It's not so much that information is "lost", more that you cannot see beyond the next lambda abstraction.

From a DSL perspective, it's like you can only inspect the program so far as to know the next statement in the "do" block. To see what the next statement will be, you need to actually evaluate the current one.
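
Concretely, "inspect up to the next lambda" looks like this (a sketch; TeletypeF, peek and step are made-up names, with Free from the free package):

    {-# LANGUAGE DeriveFunctor #-}
    import Control.Monad.Free (Free (..))
    
    data TeletypeF next
      = PrintLine String next
      | ReadLine (String -> next)
      deriving Functor
    
    -- We can always see the *next* instruction...
    peek :: Free TeletypeF a -> String
    peek (Pure _)               = "done"
    peek (Free (PrintLine s _)) = "next: PrintLine " ++ show s
    peek (Free (ReadLine _))    = "next: ReadLine (continuation hidden)"
    
    -- ...but to see past it we must evaluate the current step, feeding
    -- each continuation (a lambda) an actual value.
    step :: String -> Free TeletypeF a -> Maybe (Free TeletypeF a)
    step _     (Free (PrintLine _ k)) = Just k
    step input (Free (ReadLine k))    = Just (k input)
    step _     (Pure _)               = Nothing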

Instead of free monads, if you want very analyzable structures, look at free applicative functors.

"Applicative functors are a generalisation of monads. Both allow the expression of effectful computations into an otherwise pure language, like Haskell. Applicative functors are to be preferred to monads when the structure of a computation is fixed a priori. That makes it possible to perform certain kinds of static analysis on applicative values. We define a notion of free applicative functor, prove that it satisfies the appropriate laws, and that the construction is left adjoint to a suitable forgetful functor. We show how free applicative functors can be used to implement embedded DSLs which can be statically analysed."

http://arxiv.org/abs/1403.0749


The problem is that applicative functors aren't powerful enough to allow computations that depend on previous results. I suspect what we want is something like free ArrowChoice, but I'm not aware of any work in that direction.


I might be completely missing your point (my apologies if i am).

applicative is for specific elements of the structure. It's totally reasonable to want access to nearby values, but that requires being a little bit tricky. You could do something like a window of averages

    -- tails is from Data.List
    windows = map (take 5) . tails $ [1,2,3,4,5,6]
and then do your fmap across that

    fmap sum windows
For access to prior values, you need something that looks a lot more like a fold. In that kind of case, i'd point at traversable.

    -- mapAccumL is from Data.List; the step takes the accumulator first
    mapAccumL (\acc x -> (acc + x, acc + x)) 0 [1,2,3,4,5]
which would gather up the running sums in its second component: (15, [1, 3, 6, 10, 15])

of course, accumulator can be as fancy as you want, hashmap, set, tree, or whatever. If you can formulate a dynamic programming solution, there's generally a way to stuff that into traversable.

This is sort of off the top of my head and my haskell is a bit rusty, so this probably won't compile. But i think it captures the essence.


The trouble is that applicative doesn't let you use the result of one effect to compute the next effect. (Indeed sometimes there is no there there: Const is a valid applicative). Monads have bind, but that's a little too powerful to implement efficiently. Like I said, I suspect ArrowChoice or the like might be the missing intermediate construct.


You can still do a lot with applicatives, like parsing context-free languages... and even parsing context-sensitive languages.

https://byorgey.wordpress.com/2012/01/05/parsing-context-sen...

But yeah.

ApplicativeDo might be useful. You would be able to see the static applicative structure as far as the next monadic bind, which lets you do interesting types of optimization, e.g. Haxl.

https://ghc.haskell.org/trac/ghc/wiki/ApplicativeDo

https://github.com/facebook/Haxl


The thing is we're used to thinking in monads, and used to thinking of certain constructs as equivalent, which ApplicativeDo would break. There's a real tension between being explicit about the performance - in which case many identities no longer hold - and not. I don't know what Haxl itself does, but Fetch (which claims to be a Scala port) is kind of unsatisfactory in that regard.


ApplicativeDo solves this a little bit by using Applicative until the results are necessary for the next instruction, then opting for Monad instances.


Right, but at that point you're no longer Free. (Unless you declare the inefficient implementation and the efficient implementation to be equivalent, but if you do that then every refactor risks destroying your performance).


Monad linearizes the call graph; the effects of f(g, h) have to look the same as those of f(g(h)). In particular your interpreter can't parallelize, because it can't tell whether there's a data dependency between one effect and the next or not.


I'd not say this is a feature of functional programming. This is a feature of Object Oriented elements being included in FP languages.

These are the same things we were told would happen when using Java and C++ years ago.

You can even see similar graphics here: https://docs.oracle.com/javase/tutorial/java/concepts/object...

I remember there was another graphic in this tutorial that shared more with the image in this post, although this is the same idea. You're just hiding Objects in Objects.


The difference is that a "layer interpreter" in functional style is just a pure function that transforms values to values, so it's very easy to test in a very direct way. And we have laws that guarantee that composition works correctly (i.e. the interpretation of a composition is the composition of the interpretations). Whereas it's a lot fiddlier to confirm that an object that uses other objects behaves correctly (you have to mock the inner objects), and virtually impossible to determine whether composition behaves correctly since the object's internal state is opaque.


Thank you for writing 'functional style' and not 'functional language'. While the latter is usually, and quite naturally, better at expressing the former, there is usually a lot to gain simply by adapting the style to the issue at hand, which does not _necessarily_ imply changing languages.

What follows is not necessarily of high value; I'm simply a working programmer of twenty-odd years who's a bit weary and sad that the craft appears stuck in a rut, caught between an unnecessarily theory-less reality and a nirvana of unrealistic purity. It's, maybe, a backdrop to explain why I felt the need to thank you for your choice of words with so many words.

If we consider that every problem has a shape (a loose analogue for the set of constraints that uniquely identifies a problem), then for every shape, or class of shapes, a problem embodies, there exists an in some sense ideal language to solve it. Rarely, however, do you need to solve only one discrete class of problem in the same system, and in case of a mismatch between language and shape, it's quite common that the only solution brought forth is to change languages. Unfortunately that solution is rarely feasible, for a multitude of reasons. In the fallout of having to keep working with the same non-ideal language, the entire idea that there are multiple ways, styles, to express a solution is sadly often lost. This would not necessarily happen if we held that style matters enough that when we can't change languages, we could, and would, still change how we express the solution within our constraints. Whatever language or paradigm under the sun we work in, we need the words from them all to be able to talk about our problems, as they are either unique or already solved.


The functional mindset views programming languages almost as families or toolkits: solutions should be written in the language of the domain, and successively interpreted to the language of the machine. Thus you don't change language to adapt to a different domain; rather you write a new domain sublanguage. The challenge is if anything to avoid going too far in the other direction; lisp in particular is notorious for being so flexible that no two people's lisp styles end up compatible.

I think there's a happy medium to be found. As my programming career has progressed I've become more and more in favour of Scala for everything - I think it gets pretty close to striking the right balance between the flexibility to express any given domain and the consistency to allow programmers to collaborate.


The architectural pattern of implementing a program in a language close to the domain and iteratively transforming it into something that can be consumed in an execution environment long predates OO, and is in fact the principle behind compilers.

It came out in a different guise under Model Driven Architecture and executable specifications a few years back.

It'll pop up again in the future under some other name. The principle is as old as computing itself, though.


> At the center of the application, semantics are encoded using the language of the domain model.

Say pg/psql.

> Beginning at the center, each layer is translated into one or more languages with lower-level semantics.

ORM mapping into Java?

> At the outermost layer of the application, the final language is that of the application’s environment — for example, the programming language’s standard library or foreign function interface, or possibly even machine instructions.

Or maybe even html+javascript.

Congratulations, you have (re-)invented the layered architecture.


>> At the center of the application, semantics are encoded using the language of the domain model.

> Say pg/psql.

Minor nitpick: "pg/psql" is not the language of the domain model. The language of the domain model is stuff like "A car is considered 'All-Wheel Drive' if the drivetrain delivers power to any/all of the axles, not just a single axle."

pg/psql requires translating that domain-model language into something (roughly) like

    set @awdCars := (select * from cars where drivetrainAxles >= allAxles)


> > Beginning at the center, each layer is translated into one or more languages with lower-level semantics.

> ORM mapping into Java?

An ORM would not have lower-level semantics, would it?


Lower in this context does not mean 'weaker'. An ORM maps between the lower level relational semantics and higher level object semantics.


Nah, if it's this topic, it's more like how the AllegroCache object database maps high-level database-manipulation commands in LISP, which look similar to regular manipulation of objects, onto lower-level LISP that's more efficient, which then compiles to machine code that efficiently implements those structures on the target CPU. Typical ORMs take two things that are very different from one another and were never intended to match up, then try to force them to match. Not a great example.

Here's another for you: Lilith. Assembly is messy. So, they created a stack machine (M-code, a P-code variant) that idealizes it, compiles easily to it, and can even be implemented at the CPU level. Then they created a 3GL called Modula-2 that's closer to how they express programs but has the underlying model of M-code and outputs M-code. In theory, they could go further, as 4GLs did, to build domain-specific languages that abstract away boilerplate with code generation while staying consistent with Modula-2 style. And so on. Easier in functional languages, but I'm sure you can see the pattern.

That's what you usually get out of the DSL or interpreter approaches if using a LISP, Haskell, or imperative language designed to make it easy. Languages designed to do them well. A series of transforms with a certain amount of consistency from start to finish. These days, there's even safety techniques for the transforms that they didn't have way back in the day. :)


Only if you can express your domain well in SQL. For most domains you can't, you need a... domain specific language.


Well, he says this pretty early on:

> The onion architecture can be implemented in object-oriented programming or in functional programming.


It started in CompSci sort of in parallel with Bob Barton's B5000 designed for ALGOL, the work of Dijkstra, McCarthy's LISP, Hamilton's USL, and so on. They each came up with abstract ways to specify or implement programs for greater correctness, readability, safety, and composition. Using provers, compilers, or manual work, these would be converted into specific code at lower levels, or the final machine level, in ways intended to preserve high-level properties. Dijkstra went furthest in THE, where he did hierarchical layers that were loop-free. In parallel, there was a group trying to figure out how to compile Monte Carlo simulations on their computers in a way that was easy to specify. Their Simula was the first OOP language. SimScript, a RAND language, was a discrete-event simulation tool that came out with many similarities to the emerging Simula and OOP. The main programmer had already been exposed to ALGOL, loved the heck out of it, and took some inspiration from SimScript.

Hierarchical layering, careful interfaces, dynamic programming, functional composition, refinement, event-based simulation: all showing up before Simula was published in 1967. So OOP anywhere from depended on to simply came after many things with properties key to its effectiveness. It certainly got a powerful technique for structuring programs started, but it didn't happen in isolation, or even necessarily first, in many ways. It surprised me that the needs of Monte Carlo apps are what led to OOP, but not that ALGOL 60 was involved.

http://phobos.ramapo.edu/~ldant/oop/simula_history.pdf

https://www.rand.org/content/dam/rand/pubs/research_memorand...


The graphics are both indeed circular which makes them look alike, but the meaning is completely different.

In the OO graphic, it's representing a single-cell organism, hence the circle. This is not the complete application. It's encapsulating the state of a single process, and only through externally interacting with the organism can the state be inspected and changed, all based on time.

The circles in the FP graphic represent domain logic; the innermost circle represents your high-level business logic. This is the complete application. Each layer translates your logic to the next lower-level domain, until the physical hardware layer is reached and your program becomes something concrete and runnable. This layering resembles an onion, which is also a circle.


Them being circular has nothing to do with what they are representing. They are showing off the idea of hiding implementation through abstracting it. This is the exact same concept that both of these graphics are showing.


> Them being circular has nothing to do with what they are representing.

Please read my comment, that's what it says right after the first comma.

> They are showing off the idea of hiding implementation through abstracting it. This is the exact same concept that both of these graphics are showing.

My comment breaks down how the hiding of implementation is completely different between the two; can you point out what you think is incorrect, so we're not talking past each other? Or is there anything you need clarification on?


As far as I understand, the difference is that in OO you normally transform data as you jump from layer to layer; here, what you have assembled is a simple program that will be interpreted by the lower layer.


I didn't understand a word, and I bet 99% of people who clicked the link didn't either.


And there's the rub. I don't doubt this stuff is useful, and I'd like to know more. But it seems really hard to bring this stuff "down from the mountain" in a way that makes it easier to understand and utilize for us mere mortals.


Here are two blog posts on free monads that should be accessible for people who understand what a monad is and are able to read Haskell:

http://www.haskellforall.com/2012/06/you-could-have-invented...

http://www.haskellforall.com/2012/07/purify-code-using-free-...


Unrelated to functional programming, there is the Onion Architecture defined by Jeffrey Palermo in 2008, which I find a very nice and simple way to architect/organize a LoB application.

http://jeffreypalermo.com/blog/the-onion-architecture-part-1... http://jeffreypalermo.com/blog/the-onion-architecture-part-2... http://jeffreypalermo.com/blog/the-onion-architecture-part-3... http://jeffreypalermo.com/blog/onion-architecture-part-4-aft...


How does this architecture contrast with Rust? Has anyone built anything like this in Rust?

It makes me think of an article where the author tries to abstract the implementation of IO from the domain logic. [1]

[1] https://blog.skcript.com/asynchronous-io-in-rust-36b623e7b96...



