It's a common pattern in many "enterprisey" Java applications I've been part of, where it was called a "service layer" (coarse grained, fine grained, etc.). This smells like old wine in a new bottle.
EDIT: I have to give credit to the author for bringing this into Rails apps; he's right that Rails encourages you to think in terms of controllers, models, and helpers.
Well, the use case for wine today is much the same as it was in ancient Rome.
Someone needs to curate all of the pitfalls of rapid dynamic development environments into one book. This would be of tremendous benefit to developers farsighted enough to learn from history.
I find that web frameworks focus so much on CRUD and not enough on business logic. We usually end up with either fat models or fat controllers, neither of which is ideal or practical. Symfony2 (php), however, offers something they call Services, which I've been using to fulfill the role of the OP's Interactors and so far I'm pleased.
I prefer fat models to fat controllers because methods in the controller are much more difficult to reuse than in the model.
When models get too fat, I usually create modules that break up the bulk of the model into smaller chunks. Meaning, for example, if it has lots of validations, then maybe I'll move those validations over to a module and then extend/include the module back into the model. I haven't needed to use these modules in any other class, but less code in one place is usually easier to read than more code. If the language you're using isn't Ruby, a module is basically a means of doing multiple inheritance (a mixin).
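A minimal sketch of what I mean, assuming ActiveRecord and a hypothetical UserValidations module (not code from any real app): the validations move out of the model file and get included back in, so behaviour doesn't change, the model file just gets shorter.

    # Hypothetical module holding validations extracted from a fat model.
    module UserValidations
      def self.included(base)
        base.class_eval do
          validates :email, presence: true, uniqueness: true
          validates :name,  presence: true, length: { maximum: 100 }
        end
      end
    end

    class User < ActiveRecord::Base
      include UserValidations
    end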
I sometimes find the contrary though, that some developers can be very reluctant to acknowledge that a particular problem calls for a CRUD app, and they instead spend countless hours 'domain modelling' when there's really no problem domain beyond 'some text we can edit'.
I second this. Symfony2's use of Services is refreshing, and I feel that having been made aware of this practice, I now write better software. While reading the article, I kept thinking "this sounds just like a Service class."
What is preventing you from creating a layer of "services" or whatever-you-want-to-call-it in between the controllers and the models in those web frameworks?
I don't think there's anything preventing you from using Services in other (PHP) frameworks, it's just that Symfony2's Dependency Injection Container component makes it very simple and straightforward. I think it'd be trivial for other PHP 5.3-compatible frameworks to adopt this component to facilitate using Services themselves.
Perl's Catalyst web framework encourages you to do it slightly differently:
You develop libraries for your business logic and try to make them reusable (maybe even open-sourceable on CPAN for other developers to use). These libraries include their own tests, which obviously don't require you to fire up your main web app and only need to run once, when the library is installed on your local machine.
Whatever doesn't fit well into standalone libraries goes into the model (aka fat models). Again, models are developed as standalone pieces that can be used from command-line one-liners, the web frontend, or anything else. Tests for these are often part of the web application, but testing the models should not require firing up the web app itself.
And the last piece: controllers and views. These are an integral part of the web app, so testing them requires firing up the web app code. But the controller code is very thin, since it just prepares and routes calls into the model. Views are usually just an adapter, often default-scaffolded, for an existing templating module on CPAN.
I bet the same principle can be applied to the Rails framework, but it's the community around a framework that encourages you to do things in a certain way.
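A rough Ruby analogue of that layering (Billing and ChargesController are invented names, not from any real project): the business logic lives in a plain library that can be exercised from tests or a one-liner without booting the web app, and the controller just prepares and routes the call.

    # Plain-Ruby business logic, no web framework required.
    module Billing
      def self.charge(account_id, amount_cents)
        raise ArgumentError, "amount must be positive" unless amount_cents > 0
        { account_id: account_id, amount_cents: amount_cents, status: :charged }
      end
    end

    # Thin controller: parse params, delegate, render.
    class ChargesController < ApplicationController
      def create
        render json: Billing.charge(params[:account_id], params[:amount].to_i)
      end
    end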
So you take your service layer, decompose it into a half-arsed version of the command pattern, and invent an entirely new term for it, just to add value. Got it.
Thanks for the links, btw, both here and in the post. Watching the video now. It's interesting because when I first started working in functional languages like Clojure I struggled very much with how exactly one went about modelling an app, as opposed to the OO way. When I realised that the place to start was the service layer (and indeed individual use cases), things became much clearer.
Yeah, I've met people calling them 'Tasks' too. If it _were_ a command object - if you were auditing logins, for example, and wanted to store the login command in your history - I'd see the need for its existence and its instantiation.
As it is, it's just the sort of thing OO programmers seem to create because they find nouns comforting.
And lordy, so many nouns. Looking at the whole pattern, with DTOs in and out of Interactors, Presenters to format the DTO for the view... it never ceases to amaze me the amount of work people will do to supposedly save themselves some work. The amount of architecture people will put in place to 'put off decisions about architecture'.
Aren't we talking about the "command pattern" here? There might not be an undo function, but isn't encapsulating discrete business operations its responsibility? Maybe it's more fine-grained than a use case, but the sample code bears a striking resemblance to the command pattern (a rough sketch of what I mean is below).
Although I question demonstrating the principle with a login, given that those tokens are usually application-specific and have little to do with business rules.
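Roughly the resemblance I mean (class and collaborators are invented here, not the article's sample): the use case is an object with a single entry point, the same shape as a command object wrapping one discrete operation behind #call.

    # Illustrative only: a use-case object shaped like a command.
    class LogUserIn
      def initialize(user_repository)
        @users = user_repository
      end

      def call(email, password)
        user = @users.find_by_email(email)
        return :invalid_credentials unless user && user.authenticate(password)
        user
      end
    end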
Heh I wish, but no, it's pretty rare to see this at least in the Rails world.
As for the login token, I feel you may be defining "business rules" differently than I am. I'm using "business rules" to mean "the rules by which the application functions", and one of those rules is that you need an authenticated user to do things. The term is an effort to use a word or phrase that intentionally leaves out the framework you're using, but yes, it can get confusing depending on which definition you're used to. The other phrase, "use case", can at times be too limiting, but I don't know of a better term for what I'm talking about.
> The fundamental problem with almost every Rails project (and I’m sure in other frameworks as well), is that there is no direct codifying of the business rules and use cases of the application.
This is a damn shame. Why do I say so? Because the exact same thing happened years ago in Smalltalk projects. It's easy to start and iterate quickly in a dynamic OO language and the technical debt sneaks up on you very quietly. The next thing you know, there's a quagmire of convoluted and deeply nested conditional logic.
If it was your business to curate lots of painted canvases, then you'd have an orderly and logical filing (or search) system for keeping track of those. If it's your business to curate lots of neat functionality embodied as pieces of code, then you need an orderly and logical system of keeping track of those. It's amazing how many large projects devolve into a big pile of stuff in a class for every screen.
"Enablement" or "security" or whatever you call the framework that lets particular users do particular things at particular times was one of the big wins for Smalltalk projects.
If someone asks you why user X can't do Y on screen Z, and the only way you can know this is for someone to just happen to remember or to painstakingly reverse engineer code, then you have a kind of technical debt.
Beware if your system just consists of putting in a breakpoint then reading code. There is no clear demarcation separating that from the quagmire. (The solution is to have some kind of consistent idiom/framework/something that people can learn to parse, and a way of reorganizing recurring ideas so they can be reused.)
Isn't this simply the cornerstone of functional testing?
At that level your tests should be executing business test cases. If a functional test fails, there is either a corresponding unit test failure or a failure in the interaction between the methods rolled up into the business logic (the functional test).
Functional testing is related, but it's not really the point of the article. In the context of the article, functional testing is actually one logical step above his Interactor: the functional test would end up effectively testing the interactor code (which includes all the model code, etc.) plus the controller code, to make sure the session is properly set (i.e. now that I'm logged in, was I properly redirected?). The latter is outside the context of the "business logic" of logging the user in.
The point of the article is in organizing the business logic of use cases into their own objects—"business logic" being defined, presumably, as anything non-web and non-persistence related. This is done in order to avoid the spaghetti that real world "thick model" applications tend to accrue; or similarly the spaghetti that some developers leave in controllers.
There has been a spate of articles on this topic in recent years: service objects, interactors, DCI, etc. One or more of them is probably good advice for large projects. :)
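A minimal sketch of what one of those use-case objects might look like, with invented names and collaborators (nothing here is from the article): everything web- and persistence-specific stays outside, so the object can be tested with plain doubles and no running app.

    # Hypothetical use-case object; collaborators are injected.
    class PlaceOrder
      def initialize(order_repository, payment_gateway)
        @orders   = order_repository
        @payments = payment_gateway
      end

      def call(customer, items)
        order = @orders.build(customer: customer, items: items)
        @payments.charge(customer, order.total)
        @orders.save(order)
        order
      end
    end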
"LogUserIn" is not a class; classes are most often nouns and actions are typically verbs. A "User" class (or "UserController" class) with a "login" method is more appropriate.
You're actually wrong. This is a bastardisation of the command pattern. LogUserIn is a command. The verb/noun thing applies to a different class of object (your domain).
The article sucks pretty badly to be honest but the foundations are solid.
A much better solution is to use the command pattern over a rigid domain model. The rigid domain model encapsulates your business domain and logic and your commands describe different ways to mutate the domain.
This could be classified as CQS, or CQRS if you use a bus to deliver your commands. (A rough sketch of what I mean is below.)
Unfortunately, and I'll probably get flamed for it, but Rails is a stinking turd when it comes to architecture. I've never seen anything which isn't trivial CRUD done without causing uber spaghetti. Also, to do proper domain modelling you need a more powerful ORM such as SQLAlchemy or Hibernate.
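A hedged sketch of "commands over a rigid domain model" (all names invented for illustration): the domain object owns its invariants, the command only describes one way to mutate it, and in CQS style the command returns nothing you would query.

    # Domain object: enforces its own invariants.
    class Account
      attr_reader :balance_cents

      def initialize(balance_cents)
        @balance_cents = balance_cents
      end

      def withdraw(amount_cents)
        raise "insufficient funds" if amount_cents > @balance_cents
        @balance_cents -= amount_cents
      end
    end

    # Command: describes one mutation of the domain, returns nothing (CQS).
    class WithdrawFunds
      def initialize(account, amount_cents)
        @account, @amount_cents = account, amount_cents
      end

      def execute
        @account.withdraw(@amount_cents)
        nil
      end
    end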
> I've never seen anything which isn't trivial CRUD done without causing uber spaghetti.
It's easy to get a lot of followers by creating a framework that lets a new user create something small very quickly. Keeping it well organized as time goes on is much harder. It has been this way for decades, and still people haven't learned.
And I'd like to clarify your point here. People haven't learned because there isn't enough teaching going on. It's only been the past 4 months that I've even begun to realize just how wrong I've been, and that's only because I found a mentor who pointed me in the right direction.
There's a huge problem of how to teach people good, responsible software development, and this article is a part of my attempt to help educate people towards even knowing about better software practices. People aren't going to just suddenly get it, they have to learn from somewhere.
There's plenty of teaching going on but people have stopped listening to people who know what they are talking about and have started listening to all the trendy marketing out there.
No, I'm pointing out things that were wrong back in the late 1990's and early 2000's are again wrong just for different languages. It's an opportunity, in other words.
We had formalised design patterns and architectures back then.
Yes, and many of them described in books in particular languages. Not every language community that could have benefitted had the right book written using their language.
Like I said, a recurring condition and an opportunity.
Why? That's how BCrypt, for one, works: you create a User#password method which returns a Password object (initialized from the contents of the User's actual #password_hash field or somesuch). That Password class has an overridden equality comparison method, which hashes the RHS String to test against itself.
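A minimal sketch of that pattern with the bcrypt gem (the plain-Ruby User class here is just for illustration, not anyone's actual model):

    require 'bcrypt'

    class User
      attr_accessor :password_hash

      # Writing a password stores only the hash; reading it wraps the hash
      # in a BCrypt::Password, whose == hashes the RHS String to compare.
      def password=(new_password)
        self.password_hash = BCrypt::Password.create(new_password)
      end

      def password
        BCrypt::Password.new(password_hash)
      end
    end

    user = User.new
    user.password = "s3cret"
    user.password == "s3cret"   # => true
    user.password == "wrong"    # => false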
No, because you're giving opinionated advice with weak supporting evidence. So the reader is left with two things by which to decide whether it is good advice:
1) your reputation in the community. I for one have never heard of you, so that leaves
2) the code quality of the examples that you posted. A reader could infer that this may be good advice based on the fact that you appear to generate good code.
In this case your code is clearly not very well thought out, so why would the reader put trust in your unsubstantiated opinions?
I'm sorry, at what point did I tell people to blindly do something? I'm offering ideas, suggestions, help to those who are and who've been in the same situations I keep finding myself in (utterly painful Rails code). It's unfortunate that you feel no-one should even attempt to contribute to the community unless they've already contributed to the community, and that you feel that every bit of code should immediately be complete and perfect.
In case you weren't aware, samples that don't require a ton of context are a particularly hard problem for informational posts like this. If you have feedback on the ideas I presented, I'd be glad to hear it, but I still fail to see how nit-picking an obviously early, in-progress application proves anything.
At what point did I accuse you of telling people to blindly do something? I accused you of not having good support for the opinions in your article. That is all.
I, too, keep finding myself in utterly painful Rails code. Now, if I may use your metaphor: Most of it is due to the blind leading the blind, per se. Programming novices discover some new rails thing, ruby thing ("method_missing" is the worst offender), or even a new programming pattern such as Command, and think it's the most incredible thing they've ever experienced, so they write a blog post about it. Other programming novices read the blog post and then go off on a rampage, overusing that concept and writing all the shitty rails code that I am currently maintaining.