Let's try this advice. Say we're solving the Tower of Hanoi [1]. So we're going to have rods and disks; rods are going to hold some number of disks, and disks are going to have a size, and probably a reference to the disk underneath, and a reference to the disk on top, and maybe also weight and color... and by the end of the day, instead of a 10-line snippet [2], we're going to have Enterprise JavaBeans.
No, it doesn't mean that OOP is awful - it just means that this particular way of modeling a problem is a recipe for over-engineering.
Start with the simplest thing that works. That thing will probably be just a function. Grow it from there. If it gets too big, split it. If you find that you pass the same 12 parameters to a bunch of functions - factor out a class. If you do the same thing in a bunch of classes - abstract it out. Keep it DRY [3]. Keep it SOLID [4]. Rinse and repeat. This way you end up with a useful class hierarchy - and OOP won't be awful.
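For reference, the sort of 10-line snippet meant by [2] looks roughly like this in Python (a sketch, not the Rosetta Code original):

```python
def hanoi(n, source, target, spare, moves=None):
    """Move n disks from source to target, collecting the moves."""
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, source, spare, target, moves)  # clear the way
        moves.append((source, target))              # move the largest disk
        hanoi(n - 1, spare, target, source, moves)  # restack on top of it
    return moves

moves = hanoi(3, "A", "C", "B")
print(len(moves))  # 2**3 - 1 = 7 moves
```

No rods, no disk objects, no weight or color: the "simplest thing that works" here really is just a function, and there is nothing yet that justifies growing it.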
I like your philosophy, but I think this counterexample is misleading. The Hanoi puzzle is an old favorite, largely because it lends itself to a surprisingly short, recursive solution. If your goal is merely to churn out the steps, then an OO implementation is easily mocked as overkill, but that's because you went into the implementation with the solution already worked out. If your intention was instead to build a program with a UI for playing around with the pieces, moving discs around willy-nilly and perhaps adding more rods, then Rod and Disc classes present themselves quite naturally. In this more real-world setting I think an OO design would be quite elegant and flexible.
Yes, this. Most of these "OOP is bad" articles follow the same pattern: they show how an idiomatic object-based decomposition doesn't do well on algorithmic problems that developers rarely deal with, then lean on red herrings and slippery slopes to conclude that OOP is bad in general. FP similarly falls down on stateful interactive systems in the same way, where an object-like system emerges in the end anyway.
Who doesn't go into the implementation with the solution worked out?
I think that's not the problem. I think it's just that different things have different levels of complexity, and your representation should match what you want to do.
There is a lot of "real world" computation where it makes more sense to do things the "mathy way" rather than representing them with objects.
>>Who doesn't go into the implementation with the solution worked out?
OOP, as Alan Kay envisioned it, is about making software development a process of learning about what you are modeling. You are expected to find the solution to your problem naturally as you model it, not to have worked out a solution beforehand that you then put into code. With real-world-sized problems, this approach gives good results. But, obviously, it can fail to find some great solutions. As is the case for the Hanoi problem: you can model it very well without finding the optimal solution to it. But the model still has some added usefulness.
This is exactly right. Alan Kay said something to the effect that the reason he invented OOP is because he hated algorithms (and data structures). Indeed, one of his criticisms of Java-style OOP is that, by using setters, it "turns objects back into data structures".
I think the point being made is that we use OOP to provide a natural abstraction of the real world to the end user. In this Towers problem: it would be like building the Towers game for an end user to play with - not solve it for the user.
I think that's a little too black and white back the other way.
Don't get me wrong, I live in python, I create classes on the odd occasion I need them and not before it's definitely required.
That said, there's nothing wrong with sitting down and modelling the problem you're trying to solve on paper to try to understand what's involved. You don't need to implement it that way. Whatever you do, stop and take a moment to think it through.
It's not philosophically wrong to model the problem upfront, it's just very hard to get it right in practice.
You don't literally have to start with one function. If you've solved a similar problem before, you may have a good idea of what the code structure is going to be (this is what design patterns are all about). Frameworks may force a structure on you. There are shortcuts. But even then, I've been bitten more than once by problems that looked awfully similar to what I'd done before, only to turn out to be very different in the end.
It was personally strange to start programming in earnest with Python, and think namedtuples (or classes with __slots__) were so cool and efficient, and then learn C and just think "oh yeah. Structs."
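The two Python options mentioned, side by side (hypothetical `Point` types, a sketch):

```python
from collections import namedtuple

# A namedtuple is about as close as Python gets to a C struct:
Point = namedtuple("Point", ["x", "y"])

# A __slots__ class gets similar memory behavior, plus mutability:
class SlotPoint:
    __slots__ = ("x", "y")
    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(1, 2)
q = SlotPoint(1, 2)
# Neither carries a per-instance __dict__, which is what makes them
# cheap, struct-like records rather than full-blown objects.
```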
What's really missing though is unions with destructuring `case`. Once you start coding in a language with proper unions, it's hard to go back. In python it might look something like this:
I think this is the right way to start learning OO. The difficulty with OO modelling though is that many of your classes are abstract ideas that don't have a real-world counterpart. How to group information into classes can be decidedly unintuitive. For this type of modelling it's better to group information into classes according to their lifecycle. Normally the only way you can really know this is to build the system and refactor as the lifecycles become more obvious.
On another note: the problem with the way OO, and really programming in general, is taught at the university level is that it's always in the context of toy apps that aren't going to grow. This encourages an up-front planning process, and you can get away with it because all you have to do is make it work. You're only learning how to tell the computer what to do, not how to build a system that can adapt to, say, a changing user base. Perhaps it's necessary to learn to just talk to a computer first and, with experience, learn to build growable systems. The problem, though, is that the up-front planning process is taught as _the_ way to build a system.
I agree that the functional paradigm works very well for a problem like Hanoi. In fact, it's certainly very good for any type of operational/algorithmic challenge.
However, the question isn't just alluding to functional questions or algorithms. It's talking about designing a solution that can be used to manage multiple types of operations in a well organized manner.
Think: I want to change the colors of the tower pieces. I want to analyze them based on their properties. I want to attach meta information to them as I sort them. Perhaps I want to keep track of where they go over the course of their journey to the tower. Maybe I'm a particularly indecisive architect that wants to build 5 types of towers.
As you may imagine, it's possible to do this well in FP and in OOP. But the most likely best solution is a healthy mixture of both, to handle the true nature of problems that have verbs AND nouns.
I like your idea that what matters is what the program must do, and thus that we should think first about the functions.
I may be wrong, but once we have a 12-parameter function, it may be worth factoring out more than one class. If I do this factoring without thinking, I will create a "FooParameter" class with 12 getters and setters. It is better to stop and think about what will be easiest to read, i.e. do some abstraction work:
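For instance (all names here are hypothetical), instead of one catch-all parameter bag, split the parameters by what they describe:

```python
from dataclasses import dataclass

# Not a FooParameter class with 12 getters and setters, but two small
# cohesive groups that the parameters naturally fall into:

@dataclass
class PageLayout:
    width: int
    height: int
    margin: int

@dataclass
class FontStyle:
    family: str
    size: int
    bold: bool

def render(text: str, layout: PageLayout, style: FontStyle) -> str:
    # Three meaningful arguments instead of seven anonymous ones.
    return f"[{layout.width}x{layout.height}] {style.family}: {text}"
```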
Object oriented programming is awful, and this answer describes why. It shifts focus from algorithms to objects. As a result, you get these over-designed programs with lots of objects that have lots of methods, and the algorithm gets totally obscured.
Yegge's right. Specifically, he's right about Java, whose object-orientation many longtime Smalltalkers would frankly dispute. And yet, nouns still exist.
If the problem domain you're trying to grapple with is dominated by algorithms, an idiom that prefers verbs to nouns may be helpful.
If you're working in a very complex business domain, where the algorithms aren't especially interesting but the relationships between entities are, nouns are helpful.
Clojure and Haskell programs often make use of named types. Several OO languages have recognized the need for first(ish)-class functions. Ruby, Python, and JavaScript are all good examples of languages that have attempted some compromise between the two. They are justifiably popular.
Good OO code isn't as different from good FP code as most partisans imagine. Remember that all good code has to satisfy at least three audiences: the computer, fellow programmers, and third-party consumers (who may or may not be either of the above). They all have different needs. Don't discount the importance of nouns for that third category.
Much of OOP and FP is very similar. In fact, closures and objects are basically equivalent: each can be implemented with the other. So then the differences between OOP and FP are largely a matter of style, and arguments consist mostly of anecdotes.
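A minimal sketch of that equivalence in Python (hypothetical counter types): the closure and the object encapsulate the same mutable state behind the same interface.

```python
def make_counter():          # an "object" built from a closure
    count = 0
    def increment():
        nonlocal count       # the closed-over variable is the state
        count += 1
        return count
    return increment

class Counter:               # a "closure" built from an object
    def __init__(self):
        self.count = 0       # the attribute is the state
    def __call__(self):
        self.count += 1
        return self.count

f, g = make_counter(), Counter()
assert f() == g() == 1
assert f() == g() == 2       # callers can't tell the two apart
```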
But OOP has some concrete advantage over FP, and vice-versa. Specifically, OOP makes it easier to add variants. If, at some point during the course of programming, you decide you need a new variant, it's simple. You make a new subclass. On the other hand, FP makes it easier to add operations. You just write a function. So you wind up with a matrix:
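A stand-in for that matrix, using shapes rather than a real domain (hypothetical example):

```python
import math

# Variants x operations matrix:
#
#            area()      perimeter()
# Circle     pi*r^2      2*pi*r
# Square     s*s         4*s
#
# OOP fills the matrix row by row: each class owns one row, so a new
# variant (row) is one new class, but a new operation (column) touches
# every class.

class Circle:
    def __init__(self, r): self.r = r
    def area(self): return math.pi * self.r ** 2
    def perimeter(self): return 2 * math.pi * self.r

class Square:
    def __init__(self, s): self.s = s
    def area(self): return self.s ** 2
    def perimeter(self): return 4 * self.s

# FP fills it column by column: each function owns one column, so a new
# operation is one new function, but a new variant touches every function.

def area(shape):
    if isinstance(shape, Circle):
        return math.pi * shape.r ** 2
    if isinstance(shape, Square):
        return shape.s ** 2
    raise TypeError(shape)
```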
This has been independently observed by several programming language researchers. Then the question becomes, how can we make both easier at the same time? This is the Expression Problem. The Expression Problem is solvable in several existing programming languages, but the solution is often ugly and convoluted. So I like to add an additional restriction: how can we make both easier without lots of ugly, confusing boilerplate? Things like multimethods and typeclasses get us part-way there, I think.
While I love FP, it's important to not write-off OOP. There are some problems which are more naturally solved by OOP. Good ideas are good, regardless of associated paradigm. Then, OOP should not be an object of disdain, but an object of inspiration: how can we take all of the good parts and remove all of the bad parts? How can we advance the art and power of programming?
If we want to see improvements, we have to keep an open mind.
After a recent discussion on this topic on the Haskell Reddit I posted that I've begun to the see the Expression Problem in terms of data and codata [1].
In very brief, data tends to be completely transparent in its construction, finite (or tending toward finiteness), possessing obvious structural equality, and can be easily transformed in myriad ways via induction.
Codata on the other hand comes from the other side. It is completely transparent in its destruction (observation, think object properties), tends to be infinite (think how the process of a state machine is infinite), possesses obvious entity identity, and can be easily created in myriad ways via coinduction.
In most programming settings these are just perspectives, as they can easily coincide, but you can observe the difference when you program in a strongly normalizing language like Agda.
I feel that OOP has built most of its abstraction on the assumption that codata-oriented (object-oriented, kind of) modeling works well. For this reason, it's easy to talk about black-boxed domain models which are only predictably observed and classified, but it gets hairy to talk about equality of two trees---you're forced to think about pointer equality even when structural equality might be a more natural fit.
FP doesn't necessarily pick one side or the other, but it certainly makes data more accessible (algebraic data types being as terse as they can be) leading to the Alan Perlis "100 functions" quote.
Greatly powerful things can come about when the two sides are combined or emphasized for their strengths. In particular, the `pipes` ecosystem being built in Haskell right now straddles the line by being perhaps best modeled as a hybrid data/codata type. This means that you can both somewhat easily construct pipes inductively while also observing the infinite process (codata) they represent.
I don't think anything I'm saying is particularly new to researchers, but I personally have been finding it incredibly instrumental when talking about how OOP and FP differ.
Take the pedagogical variants/operations example: a word processor, where you have different kinds of items on the page (pictures, paragraphs, etc) and different operations (drawing, layout, etc).
In an FP language, if you want to add a new operation, say spell check, it's easy. Just write some functions operating on the variants. In OO, you have to break your algorithm down into methods and add a new method to each class.
Now, say you want to add a new variant, say embedded video. In an OO language this is easy, just add a new subclass and implement methods for layout, drawing, spell-checking, etc. However, in an FP language, it's still easy. Add the new variant, compile, then fix the places where the compiler complains about non-exhaustive pattern matches. I posit that fixing these errors takes no more work than it does to write the methods for the OO case.
The big benefit of the FP organization is that if you want to understand or rework the layout algorithm, all the logic is in one place. I think losing this benefit totally outweighs any benefit gained in making it marginally easier to add a variant.
Note that OO programs often use the visitor pattern, which is just a very awkward way of expressing a switch statement that checks for exhaustion.
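The comparison is easy to see side by side; a minimal visitor and its switch-statement equivalent (hypothetical document nodes, a sketch):

```python
# Visitor pattern: double dispatch to recover what a switch over the
# variant would express directly.

class Paragraph:
    def accept(self, visitor):
        return visitor.visit_paragraph(self)

class Picture:
    def accept(self, visitor):
        return visitor.visit_picture(self)

class SpellCheckVisitor:
    def visit_paragraph(self, node):
        return "checking text"
    def visit_picture(self, node):
        return "nothing to check"

# The same logic as a plain switch on the variant:
def spell_check(node):
    if isinstance(node, Paragraph):
        return "checking text"
    elif isinstance(node, Picture):
        return "nothing to check"
    else:
        raise TypeError(node)  # the "exhaustion check", done by hand

assert Paragraph().accept(SpellCheckVisitor()) == spell_check(Paragraph())
```

The visitor version buys a compile-time exhaustiveness guarantee in statically typed languages (a new variant must implement `accept`); the switch version keeps the whole operation readable in one place.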
It's not that adding new variants in FP is completely hard, just that it's harder than in OOP. When you add a subclass, all its operations are confined to one section of code, one module. In FP, you'd have to modify perhaps several different sections of code, possibly in separate modules. Likewise, it's not prohibitively difficult to add operations in OOP, it's just easier in FP.
In the word processor example, OOP makes it easy to see how say the image object participates in layout, rendering, printing, etc. However, it makes it much harder to see how the layout algorithm or the rendering algorithm or the printing algorithm works. FP has the opposite set of strengths and weaknesses. Easy to see how an operation works, hard to see all the ways a variant is used.
But it's not realistic to treat both cases as being of equal weight. In practice, your work will be doing something like fixing the layout algorithm or optimizing the rendering algorithm. Being able to easily grok an operation is much more valuable than being able to easily grok the behaviors of a particular variant.
I agree and disagree. FP's strength is you can layout your code in whichever way makes most sense for your project: by what they operate on or by what they do.
Grouping by operation makes sense until a certain point, after which you have to subdivide. If I had an FP based word processor example with 3 operations but 60 items I'd probably group by items, not operations.
Imagine if FP forced every append-to-collection-like operation into a single source file. It would be madness.
I see what you're saying, and I agree. Just to be sure:
The most complicated parts of programs are typically the operations. They're also the most likely to need additions, changes, or optimizations. Thinking about it now, I think most programs are far less likely to need new variants than operations, which is something I hadn't considered.
I still contend there's value in trying to make them both easy (for those relatively rare times when they're needed).
I wouldn't say closures are what make functional programming functional. I tend to think like that too, but it's probably a bias from my Lisp/JavaScript background. Closures are orthogonal both to OOP and FP.
IMHO functional programming is just:
- Higher order functions
- Referential transparency
- Immutability
Closures are just a convenient way of encapsulating state avoiding global variables.
Similarly, I wouldn't say objects are what make object-oriented programming object-oriented. It's a different approach to organizing and structuring a program. Objects are just one part of that approach, just like closures are just one part of the functional approach. The fundamentals of the two approaches are more similar than people often make them out to be, even if the approaches appear rather different.
What are the fundamentals of OO? Those principles of immutability and referential transparency seem to require writing pretty unidiomatic code when working with OO languages.
Rather than saying the fundamentals are more similar than given credit for, perhaps I should say the fundamentals overlap more than given credit for.
I'm having a hard time finding a concrete example at the moment, but I've seen fragments in Java codebases that, if you squint a little, seem almost functional. Of course, the code was very, very ugly, but semantically it was quite functional.
I'm not trying to defend OOP. My view is that the benefits of FP outweigh the benefits of OOP. If we could find a way to import the benefits of each into the other, we'd have the best of both worlds. (Of course, then everyone would find something new to argue about ;) ).
It isn't harder to add operations in languages that don't confuse class and object, e.g. if you have a dynamically typed language you could just add whatever operation you needed for a specific object without worrying about hierarchies. The question is whether that just replaces one problem (brittle hierarchies) with another (method-not-supported errors).
I failed to mention that the expression problem, as originally formulated by Wadler[1], is in the context of typed languages. One of the arguments in favor of untyped languages is that they are indeed more flexible that way, but at the expense of type safety. As the science of type systems advances, typed languages are slowly becoming more and more flexible (and, sadly, difficult to learn).
I should have read your original post more closely, hadn't even heard about the Expression Problem. Makes more sense now after a bit of reading - I guess Haskell's notions of static typing are quite different from that of the C family of languages.
There's cargo culting on both sides of the question.
Because someone heard pg say he's not overly fond of overly OO code (which he has said in the arc design document), or some other well-known thought leader on programming (like Yegge), they then form a particular dislike of all OO code and even the idea of OO, itself. It's not rational or very fully formed for most of the folks who hate OO. Likewise, there are folks coming from (maybe?) a Java or C++ background, who feel that strict OO is the only sane way to approach designing a program.
I'm not going to try to give examples of good or bad code in either category...but, will point out that Haskell is not an OO language (in the Grady Booch sense of OO, though you can encapsulate methods in data because it has first class functions), but there's an awful lot of beautiful and correct Haskell code out there. Likewise many other functional, Lisp, and ML-derived languages. OO design can obscure meaning, but there's no reason to get reactionary about it. It's one useful tool; it certainly works great for UIs and some types of API.
I'd like to try pushing the Haskell-is-an-OO-language bit a little further. I think Haskell98 is lacking, but with existential types you can model process entities in a way that I think is often underappreciated. As an off-the-cuff example, consider a counter. A simple approximation might be to say that (Int, Int -> Int) models a counter because it models the state of one, but it also fixes the implementation and fails to encapsulate.
But we can do better with existential types
    -- (with ExistentialQuantification and RecordWildCards)
    data Counter =
      forall x . Counter { _step  :: x -> x
                         , _view  :: x -> Int
                         , _state :: x
                         }

    -- GHC doesn't allow record-update syntax on existentially
    -- quantified records, so step rebuilds the Counter instead:
    step :: Counter -> Counter
    step (Counter {..}) = Counter { _step  = _step
                                  , _view  = _view
                                  , _state = _step _state }

    view :: Counter -> Int
    view (Counter {..}) = _view _state
At this point, we're passing around the internal state as "some type x" which can never be inspected, achieving encapsulation. We can create general functions that manipulate Counters.
We also now must start thinking of our "objects" as entities because we cannot compare their internal state for equality (unless we give it an Eq constraint). This begins to quickly demand we keep some notion of state as we need to model "pointer equality" pretty quickly.
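The same point shows up in Python, where the default `==` on objects is already pointer equality (a sketch, not a translation of the Haskell above):

```python
class OpaqueCounter:
    """An 'entity': state hidden by convention, identity by pointer."""
    def __init__(self):
        self._n = 0
    def step(self):
        self._n += 1
    def view(self):
        return self._n

a, b = OpaqueCounter(), OpaqueCounter()
a.step(); b.step()
# Structurally the two counters look the same...
assert a.view() == b.view() == 1
# ...but without a custom __eq__, equality is entity identity:
assert a != b
assert a == a
```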
We can even do HAS A subtyping. Lenses make this very powerful.
    data TwoCounters =
      forall x . TwoCounters { _c1    :: Lens' x Counter
                             , _c2    :: Lens' x Counter
                             , _state :: x
                             }
There's no doubt that this is an uncommon technique in Haskell, but it's worth stating that there are a LOT of similarities between codata modeling like this and typeclasses.
Finally, I began to follow a lot of this due to a middling popular library `folds` [1] which describes very generic ways of traversing data structures and accumulating some kind of result state. Basically all of the types dealt with in this library (L1, L1', M1, R1, L, L', M, R) are existentially quantified codata "objects" like I described above. This makes for very general code.
By the way, for those who aren't familiar with Lenses, it's not a bad thing to think of them as getter/setter pairs (which should be a little unsurprising at this point). They're first class, more principled, and more generalizable than your typical getter/setter pair... but my use of them here is far from unintentional.
I think LLVM is very readable considering the complexity of what it actually does. It uses C++, but for the most part it uses classes to encapsulate names or as dumb data structures. E.g. an analysis might be implemented as a class, but it'll just be one or two classes with a bunch of non-virtual methods operating on dumb data. The dumb data is encapsulated in objects, but the objects don't really have methods that "know" how to do anything other than manipulate the encapsulated data. All that intelligence is in functions that operate on the data.
It's very different than typical Java code, where an object will "know" how to do non trivial things to itself, and the pieces of an algorithm end up spread over a dozen classes in a dozen different files. Such code will often use elaborate class hierarchies breaking down pieces of an algorithm into objects that interact with virtual method calls, all because of a crippling fear of switch statements. This cuts off the nose to spite the face, rendering the code incomprehensible as a result.
I really don't see myself implementing my business logic with algorithms only... OO makes my code verbose, but I can read it and understand what goes on. Not everything can be boiled down to an algorithm...
I don't find that to be the case. As with anything used judiciously, OO approaches are just a tool, not a way of life. The same could be said of any approach where one applies the same patterns over and over without regard to the problem being solved.
I think this seems a little too existential. What is more important, that you classified the right nouns and verbs in your Paws and Dogs, or that your code works?
In my view OOP is all about sane interfaces between components. In your internal code, however, you should feel free to let your implementation details and data structures drive the flow, and not con yourself into writing OO spaghetti: the kind of code written by inexperienced programmers who "heard somewhere to use classes", can only think in classes, and whose abstractions serve to obfuscate rather than clarify.
After years of software design, I can say with a high level of confidence that classifying the nouns and verbs properly is a prerequisite to quality OO code (from both a functional AND longevity standpoint).
If you fail to do that up-front analysis, you may end up with functional code. But, the likelihood that it is readable, maintainable, extensible, etc is much lower.
This isn't to say that all good code is OO. Only that writing good OO code requires you to define the objects and their interactions well.
Actually, only #1 and #2, "simple statements about what these objects will be doing", are relevant. The 'nouns' points are misleading at best. OO is all about behavior ('services'), not about data. A class doesn't encapsulate data, it encapsulates state. That's a big difference that many OO aficionados don't realize.
Another source of confusion is that in some OO languages 'everything' has to be a class. A Java class in many cases isn't a class in terms of OO. Java Beans e.g. certainly are not OO classes.
When I first used OOP, or any object orientation, it was in the early days of the public internet. There weren't nearly the resources available then that there are now, especially for a hobbyist. So when I learned the syntax (I believe it was early C++), I had nothing else to go on. I naturally leapt to using objects to describe and build new data types. It made me think about what data I needed, then how the data needed to be operated on. It wasn't until I started taking courses that I heard the noun approach... and it just seems completely backwards. It isn't about modeling your data; it's about the conceptual items you're working on. I still think OOP is a good approach if you ignore all of the "best practices" and instead use it to build custom data types that simplify your algorithms. It's part of what I really like about Erlang records (even though they're a bit annoying to use) and Scala in general. You can build custom data types without having to just compose lists and maps, but you get a lot of the benefits of functional programming.
I really don't understand why so many programming classes, tutorials, guides, and so much common wisdom aren't "data first" in their style of design.
It is utterly ill-advised. Software design is about abstraction: making code modular, flexible, and extensible. It doesn't matter whether it is OOP or FP; both approaches have their own advantages and limitations.
The dogmatic answer is misleading. It might be OK to introduce OOP this way to new programmers, to open a door for those who are only familiar with procedural programming. But for any experienced programmer who still believes in this dogma, I have to say this is not the right career for them.
This is almost word by word what I was taught about OOP in Introduction to Programming back in 89. We used Modula-2, but it was already easy to connect with the way we had been writing games in assembler. Later on, languages built for OOP like C++ or Java, with all the complexity and new features, shifted the focus of teaching programming from the problem domain (what I need my code to solve) to the language domain (what code I can write), and then Patterns arrived, and things quickly got hairy and over-engineered and big.
It's interesting to note that the question was "how to design a class", which to me sounds like a question about static structure, e.g. what data members and methods will the class have. Most of the discussion on this and the SO page mixes in OOD, OOP with the original question and then throws in some inappropriate OO vs. FP tangents.
It's important to remember that there's a lot of, for lack of a better word, "class-oriented design" which typically tries to model the static structure of the physical world or some artefacts of the programming environment.
In a completely separate world there's what I would call OO design and programming which is more concerned about the interaction of actual objects aka. instances of classes. Coming from the statically typed world the distinction is often unclear but for the original Smalltalkers and other OO purists object orientation was never about classes.
Care to cite one of those "knee-jerk" responses that you are talking about?
If someone is nagging here, I assure you it's not me. I just found it curious that they weren't as rigid as they normally are nowadays about these questions. Whether this one was an exception, or they got harsher over time, is what I wondered.
This is the first reply to me that I can recall here on Hacker News where someone over-interprets something I said (in a bad way). I don't think I was complaining, but if I sounded like it, I apologize; that is not the case (it's actually quite the opposite: I love these more open questions).
Thanks :-) Your answers on some of those other questions really helped me get into this type of analysis. I'm still working on this dogs project (https://github.com/ivoflipse/Pawlabeling); the automatic labeling of the paws still needs a better solution!
1. First, write some use case scenarios/specs in order to get a better understanding of the problem (for libraries this is README.md; for user-facing programs it's the user's problem and the solution's workflow).
2. Write the code the way you would ideally want it to work, top-down. Imagine that you're in fantasy land. Don't worry about it being wrong or impossible to do that way. That will be corrected later.
Use classes to express blueprints for tiny worker machines that work on data, not the data itself. Use classes to simulate entities.
A user is not the user's personal information; that's a data structure. A user class makes sense in a testing framework where you want to simulate the process of a user visiting a website.
A button is a worker machine that can detect a set of inputs and call attached functions. A list is a worker machine that can keep items and update the screen based on its scroll position. A stream is a worker that can be customized to process incoming data packets in a certain way, and its output can be connected to another stream.
Think of classes as tiny single-purpose computers or simulators.
3. Express details about the desired outputs of that top-down code as tests.
4. This is a good time to write all the data structures to represent your data. You can use relational modeling here even if you're not using a relational database (or any kind of database), but you can also use pointers/references instead of foreign keys.
Model the data structures for all data (including computed data), not just input or stored data.
You can also do this before step 2. but in that case you will be tempted to write your code to operate directly on the lowest level data structures, instead of making it look clean and simple.
5. Try to make the tests pass.
6. If it's impossible to implement, or you realize it's not a good idea to implement it that way, tweak the test a bit, then change your code and data structures as you get a better understanding of the problem.
7. Once you're satisfied with the test, you can stop.
8. If the future of the program requires you to adapt it, keep working like that over the existing code. The tests give you some reasonable assurance that it still solves the old variant of the problem, while new tests will ensure that the code sanely solves newer problems too.
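The "worker machine vs. data structure" split from step 2 might look like this (all names hypothetical, a sketch):

```python
from dataclasses import dataclass

# The user's personal information is a data structure...
@dataclass
class UserRecord:
    name: str
    email: str

# ...while a class simulates the *process* of a user doing something,
# e.g. a toy simulated user for a testing framework:

class SimulatedUser:
    def __init__(self, record: UserRecord):
        self.record = record
        self.visited = []        # the worker's own state, not user data

    def visit(self, page: str):
        self.visited.append(page)
        return f"{self.record.name} visited {page}"

u = SimulatedUser(UserRecord("Ada", "ada@example.com"))
u.visit("/home")
u.visit("/login")
assert u.visited == ["/home", "/login"]
```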
The advice being given is more applicable to problems which have some real state inherent in the problem, e.g. network connections, UI state or databases.
In numerical programming, state is less obvious because it depends on how you do the calculation.
In numerical programming, I tend to use a functional style that makes use of objects. E.g. instead of
    y = f(params, x)

I would write

    y = my_object.f(x)
where my_object encapsulates params. This makes sense, as the only function that needs to access params is f. On the other hand, Python has no private members anyway, so in Python I would probably use the first way.
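Both styles side by side, with a polynomial's coefficients standing in for `params` (a sketch):

```python
# Functional style: parameters passed explicitly.
def evaluate(coeffs, x):
    return sum(c * x ** i for i, c in enumerate(coeffs))

# OO style: the object encapsulates the parameters, and the method
# is the only code that touches them.
class Polynomial:
    def __init__(self, coeffs):
        self._coeffs = coeffs   # "private" only by convention in Python

    def evaluate(self, x):
        return sum(c * x ** i for i, c in enumerate(self._coeffs))

p = Polynomial([1, 0, 2])       # 1 + 2*x^2
assert evaluate([1, 0, 2], 3) == p.evaluate(3) == 19
```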
I liked this a lot, but do we think all those steps are needed? I feel like it can definitely be simplified, and that this is even scary for people new to Python and OOP.
I suspect that you have some amount of experience that allows you to just do many of these things intuitively (possibly in your head without writing them down). For a beginner, I think it can be helpful to have a detailed procedure to follow that takes them from start to finish. That way they can focus mental energy on the problem at hand while not having to worry that they may be working through the problem using a method that is inefficient or that may never get results.
I really don't think it's the physical process, but the kind of detailed analysis of your problem and the requisite thinking to form a solution that the described process requires.
Good OO programmers do some internal version of this process every time they design a class. For those that have been at it awhile, this process is probably so internalized they hardly even notice it. The OOP equivalent of muscle memory.
For example, the value in "underline and review the nouns" isn't underlining and reviewing the nouns, it's a physical manifestation of an important step in the OO development process. Namely, helping to identify requirements.
Have I ever gone through such a tedious process when designing a class? Of course not, but the steps outlined in the answerer's post take place in some form internally every time I need to design a new class.
[1] http://en.wikipedia.org/wiki/Tower_of_Hanoi
[2] http://rosettacode.org/wiki/Towers_of_Hanoi
[3] http://en.wikipedia.org/wiki/Don't_repeat_yourself
[4] http://en.wikipedia.org/wiki/SOLID_(object-oriented_design)