The suggestion is that thinking in OO terms makes you think about architectures instead of programs, and the result is to move you away from thinking about the problem and the solution and towards thinking about the program's organization. This is always assumed to be a good thing, especially when accompanied by the usual examples of writing programs for teams of programmers with varying levels of skill.
But if we grant that non-OO is better for someone learning to program, why wouldn't it be better for someone reading a program for the first time?
EDIT:
Thinking further about this, I am not against the idea of design considerations that should be kept from the beginning programmer. But OO isn't really a design consideration, it's a metaphor.
If it really "worked," then new programmers would be looking all confused trying to write a Towers of Hanoi program, and you would explain, "Think of each tower as its own thing. What can it do? What does it have?" And slowly you could tease out a program by designing objects from the ground up.
But if it actually doesn't work to teach programming using objects, if you teach programming without objects and introduce them as an 'advanced' subject, then at some level you have to wonder if the metaphor is fundamentally broken. If the purpose of OO is design and organization rather than a fundamental way to think about programs, then really we shouldn't say things like "Everything's an object."
We should ask what we need to do to design well-factored programs that are cohesive without being coupled and then design language features that directly address those organization requirements rather than thinking that there is this obvious "metaphor" that naturally leads to well-organized programs.
"We should ask what we need to do to design well-factored programs that are cohesive without being coupled and then design language features that directly address those organization requirements"
Language designers have been chasing that chimera for decades now. An unfortunate (in my mind at least) side effect has been legions of new programmers who are ready to fight to the death over concepts (separation of presentation and logic being an example) that are implied to be the One True Right Way, when in my experience most programming rules should not only be broken on occasion but frequently ignored wholesale.
"rather than thinking that there is this obvious 'metaphor' that naturally leads to well-organized programs."
Herein lies my greatest criticism of OO and its proponents. There is nothing inherently "obvious" about the metaphor in general, and in my opinion adding a layer of metaphor on top of an already complex process simply adds further complexity without bringing anything to the table.
I found your comment much more provocative than the article itself :)
Object-Orientation exists in several different places, each of which is more or less important.
As a code organization tool, it really shines in large projects. So if you're learning to code, you probably don't need it. Code organization gets much more rigorous in larger projects. Many times noobs with small programs start applying large-project concepts to their ten-line algorithm. This is a failure of teaching -- but probably intermediate-level teaching.
As a metaphor, it provides a way of looking at "where" to put your code. Noobs don't have a lot of code, so it doesn't matter. I agree that for small problems this extra layer can be distracting.
But the most important part of OO has nothing to do with programming: it's OOA (object-oriented analysis). This skill lets you look at your problem, and it's the most powerful thing I've experienced for solving large problems. Functional Programming puts all the symbols and problem-space in your head at one time -- you become the linker. OOA lets you play with ways to solve a problem without writing any code and without taxing your noodle at all. You deal with small, atomic blocks and their relationships with each other. This skill is an absolute necessity, even if the student never touches an object the rest of their career. It's also the one that's most lacking in regular programmers.
Functional Programming puts all the symbols and problem-space in your head at one time
I feel like the natural response is "then you're doing it wrong", but I think that's too dismissive. My comments do not apply universally across so-called functional languages, but I think it holds as a point of note. That is, when thinking about a system functionally one tends to conceptualize the data in the system and how it gets from looking like one thing to looking like another. How it gets from point A to point B is a matter of detail. So while I understand what you mean, I'm not sure that it's emblematic of how functional thinking is actually done. Functional thinking is less about functions than data.
By "link" I specifically meant data, not functions. Because everything is potentially visible, FP requires you to keep track of data structures in your head -- sometimes complex data structures with non-trivial relationships to other data structures.
Of course, there are methods to keep it simple -- starting small and growing as needed comes to mind first -- but if you're writing FP in a legacy environment with lots of nasty complex data structures you have a big headache. This would not be true in the same data environment with a well-constructed OO tier. [insert discussion here about what makes a well-constructed OO tier]
Or maybe I missed it. If I did, please straighten me out.
You're comparing "nasty" FP with "well-structured" OO, but let's ignore that for the moment.
My experience is the opposite. I run into a lot of nasty legacy OO code (think Java, not Smalltalk) and it's very hard for me to refactor it piecemeal. I've been doing a Clojure project at work for the past 6 months and it's surprisingly easy to refactor side-effect-free code.
but if you're writing FP in a legacy environment with lots of nasty complex data structures you have a big headache. This would not be true in the same data environment with a well-constructed OO tier.
With fp languages the tendency is to think in data and abstract via functions. If your functional tier is well-constructed then all is good right? [insert discussion here about what makes a well-constructed functional tier] ;-)
Pretty much all recent functional languages deal with OO programming by using multimethods and namespaces. This takes care of the problem you're talking about in a better way than most OO languages.
I don't know that OO's main goal is to be readable, but rather to manage complexity (which should help readability, but I don't think that's the focus). There's an inherent level of complexity in OO that isn't necessary when you're solving basic, learner type problems, but it can help keep complexity down when you're solving problems that have solutions that can't be kept in your head at any one time.
To put it differently, think of an algorithm that's constant time, but has a very large constant and an algorithm that's O(n). The constant time algorithm will be worse for small sets, but better for large sets (once n > that constant). That's the problem with teaching OO to people new to programming. You're adding an additional thing to an already difficult topic, and that thing you're adding doesn't really benefit you at that level of problem, but get to a bigger problem, and it starts to make sense.
I agree, but it's good to step back and think why that's true.
The real value in OO programming is being able to reduce complexity by reusing code in new ways: either by creating several instances of the same object that all manage a complex internal state, or by inheriting from another object and extending its functionality. "I need to add a window, I already have lots of windows, but this one is different because of X" works well in OO. "I need to build my first window starting from scratch," and suddenly OO does not give you any leverage.
Which is why OO seems pointless for most beginning programmers; they can't deal with sufficiently complex problems to gain anything from OO programming. However, you don't need to understand objects to use them. It takes a while to understand why you can keep adding <<'s after cout, but most beginners quickly prefer it over sprintf.
Which is why OO seems pointless for most beginning programmers; they can't deal with sufficiently complex problems to gain anything from OO programming.
Beware, this is not biconditional. I'm sure you won't have to go very far to find programmers who deal with complex problems but who do not feel that they gain anything from solving them using OO.
And there is also the argument that OO as implemented in popular languages introduces its own complexity, such that it ends up being used to solve the problems it introduced in the first place.
Complexity comes in many flavors. One of those flavors is mutable state. Reducing that makes OOP better. You can have your meta meta, and still be sane in your reuse of existing code. Keep the object abstraction, just limit its scope overall.
Many of the language and library developments of the last several years of Python have been in making the language better at expressing concepts and control flow in terms of functions which pass around simple data types, which tends to make code less object-y.
For example: context managers and decorators make it easier to change code behavior in a region of code or a function body; generators make it easier to iterate through collections of things in a declarative way, storing state without explicitly adding it to an object; named tuple objects help in creating simple data types without resorting to heavily customized objects.
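As a minimal illustration of that style (the names and values here are hypothetical, not taken from the article or thread):

from collections import namedtuple

# A simple record type without writing a custom class.
Point = namedtuple('Point', ['x', 'y'])

def midpoints(points):
    # A generator: the iteration state lives in the function body,
    # not in an explicitly constructed object.
    prev = None
    for p in points:
        if prev is not None:
            yield Point((prev.x + p.x) / 2.0, (prev.y + p.y) / 2.0)
        prev = p

print(list(midpoints([Point(0, 0), Point(2, 2), Point(4, 0)])))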
Personally, I find in Python that objects are a good and fairly intuitive way to describe data structures or anything needing many similar copies with complex stored state but where each copy is performing some particular action: e.g. representing the users logged into a chat server.
If the purpose of OO is design and organization rather than a fundamental way to think about programs, then really we shouldn't say things like "Everything's an object."
If you are an ardent fan of OO, you might enjoy spending some time with modern functional programming in clojure. OO has become the common means by which we achieve encapsulation and polymorphism, but these features can be accomplished by other means which have a different (and to some, better) feel.
When we say "Everything is an object", the core sentiment is that "Variables behave similarly". It's jarring to use object.toString() sometimes and "" + 1 others. To (approximately) fix this in an "code belongs to objects" universe, you must make everything an object (and have lots of inherited interfaces). In a functional universe you can achieve a more satisfying fix by having multiple implementations of the same method. (str object) and (str 1) behave similarly without imposing interface requirements on the different types. To have large scale projects, this requires that the str function be implemented in different places and dispatch correctly, which is a sophisticated language feature that isn't available in all functional languages.
I have long been a fan of python and The Good Parts of JavaScript. They have their problems, but they have served me well thus far. But OO is actually fundamentally flawed, and you only need to look as far as the Java library situation to see how exponentially complex strongly typed and independently developed OO can get. Duck typed OO has tried to answer the call, but this still results in duplicate, inconsistent interfaces monkey patched at runtime to achieve a measure of sanity.
Clojure is really changing my opinion - you get great performance (compared to the majority of languages), optional static typing, and significantly less crufty code. Whether it will age well is of course unknown, but so far it beats the pants off of any other language I have used (including scheme and Common Lisp).
In Common Lisp we have optional type declarations, and a common strategy for implementations (for SBCL, say) is to treat such declarations as assertions by default, leading to more safety, and for performance-sensitive parts of the code you can "(locally (declare (optimize speed (safety 0))))", which makes the type declarations act as performance optimizations for that part of the code.
In the last two paragraphs of your edit you seem to have sort of slipped in an assumption that our designs should be constrained, to at least some extent, by what we can teach new programmers in their first few weeks? OO can still be a helpful metaphor, while at the same time perhaps not being the very very first thing we should teach. For instance, perhaps we should not teach the solutions of OO until the student has gone far enough to encounter the problems it solves, to at least some minimal extent. (I include the question mark on purpose; I doubt this is exactly what you meant.)
As a bit of a sidebar, I'd also observe that the metaphor we ostensibly want to teach our newbies, about how objects are nouns and methods are verbs and it can model the physical world and let's go talk about class hierarchies involving animals and vehicles, has fallen out of favor in the more advanced OO circles anyhow, since it is, not to put too fine a point on it, a useless way to design real programs. This would also seem to suggest that perhaps OO isn't necessarily the best thing to go out of the gate with.
What prompted this wandering, incoherent mumble was wondering if "what we can teach new programmers" is somehow related to "what is readable by unskilled programmers new to our programs." If OO gets in the way of new programmers, I was wondering if it would get in the way of the mythical maintenance programmers trying to figure out what our programs do and how to modify them.
I don't really have a full conclusion, but if we grant that it is a useless metaphor, and just a tool for organizing programs, I wonder if it ought to be relegated to a corner of most languages rather than being put front and center.
I was wondering if it would get in the way of the mythical maintenance programmers trying to figure out what our programs do and how to modify them.
This seems to me to be the point at which your idea goes strange. Why assume that maintenance programmers are interchangeable monkey-cogs with minimal understanding of software development and design? Perhaps we'd have better software if we treated maintenance and ongoing development as tasks requiring just as much, if not more, talent and knowledge and skill and practice as green field development.
(I understand that soi-disant architects might disagree, but I ignore architects who don't code.)
"Thinking further about this, I am not against the idea of design considerations that should be kept from the beginning programmer. But OO isn't really a design consideration, it's a metaphor."
I would think of OO as just another tool that we've created over the years to help make programs. It's a big tool, with a lot of history to it, and one that people used to think was the ultimate all-in-one tool that you needed to use for everything (nowadays fewer people think that).
But in any case, when you teach someone something new, you can't give them too many tools at once.
So while I agree with a lot of the sentiments you expressed, I don't think that what you do or don't teach new programmers has that much bearing on what you should or shouldn't use. After all, in your first lesson you probably won't be using loops either.
OOP is about (subtype) polymorphism, or more precisely about functions with polymorphic behavior in regards to the (implicit) parameter. Everything else is there to support and make it easier to apply this concept.
Of course you don't need OOP for every problem, because you don't need polymorphism for every problem. And applied improperly, every type of polymorphism (whether it's ad-hoc with early or late binding, or parametric) is broken, not to mention hard to explain.
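For concreteness, a minimal Python sketch of subtype polymorphism on the implicit parameter (the class and method names here are hypothetical):

class Shape(object):
    def area(self):
        raise NotImplementedError

class Circle(Shape):
    def __init__(self, r):
        self.r = r
    def area(self):
        return 3.14159 * self.r ** 2

class Square(Shape):
    def __init__(self, side):
        self.side = side
    def area(self):
        return self.side ** 2

def total_area(shapes):
    # The same call, shape.area(), dispatches on the runtime type of shape.
    return sum(shape.area() for shape in shapes)

print(total_area([Circle(1), Square(2)]))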
And as an exercise, I want you to count the number of applications where you haven't used any type of polymorphism. The only ones I can count myself are the small exercises of the type I used to do in high school.
What are we to make of this quote: "OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things."--Alan Kay
Do we interpret that as suggesting that polymorphism is the thing and that Alan is using different words to describe it? That polymorphism is an implementation technique for achieving the goals Alan is describing? Or perhaps that in the thirty years since Smalltalk was commercialized, we have moved on from its initial vision?
Alan Kay also said: "I made up the term "object-oriented", and I can tell you I did not have C++ in mind."
Going by Kay's original definition, Java, C# and C++ do not qualify as object-oriented languages. I think we may have to accept that the original term has become somewhat diluted over time.
However I don't think OOP is all about polymorphism, because you can have polymorphism without objects (e.g. Clojure's protocols or Haskell's type classes).
the original term has become somewhat diluted over time
It's not the original term. OOP started with Simula 67 and then split into 2 branches: the Alan Kay branch and the Barbara Liskov branch.
Alan Kay coined OOP for his branch, but the name caught on. So now OOP can refer to any descendent of Simula 67. Even ones he didn't have in mind (C++ is on the Liskov branch).
The thing is: the branches aren't really compatible philosophically. That's a big reason for the amount of confusion that exists.
In Alan Kay land objects are sub-computers that receive messages from other sub-computers. In Barbara Liskov world objects are abstract data with operators and a hidden representation.
But OOP isn't really a type of polymorphism, any more than a Ford Focus is a type of internal combustion engine. You claim that:
> OOP is about (subtype) polymorphism... Everything else is there to support and make it easier to apply this concept.
But if that were true, why does your typically OOP language handle polymorphism so poorly? If your only goal is subtype polymorphism, why not just have multimethods and type inheritance and be done with it? Why bother with encapsulation? Why only have single-dispatch?
I haven't said that OOP is a type of polymorphism, I only said that OOP has a type of polymorphism, which is its most important property and easy to overlook when talking about OOP.
I'm also not saying that you could throw away the rest - thinking in objects has value, having the ability to inherit implementation has value (in certain cases) and so on; so definitely the whole is bigger than the sum of the parts.
The comment I responded to said OOP is a metaphor ... but now you're saying it's like a Ford Focus with a certain type of combustion engine. That's exactly my point - it's easy to overlook what makes it tick.
And you ask questions to which you (probably) know the answer.
Multimethods are sweet, but a bitch to optimize and only useful in certain use-cases - that's why Clojure is backing away from them. Single-dispatch is what you need in 80% of the cases and it can be optimized aggressively ... notice the JVM, where most method-calls end up not doing a vtable-lookup (not that you can have a vtable with multimethods).
C++ / Java handle OOP so poorly because these are languages with a static type system that can't blend in with OOP (where method dispatching is done at runtime). But that's what people asked for.
"messaging" / "extreme late-binding of all things" - this is about polymorphism.
"local retention and protection" - encapsulation is THE goal of every engineer (i.e. black boxes that take some input and reliably give you some output, without you worrying about the details) and is in no way an exclusive property of OOP, although thinking about data-structures as objects that respond to messages gives you powerful tools to do that.
EDIT - and thinking about beginners that can't understand OOP, encapsulation is also hard to explain, but that doesn't make it any less of a useful goal.
"Messaging" also means loose coupled. It enables things like Structural interfaces (you implement what you implement) like in Go, instead of declarative interfaces (you implement what you say you implement) like in Java/C++/C#.
"Messaging" and its loose coupling also means the ability of doing Method Lookup Alteration and Interception [1], such as those provided by Smalltalk’s doesNotUnderstand, Ruby’s missing_method and Python’s __gettattr__ method. This is crucial because it allows you to compose objects, as they are strippable down to the Common Lisp Object System system: an object is a function that takes a message specification (method name and args) and decides what to do with it.
Yeah. Messaging is really important and powerful. It also explains why Alan Kay said: "I coined the term object-oriented, and C++ wasn't what I had in mind."
"Messaging" is polymorphism in action and a way to describe it to people without headaches by concentrating on benefits and use-cases, instead of implementation details.
And to be a little pedantic -- virtual method calls in Java/C#/C++ are also messages.
What Alan Kay meant was that a static type system doesn't blend with the OOP he enabled in Smalltalk. Do you know why?
Because in OOP polymorphism, method-dispatching is late-bound. And being late-bound it makes absolute sense for the developer to be able to override the method-dispatching that's being done (by means of method_missing and other mechanisms). But that's directly at odds with the purpose of a static type system.
And btw, structural typing is not dynamic typing. It has the same problem as the one described above.
A lot of the important concepts we'd consider "object-oriented" are implicit and will likely be picked up intuitively.
If you're writing python functions that return ADTs like dictionaries, sets, and lists, you're already using OO programming. Your function is a constructor, and you get back a collection of data with a set of methods that can operate on that data.
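For instance, a tiny sketch of that point (the names here are hypothetical):

def make_counter():
    # The dict returned here is, loosely, an "instance": a bundle of state
    # plus the dict methods that operate on it.
    return {"count": 0, "label": "clicks"}

c = make_counter()       # make_counter acts like a constructor
c["count"] += 1          # mutate the instance's state
print(sorted(c.keys()))  # the data arrives with methods attached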
I wouldn't say that "non-OO is better" for a beginner, just that there isn't much advantage to a peculiar focus on object-orientation, especially as a program design philosophy.
OOP describes the real world in the same way that the metaphor "Raganwald is a diamond in the rough" describes your ability to act as a piece of jewelry.
Speaking from personal experience, when I transitioned from C (first language that I was three months into) to Objective-C, I couldn't STAND the number of cutesy metaphors used to describe what objects were and how they functioned: "Like, you can send the duck a message to swim, or to fly." "Ducks inherit from birds."
Coming from a highly plausible programming world of data and operations on data, this came off as nonsense. I wanted to tear my hair out finding a sober explanation of what an object actually WAS, practically, in practice instead of hearing them referred to as things that could be told to do stuff. Apple's Object-Oriented Programming was by far the most enlightening document in my early days:
> Coming from a highly plausible programming world of data and operations on data, this came off as nonsense.
It is nonsense.
On IRC the other day, I said this:
19:19 < xentrac> I propose a new rule for discussions of object-oriented
programming
19:20 < xentrac> which is that anyone who brings up examples of Dog, Bike, Car,
Person, or other real-world objects, unless they are talking
about writing a clone of The Sims or something,
19:20 < xentrac> is immediately shot.
You should propose alternatives that are similarly universally familiar.
Let's not forget that a Dog is not a dog; it's a word for a dog. A Bike is not something you can ride around on; it's a name for a concept you have in your head, by which you can classify real-world instances. Ceci n'est pas une pipe. I've noticed a lot of people who dislike OO seem to have a very distorted idea of what OO is and isn't...
Discussing real-world objects makes it easier to grok what polymorphism is about - subtype polymorphism being almost equivalent to OOP, its most important property and the hardest to understand.
Exemplifying with a Duck that's also a type of Bird is good - many times you don't give a fuck that an object is a Duck, all you care about is whether it can fly or not.
And to replace these examples, however awful they may be, you've got to find other alternatives.
The trouble is that, in good OO programming, we don't make class hierarchies in order to satisfy our inner Linnaeus. We make class hierarchies in order to simplify the code by allowing different parts of it to be changed independently of each other, and to eliminate duplication (which comes to the same thing). Without any context as to what the code needs to accomplish, you can't make a judgment about whether a particular design decision is good or bad.
Here are some real-world examples (drawn from my own code, sorry):
• A Timer is a horizontal strip on the screen with a stripe racing across it. A NumericHalo is a spreading ripple on the screen that fades. A Sound is a thing on the screen that makes a sound when a Timer's stripe hits it. A Trash is a thing that deletes Sounds when you drop them on it. They all inherit from Visible, which represents things that can be drawn on the screen and perhaps respond to mouse clicks or drop events, but they do different things in those three cases. In addition, the Trash and Sounds are subclasses of ImageDisplay, because the way they handle being drawn is simply to display a static image, so that code is factored into a superclass. http://www.canonical.org/~kragen/sw/pygmusic/pygmusic.html#V...
The next two are only reasonable examples if you know the λ-calculus:
• Var, App, and Ind are three kinds of Nodes representing an expression graph: variables, applications, and indirection nodes. But in evaluating an expression using combinator graph reduction, sometimes you need to mutate one kind of Node into another. So instead of making them subclasses of Node, there are Strategy objects called Var_type, App_type, and Ind_type, and Node objects delegate almost all of their behavior to that Strategy object: http://lists.canonical.org/pipermail/kragen-hacks/2006-Novem... Note that this allows the code to later extend the combinator graph type with λ abstractions by adding another Strategy object.
• Doing combinator-graph reduction directly on λ-expressions, you have three classes of graph nodes — variable references, λ abstractions, and function applications. All of these types support a common "graph node protocol", which recursively generates printable representations of them in various formats, such as Scheme, TeX, plain ASCII text, or Perl. In a language like Java, this would be an interface, but in Perl, protocols don't have an explicit representation, so these classes don't inherit from anything in common. These classes implement an additional protocol used for β-reduction, which consists of two methods, "occurs_free" and "substitute". However, β-reduction to normal form requires identifying β-redexes, which is not something that a single expression graph node can do on its own, so this is done by functions that violate encapsulation by making explicit is-a checks. http://lists.canonical.org/pipermail/kragen-hacks/1999-Septe...
The next example is only good if you already know basic algebra:
• The Formula class overloads a bunch of operators to make it convenient to combine formulas into other formulas, and it has subclasses Binop, Unop, Variable, and Constant that represent particular kinds of formulas. These subclasses implement methods derivative (you know, taking the derivative, calculus), identical, simplified, and eval, as well as coercion to strings. Derivatives and simplification in particular are wildly different for different kinds of binary operations, so you could imagine making subclasses of Binop for the five binary operations supported, and that would probably be better than what I in fact did, which was to use dictionaries of lambdas for evaluation, derivatives, and simplification. This would involve replacing instantiations of the Binop class with calls to a factory function. http://lists.canonical.org/pipermail/kragen-hacks/2001-Janua...
• The Subfile class implements most of the same interface as Python's built-in file type, but provides read-only access to only a small byte-range of a file. This means you can pass a Subfile object to a function that expects to be passed a file object, and it will work on that small part of the file. This allows the `_words` function (search for `def _words`) to be written as if it were reading an entire file, separating out the tasks of checking for the end of the byte range and breaking a file into words into two separate objects. http://lists.canonical.org/pipermail/kragen-hacks/2006-Augus...
• UI interacts with the user of a chat client; icb interacts with a chat server; httpreq interacts with a web server. All three of them implement the protocol that asyncore expects of "channel" objects, which involves methods like readable(), handle_read(), writable(), handle_write(), and so on — basically things that manage event-loop I/O on a file descriptor. They also all inherit from asyncore's `dispatcher_with_send` class, which provides default implementations of some of these methods. Two of them also inherit from an `upgradable` class, which provides an implementation of dynamic code upgrade — replacing their current code with code newly loaded from the filesystem. (This allows you to hack the code of the client without closing and reopening your chat connections.) http://lists.canonical.org/pipermail/kragen-hacks/2002-Augus...
(Relevant drive-by dismissal: http://www.netalive.org/swsu/archives/2005/10/in_defense_of_... "this article [by Fowler] finally made me understand what OOP was all about. This is not true for many other articles and yes, I'm looking at you, shitty Car extends Vehicle OOP tutorial.")
The problem with the "Duck extends Bird" kind of example is that it gives you no understanding of the kind of considerations you need to think about in order to decide whether the design decisions discussed above are good or bad. In fact, it actively sabotages that understanding. You can't add code to ducks; you can't refactor ducks; ducks don't implement protocols; you can't create a new species in order to separate some concerns (e.g. file I/O and word splitting); you can't fake the ability to turn a duck into a penguin by moving its duckness into an animal of some other species that can be replaced at runtime; you can't indirect the creation of ducks through a factory that produces birds of several species, and even if you could, the analogy doesn't help at all in understanding why the analogous thing might be a good idea in the Binop case; penguins don't implement the "fly" method that can be found in birds; whether you consider ducks to be birds or simply chordates does not affect the internal complexity of ducks; and you don't go around causing things to fly without knowing what kind of bird they are. (Ducks themselves decide when they want to fly, and they certainly seem to know they're ducks and not vultures.)
So I disagree that the analogy "makes it easier to grok what polymorphism is about". It's misleading; it obscures the relevant while confusing people with the irrelevant.
Oh, wait. That works. I just think that the lambda-calculus example may be a bit advanced for a programmer who's only starting to learn what polymorphism is. Not every coder starts off with Scheme.
Person has a GunShotWound, which is a subclass of Wound. (cf. AbrasionWound, LacerationWound, etc.) Also note the new properties such as bulletType, entryPoint, and fragmentationPattern.
Actually, I'll be honest: I quite like dependency injection. There are a lot of really good things that can be done with Guice. I've written one for C++ here: https://bitbucket.org/cheez/dicpp/wiki/Home
When I taught myself to program, I learned a functional language, in no small part because the language that I was learning was not object oriented. After some time, my code was starting to turn into spaghetti, so I started enforcing a rule where I put functions that operated on like data in separate files, and created abstractions so in my primary program flow I could work at a high level to keep things clean and understandable. When I was taught about objects they weren't very easy to understand, but when I connected the dots and saw it was what I was already doing, plus state, it all came together. I don't think that most educators do a good job explaining how OOP can really help you, they just start rambling about re-usability and polymorphism. I knew how to program, so I could understand why you would want these things, but to a newcomer they must seem meaningless and unapproachable.
If I were teaching, I wouldn't start new programmers with OOP, but I would bring them there quickly. Imagine an assignment where you have the students model a car, a stop sign, and a traffic light, and their interactions on a closed course. I'd have the students model the interactions using ruby hashes (e.g. car = {:color => 'blue', :state => 'stopped', :stop_time_required => 1}, traffic_light = {:color => 'red', :stop_time_required => 30}), and a series of functions for the first assignment, where the program prints what happens as it encounters a pre-defined course. This should result in some pretty crappy spaghetti code. For the next assignment, I'd have them clean up the structure of the program, separating the code into logical units based on what the code is operating on, a car, a red light, a stop sign. Lastly, I'd introduce OOP and have them update the code for OOP. It should take a lot less code to do the same thing, and it should be a lot cleaner. Polymorphism can be addressed by instructing the students that red light and stop sign are both traffic control devices, and having them implement to_s on each class to clean up their string handling code.
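As a rough sketch of that first assignment, transliterated into Python dicts and functions (the comment above uses Ruby hashes; all names and values here are made up):

# Stage one: plain dicts plus free functions, before any classes exist.
car = {"color": "blue", "state": "stopped", "stop_time_required": 1}
traffic_light = {"color": "red", "stop_time_required": 30}
stop_sign = {"stop_time_required": 3}

def encounter(vehicle, device):
    # Print what happens as the car meets each traffic control device.
    vehicle["state"] = "stopped"
    print("car waits %d seconds" % device["stop_time_required"])
    vehicle["state"] = "moving"

course = [stop_sign, traffic_light, stop_sign]
for device in course:
    encounter(car, device)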
OOP can make lots of sense when it's presented as a time-saving measure. However, especially in Java classes, it tends to start with a wall of code thirty lines long to print "Hello World", with the direction to ignore everything else. It would probably be better to start simple, with a one-line program (e.g. puts 'Hello World'), and work up from there. That's not to say Ruby is the right language; it's probably a good starter language, but a strictly typed lower-level language would probably be better once they get the hang of things. The beauty of Ruby is that you can create useful code quickly, but you still need to be able to understand what's going on under the hood.
The two classes I learned the most from were a digital hardware class and an assembly course. In the hardware course, the professor gave me an FPGA to work with after he saw I understood digital logic very well, and on that FPGA I implemented a basic instruction set, including jump, which could be entered using toggle switches, executed using a push button, with results output to an LCD. In the assembly course, the professor spent a great deal of effort manually compiling C to assembly, and explaining why. He was on the ANSI C committee, and worked full time writing assembly for spacecraft at NASA, so he offered a unique perspective, and quite a bit of insight.
Objects are not ways of solving problems. They are a metaphor, which gives you a handle into the problem space.
OO design is building a metaphorical data storage mechanism for your system, and then you have to carry the data across into the metaphor and back again.
I wasted countless hours when learning programming trying to wrap my head around coding in an OO fashion. Eventually I integrated enough OO to make my programs better. But I never really got 'into' OO, although I tried real hard!
I don't have good words yet to explain the problem in detail, but fundamentally, a metaphor is not the solution. Most discussions of object orientation badly confuse themselves with that. As an example - an interface (ala Java/C#) is not an interface as a hardware engineer would speak of it: an interface from the OO perspective is some sort of promised type or 'protocol'.
Are you building a model of your problem? Or are you solving your problem? I'm willing to make a small wager that building models of your problem doesn't pay off until you really start scaling your system in some direction. If we use the Pareto idea, only 20% of your systems actually need the heavyweight approach - the other 80% can be put together without the heavy object-oriented machinery.
Anyway, I'd rather be given the opportunity to teach in Scheme or other functional untyped language, instead of Java or C++. :-)
couldn't agree more. i started with java in high school and was too caught up in "public static void main" to realize that it was up to me to define methods that made sense and that things generally ran top to bottom. switching to php for a while made everything "click" before heading back to java
getting rid of CS terms and data types allows you to learn how to think in program flow and "what am i actually doing here". failing to capture the attention in this first step is lethal to most people that otherwise would be good at programming if they hadn't run quickly at their first try
That's precisely why I think Java is a horrible language to start people out in.
Well, not language. Library. It's abstracted to the extreme, and confusing as heck to anyone just starting to learn how to think about how to program.
Start people out in something simple. Ruby can teach bad habits, so don't tell them about monkey patching, but it's so simple it's a great way to hook people, and has simple console / file IO that'll let newbies actually do something with their skills, instead of being glad their project doesn't overflow its array bounds or get stuck in hideous C input handling deathtraps (which flags do you have to reset in the input stream if they start inputting Japanese? I forget...)
Best of all, despite being OO, you don't need a class to start doing things. Something Java gets wrong for beginners.
#!/usr/bin/env ruby
puts "hello world"
Classes can wait, they're a whole can of worms that should come after people learn to think clearly enough for a computer to understand them.
If you're burnt out on using Java for introductory courses, consider using Groovy. (http://groovy.codehaus.org/). It takes a lot of the pain and boilerplate code out of java, making it easier for students to focus on what they're actually doing and not writing "public static void main String args" all of the time.
Similar to your example, this is a valid groovy program:
println "Hello!"
But, unlike Python or Ruby, you're still in the Java world so migrating back to pure Java is a lot easier. Though, the shrieks of pain you'll endure when you explain why they now have to write constructors or getters and setters might hurt. :-)
Have seen it, but never used it. It looks decently active on the forums, despite the home page showing an award from 2007 (and looking older than that)... how's the community / code health? On something like a .NET-to-Ruby/Python scale? I'll keep it in mind for any future Java necessities.
I agree with your comments to some degree. My first programming course was in Java. I definitely did get a bit confused in the whole "public static void main" but at the same time, when I moved to python, it was trivial to learn, and I felt like I had picked up some important concepts along the way. You can get lost in the whole architecture thing, but it can help you understand design abstractions as well.
Interesting, but one could go further with this: there still seems to be the assumption that learning OOP concepts is indispensable and necessary in the long run.
This may or may not be true depending on the application area, language and intent of the programmer. Some languages and frameworks make it impossible to get anything done unless you know how to subclass things; at the other extreme you've got Stepanov-inflected C++.
Arguably you would have people learn at least a little more about algorithms and data structures of greater complexity before dropping the OO-hammer. I still have moments where mid-way into astronauting up a class hierarchy I realize that all of this could be done more clearly in 10 lines of STL/Boost (and yes, I'm aware of the problems of that style of coding, too).
You misunderstand. I'm merely using STL/Boost as one example of advanced non-OO programming. Many other such paths to sophisticated programming exist that don't go through OO, including the ones you name.
I use STL/Boost in this case not because I think they're wonderful but because they're realistically what I'd have to use, as our codebase is mostly C/C++.
I don't think objects unavoidably lead beginning programmers to get lost in "architecture."
My go-to language for teaching new programmers is Ruby--in part because it's my strongest language, but also because it's capable of expressing most programming problems intuitively.
Is Ruby object-oriented? Yes and no. Certainly, objects are core to its design. But it's also a scripting language that can be used purely imperatively. Or you can use it like a functional language.
What makes it so good for beginners, in my opinion, is that in Ruby, things are objects that naturally seem like objects. For example, it's very natural to think of a string as an object that can be passed around and manipulated, and Ruby makes this totally intuitive.
Writing "Hello world".downcase.sub('hello', 'goodbye') doesn't tempt beginning programmers to create bizarre class structures or reams of boilerplate code. But it does make you feel empowered, and because it's so darn logical, it makes programming seem way less scary to a beginner.
Agreed. It's the hogs like Java, C# and C++ that gave OO a bad name. Moreover, in dynamic OO languages like Ruby, even scary things like design patterns usually take a lot less time and fewer lines of code to implement and, later, understand.
I'd actually skip on Python and give them Scheme with a copy of "The Little Schemer."
However, I agree with the OO sentiment. It brings a lot of vernacular and concepts that are not important to the building of a program; at least not at first. Classes, objects, meta-classes, inheritance, multiple inheritance, method resolution order, operator overloading, class methods, static methods... it's all baggage for a new programmer. It's an interesting way of organizing code and encapsulating the responsibilities of different parts of a larger program... but it's a purely architectural tool.
For a beginning programmer they aren't worried about such design concepts. They just need to understand inputs, outputs, and control flow. A single function is a program for them. Once they have a grasp of the fundamentals then you can think about introducing them to OO programming.
Beginner's mind. It's hard for an experienced programmer to grasp.
It's like trying to teach someone basketball for the first time and at the same time trying to explain a pick and roll, a full-court press, the triangle offence, etc. You'll just confuse them.
Teach the basics and slowly ease into the big picture/strategy stuff when they have a handle on the fundamentals.
It's a question of scale. Modules become increasingly important for larger projects. It's a form of infrastructure, ridiculous for a birdhouse, essential for a city.
I wonder if part of the problem is that objects and classes are additional concepts to understand, on top of statements, variables, functions, modules, data types, if, while, for, and so on. In a language like the ς-calculus, you wouldn't have to introduce objects and classes separately from those other things.
Object Oriented programming exists because it is how humans conceptualize the world around them. Humans think in objects and actions on, in and between objects. OOP is a translation of natural thinking into systems and behaviors. Why is this a bad thing, especially for learning?
I guess my question is, why do OOP and a natural translation between the real world and the code world lead to this discussion and a certain level of condescension around utilizing the concepts of OOP?
The way humans conceptualize the world is miles away from how computers work. We (or at least I) conceptualize the world in terms of relationships between things we encounter, but the focus of OOP is designing individual classes and their hierarchies.
But note that, in everyday life, relationships exist mostly between objects that would have been instances of unrelated classes in OOP. For example, my SCREEN IS ON TOP of the TABLE; I (a human being) AM SITTING on a CHAIR; a CAR IS ON the ROAD; I am TYPING on THE KEYBOARD; the COFFEE MACHINE BOILS WATER.
We think in terms of bivalent (sometimes trivalent, as, for example, in "I gave you flowers") verbs, where the verb is the action (relation) that operates on two objects to produce some result/action. Contrast that with OOP, where you first have to find one object and tell it to perform some action on another object. The way of thinking is shifted from actions/verbs operating on nouns to nouns operating on nouns, which is highly unnatural for me. It is as if everything is being said in passive voice, e.g., "the keyboard is being typed on by me".
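To make the contrast concrete, a small hypothetical Python sketch of the same trivalent relation written noun-first and verb-first:

class Person(object):
    def __init__(self, name):
        self.name = name
    def give(self, recipient, gift):
        print("%s gives %s to %s" % (self.name, gift, recipient.name))

def give(giver, recipient, gift):
    print("%s gives %s to %s" % (giver.name, gift, recipient.name))

me, you = Person("me"), Person("you")
me.give(you, "flowers")     # noun-first: the giver "owns" the verb
give(me, you, "flowers")    # verb-first: the action relates the nouns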
I'm not sure that's true. In OOP, methods belong to objects. In a natural language, verbs don't really belong to nouns. To me, the structure of OOP is very different to how people typically think.
Better yet, teach them a language which doesn't even have the "extra layer of [OO] nonsense". Teach them something simple, clean, and logical. Teach them Haskell.
Not a monad in sight. Yes, print is an IO action, but you don't have to use monads to do IO. Add in 'interact' and you can be writing trivial stdin/stdout programs very quickly.
The first point is not too much of a downer, actually: it's not much harder to learn the IO monad as pure syntactic sugar than to learn printf and its ilk.
As someone who gave Haskell a shot, I was very quickly put off by having to learn functions to convert from integers to floats, while such things were automatically taken care of by more low-level programming languages such as C.
Also, I was confused by the package management with Cabal, and now I appear to have broken it with an error message whenever I start ghci. When choosing a first language for students, you have to think of idiots such as me.
Where exactly were you mixing integers and floats? In my experience it doesn't come up too often, and if it does (e.g. writing area = 2 * pi * r) you'd be better served changing the 2 to 2.0.
I sometimes want to avoid using floats because of the inaccuracy that sometimes arises with their use. If I recall correctly, in Haskell division with / requires the numerator and denominator to be of the Fractional class. So how I did integer division was something like this:
round (fromIntegral x / fromIntegral y)
round is one of several functions that transform numbers to integers.
Haskell's doing the right thing here by having a separate function for integer division. It makes no sense to overload / to mean both regular and integer division.
A reminder: the objects are only in your mind, not in the computer. They exist only as a metaphor to let you wrap your mind around programming more easily. Rarely does the easier path yield any benefit except shorter travel time.
That's hugely debatable, actually. If you ever called a command-line application in one of those scripts, you called what's essentially an object - it encapsulates its own behavior, hides how it does it, you can create multiple ones (instances) without them competing or sharing information, the command line arguments are arguments to the constructor, etc. They're nearly identical.
OOP concepts aren't what's getting in the way. It's the boilerplate most languages require for their preferred object systems. A good language for beginners would have to make it easy to progressively encapsulate things as the program grows, without requiring substantial rewrites. It should be very easy to go from a few named variables to an array or dictionary, and then to add methods to that to make it a proper class.
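A minimal sketch of that progression, with hypothetical names:

# Step 1: a few named variables
name, balance = "alice", 100

# Step 2: group them into a dictionary, with free functions operating on it
account = {"name": "alice", "balance": 100}

def deposit(acct, amount):
    acct["balance"] += amount

# Step 3: promote the dict-plus-functions into a proper class
class Account(object):
    def __init__(self, name, balance=0):
        self.name = name
        self.balance = balance
    def deposit(self, amount):
        self.balance += amount

acct = Account("alice", 100)
acct.deposit(25)
print(acct.balance)   # 125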
People often ask what's the single biggest difference between a good programmer and a great programmer.
I've heard a lot of good answers dealing with talent, hard work, experience, imagination, communication, cleverness, simplicity, vision, and even laziness.
The shift from procedural to OO brings with it a shift from thinking about problems and solutions to thinking about architecture.
This makes me think about my answer: You first enable yourself to become a great programmer the moment you stop worrying about your own problems and focus primarily on your customer's.
Despite the other comments, I think you make a very valuable point. Take out the "customer" angle and I think they'd also agree. It's really thinking about the problem, working the problem and solving the problem. Thinking about and working on your toolset is not solving the problem. Whether the problem is yours or your customer's makes little difference.
When you focus primarily on your customer's problems and forget your own problems, you tend to give your customers something that's not what they really want.
You have to think about your own problems in the context of your customer's needs. You are you, your customer is your customer. Each to its own.
I'd say many of the best programmers never had a customer, and therefore only had their own problems to worry about in the first place. The person you're describing is a great businessperson, but that's neither necessary nor sufficient for being a great programmer.
As someone who is just making the transition to OOP in Python I totally agree with this post.
I started with PHP:
Learning programming syntax was really annoying
then learning loop and nested loop structures was hard
then learning a set of useful built in methods was hard
then learning how to interact with other systems was really hard
All the time I was learning this basic stuff I was writing code primarily for myself. I was the only one who maintained it and had to use it.
Now that I've moved over to python and I'm working on a code base that I expect to last, and will get to work with others on, OOP makes a huge amount of sense.
OO code may have more overhead to write (it does), but I find it much easier to read good OO code, now that I understand the model, than I ever found procedural code, even as I grew comfortable with writing it.
As a number of others have mentioned, OO code is about architecture, and that seems to be what matters when you're trying to grok other people's code. The specifics of how they did it only matter when you start debugging or optimizing. (In my limited experience to date.)
I look forward to really getting OO, writing lots of code, and then starting to climb the functional programming mountain. But the poster is right, start bare bones, start simple, and build from there!
I think that a language for beginners should be as paradigm agnostic as possible. Lua does not force OO or functional programming upon the user. And unlike Ruby, there aren't a bunch of list-like datatypes like lists, arrays, and hash tables to keep track of. There is just the table datatype which takes care of all of those.
Lately I lament that Python, Perl, Ruby and PHP are the dominant scripting languages instead of Lua. None of the former really justify their extra complexity, IMO.
I think this is a bit silly. The problem isn't OOP, it's how OOP is taught, and how OOP is abused.
To be honest most programming education is so terrible that it will still come down to who has the talent and passion to figure things out on their own, that's been true completely orthogonal to the presence or absence of OOP education.
As a new programmer I felt blessed when I learned OOP.
I could actually write long and clean pieces of code, without spending double time on restructuring them. Clean and coherent structure is important for understanding, and understanding your own code, and remembering what it does, makes development more fruitful. Since new programmers often work iteratively by adding new features to their code, this is positive rather than negative, and that extra learning effort is always rewarded.
By knowing OOP you can access others' libraries in the language you recommend. By using others' libraries and by altering them, you can create more cool stuff faster.
Or you could use a language that uses object orientation in a more natural way, such as Ruby and Scala, which both are fundamentally OO languages.
You can do procedural-y things with both languages, much to their credit, but packaging up functionality is both natural and intuitive.
Overbuilding APIs is not a sign that objects are bad; it's a hallmark of bad design.
So, yeah, i agree, prototype a minimal solution and iterate, and teach others to do this. This is an issue that is orthogonal to object orientation. At least, it is in languages that don't make you jump through weird hoops (C++ & Java, i'm looking in your general direction).
The delta between Python and Ruby is way, way smaller than the Ruby community seems to think it is. You are aware that everything is an object in Python, too, right? The idea that that isn't true still seems to be knocking around the Ruby community. Everything he said about Python applies precisely to Ruby (or fails to apply in exactly the same way if it is a bad argument), because there's hardly any difference that matters to a newbie.
"But, but, len is a function! And you can't monkeypatch the base classes even though you can monkeypatch everything else!"
Yes, please, by all means burden your neophyte with that tirade. They'll really appreciate it while they're trying to figure out what an "if" statement really does or why calling a function with the wrong capitalization doesn't work.
If you don't buy it then don't get distracted by it.
In Ruby's REPL, i can interrogate any object for the methods it has on it (unless it's using method_missing in horrible ways), effectively allowing me to list out its API.
As far as i'm aware, that's not the case in python's REPL, or at least, i was never able to figure out how to do it (i will fully admit to not being a python expert, but at the same time i'm not a n00b either).
Also, please note, i'm not dissing Python, i like python (just not as much as i like Ruby).
I stand by my argument about OO. It should be possible to quickly and succinctly describe what OO is supposed to do, and demonstrate how your language uses them to bundle up functionality. If you can't do that, you're doing it wrong.
Objects should not get in the way of being able to write code.
Lastly, please don't put words in my mouth. I don't think monkeypatching is relevant to the discussion, and the fact that len is not a method on objects is weird, but ultimately inconsequential.
In a Python REPL, dir(object) and help(object) are what you are looking for. Both these functions are in the default namespace.
dir(__builtins__)
Returns a list of strings for all the names bound in the prelude namespace.
dir(list)
Returns a list of strings for all the methods on the list class.
dir([])
Returns all the methods on a particular instance.
And so on. The same caveats apply to dir with getattr etc. as to method_missing. An additional caveat I haven't tested: it may not work on an old-style class instance.
help has its __repr__ defined to tell you how to use it. It will allow you to read the docstrings for a namespace, object or function. However, I prefer to run 'pydoc -p 8888' in my project's top level; this will start an HTTP server on port 8888 that lets you interactively browse your project's docstrings and all the modules on its pythonpath.
Better yet, use the help() function instead, which also displays the docstrings for said object and functions (assuming the programmer gave those, which fortunately holds for the entire stdlib).
> You are aware that everything is an object in Python, too, right?
Python is missing encapsulation and message passing. Its object model seems to be similar to something like PHP or JavaScript: objects are essentially hashes. In practice, that's not a huge deal, but they aren't quite the same.
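A small sketch of the "objects are essentially hashes" point (the class here is hypothetical):

class Bird(object):
    def __init__(self, name):
        self.name = name
        self.can_fly = True

b = Bird("duck")
print(b.__dict__)             # e.g. {'name': 'duck', 'can_fly': True}
b.__dict__["can_fly"] = False  # poking the underlying dict changes the object
print(b.can_fly)              # False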
This is not important for beginning programmers, but for programmers thinking about their toolset more information can only help.
"Any traditional OOP programmer might tell you that essential elements of OOP are encapsulation and message passing.
The Python equivalents are namespaces and methods.
Python doesn't subscribe to protecting the code from the programmer, like some of the more BSD languages. Python does encapsulate objects as a single namespace but it's a translucent encapsulation."
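Roughly what that "translucent encapsulation" looks like in practice (hypothetical Account class; the attribute names are made up):
>>> class Account(object):
...     def __init__(self):
...         self._balance = 0    # single underscore: "internal, please don't touch"
...         self.__audit = []    # double underscore: name-mangled, yet still reachable
...
>>> a = Account()
>>> sorted(a.__dict__)           # an object's attributes are just a namespace (a dict)
['_Account__audit', '_balance']
>>> a._balance = 100             # nothing stops you; it's a convention, not a wall
>>> a._Account__audit            # even the mangled name is reachable if you insist
[]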
TBH, private fields are a stupid idea. They're just a cutesy concept that gives the newbie an illusion of being in control. I've never seen a program where a private field was actually important to its integrity.
I'd have to politely disagree; if your class has mutable state and you want to maintain some invariant relationships amongst the elements of your class then as far as I can see you really need to be able to specify that some operations can only be performed within the class. If your class is immutable then I'd agree that in most cases privacy offers you few benefits.
Yes, that's the theory. In practice it doesn't seem to work out so well. Neither the promised benefits of using private fields nor the promised costs of not having them actually manifest, but the costs of actually having private fields show up in spades. In cost/benefit terms, the benefits are almost entirely theoretical and the costs much higher than advertised. I consider them a major net loss for OO.
I wouldn't really agree that it doesn't work out in practice. For the systems that I work on, the concept of privacy is extremely useful, perhaps we're doing things 'the wrong way' to believe that but I don't think that's all that likely.
What would you suggest as a better alternative, outside of a purely functional approach? Just as an example, suppose that Java's LinkedList class didn't have a cached size and that instead it iterated over its elements to calculate its size each time someone asked for it. I might write a wrapper around LinkedList with two private variables: the LinkedList and an int to cache its size in. Then I would update the int each time an operation was performed on my member list. If I can't make those two members private, how can I ensure that no one accidentally updates the list without updating the size, or vice versa? It mightn't be an unreasonable mistake for someone to think calling mylist.size = 0 would empty the list. What is the 'safe' way of constructing this concept?
> If I can't make those two members private, how can I ensure that no one accidentally updates the list without updating the size, or vice versa? It mightn't be an unreasonable mistake for someone to think calling mylist.size = 0 would empty the list. What is the 'safe' way of constructing this concept?
Conventions. In Python, for instance, non-public fields and methods are prefixed with an underscore ('_'), and a double leading underscore additionally triggers name mangling. This is largely convention, outlined in PEP 8. Documentation tools don't pick up "private" fields. Even when they do, they don't list them in the same section as the public fields.
You could argue about the safety of this approach, but at some point you have to let go of the training wheels and let programmers make their own judgments.
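Applied to the cached-size example above, the convention looks something like this (a hypothetical SizedList wrapper; a plain Python list already knows its length in O(1), so this is purely illustrative):
class SizedList(object):
    """Wraps a list and caches its length; by convention, callers stay out of
    the underscore-prefixed attributes."""
    def __init__(self):
        self._items = []   # "private" by convention only
        self._size = 0     # cached length, kept in sync by the methods below
    def append(self, item):
        self._items.append(item)
        self._size += 1
    def size(self):
        return self._size  # O(1), no walk over the underlying list

lst = SizedList()
lst.append("a")
print(lst.size())          # 1
# lst._size = 0 would still "work"; nothing enforces the boundary, you just don't do it.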
I think the point is precisely to take design entirely out of the picture when you're teaching a newcomer about programming. The fact that you have these problems and can write executable recipes to solve them is already more than a mind-full; everything else only confuses.
Reminds me of my AP Computer Sci class in high school a decade ago.
There were five people in the class, two of them being my brother and me. Two of the remaining three had zero prior programming experience. I could only sympathize with them as we rushed through the "syllabus", learning about everything from pointers to classes to inheritance.
Suffice it to say, they were completely lost and dreaded the class as much as math. I wasn't particularly good at math and could not imagine seeing programming as "math" because of how much fun I was having with VB/ASP (yes, you can chuckle) at home. I don't even want to think what perception of programming those two left with.
I recently finished college with a communications degree, make a decent wage doing programming, and haven't needed to even think of the word "inheritance." Of course there are many great uses for it.
But to assume everyone should learn about it is to assume that everyone wants to be a genius programmer at Google. There is so much gray in between that the curriculum just ignores.
I still get a chuckle out of it, too. Then again, how else do you end up in a debate class full of basketball players who are gonna win the NCAA Championship later in the semester?
This is very true. I write most of my Python code using only procedural/functional constructs and limit the OO parts to a minimum, and that's the way I want to do it.
Now, I haven't known exactly why I want to do this, except maybe that OO constructs soon turn my code ugly, but that's a gut-feeling argument which I've kept to myself. Maybe I've unconsciously absorbed enough of that Java-hating mentality from the atmosphere to let it show up in the way I write Python, too. But maybe I've been on to something as well.
I'm glad to read about someone else thinking in the same way.
Modularity is a fundamental design principle that pre-dates computer science: complex systems should be broken down into independent components that interact through formal abstractions. It's very simple and obvious. Everyone understands it on some level.
OOP is really just taking the principle of modularity and making it a first class language feature. There's no reason not to teach it to a beginner. They are going to be using the concepts right away anyway.
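A minimal sketch of "modularity as a language feature" (Thermometer and Thermostat are made-up names): two components that meet only at a small, formal surface.
class Thermometer(object):
    """One component: knows how to produce a temperature reading."""
    def read_celsius(self):
        return 21.0   # pretend the messy hardware access lives in here

class Thermostat(object):
    """Another component: depends only on the read_celsius() abstraction."""
    def __init__(self, sensor):
        self._sensor = sensor
    def heating_needed(self, target_celsius):
        return self._sensor.read_celsius() < target_celsius

print(Thermostat(Thermometer()).heating_needed(20.0))   # False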
Also, OOP is ubiquitous in the real world and rookie coders need to get involved with the real world as early as possible.
If you build a house you don't need to worry about the inside of the bricks. Objects don't have that property; pure functions do.
I see this exactly the other way around. Objects make sure you don't have to worry about what's on the inside, but if you use pure functions you need to know how the function is implemented, how it processes data, what shape it expects the input to be in and what the output will be.
Using pure functions requires you to seek out lots of information and verify implementations. Not that you shouldn't investigate this when using OOP, but in OOP a lot more of this is formalized in class definitions, IMO making the interaction easier to understand.
The way I see your analogy is that if you use pure functions, the electricity in your house needs to know what material the bricks are made of so that current doesn't accidentally leak and cause a fire. With OOP this is a detail you don't have to worry about as long as the pieces fit.
If that sounds somewhat flawed or too simplistic, I will take the liberty of blaming the poor metaphor. I'm just pointing out that I see the exact same metaphor in the exact polar opposite way ;)
Why would I need to know the implementation and how it processes data?
The entire point of pure functions is that you don't need to know. Referential transparency guarantees that a pure function given inputs X and Y always returns the same output Z. So if I just follow the interface specification about accepted inputs and accepted outputs I can switch in any arbitrary implementation of that function and have my code keep working.
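For instance (a made-up order-by-value function with two interchangeable bodies), the caller below cannot tell which implementation it got:
def order_by_value_v1(pairs):
    """Pure: the output depends only on the input."""
    return sorted(pairs, key=lambda p: p[1])

def order_by_value_v2(pairs):
    """A different implementation honouring the same contract."""
    remaining = list(pairs)
    result = []
    while remaining:
        smallest = min(remaining, key=lambda p: p[1])
        remaining.remove(smallest)
        result.append(smallest)
    return result

data = [("b", 3), ("a", 1), ("c", 2)]
assert order_by_value_v1(data) == order_by_value_v2(data)   # same inputs, same outputs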
> So if I just follow the interface specification about accepted inputs and accepted outputs I can switch in any arbitrary implementation of that function and have my code keep working.
The funny thing is that this is what I would say about OOP, while for functional solutions I often find the specifications much looser, so that I cannot trust this to always be true.
I'm not saying you are right and I am wrong. But I am saying that the arguments about what holds true for what, and especially the arguments that "OOP is bad" or "FP is bad" and so on, just seem to be highly subjective.
And unless someone can come up with empirical evidence to show that one side is actually better/right, I find this debate both pointless and rather amusing. Pointless in that it doesn't give more insights. Amusing in the sense that it makes people reveal lots of the preconceptions and prejudice against things they seemingly don't fully grasp.
What do you have to know about the implementation of the function? Think of something like quicksort or a partition function: (partition 2 [1 2 3 4]) => ([1 2] [3 4]). Why would you need to know anything about that function's implementation? Sure, if the function has a name like myfunction and no docs then you have to look at the code, but that's the same with OO.
In Clojure you can say (doc anyfunction) and you will get a description of what the function does, and it does only that.
You describe the perfect case in OO, where you have only objects with immutable members and pure methods. :)
In the real world you often see that something does not work anymore because a variable changed in some other object, or that it worked at first but breaks after a variable somewhere changed. Inheritance adds a whole new set of problems on top of that.
I'm not saying you can't make a good design with objects; I'm saying that with an FP style it's easy to do the right thing, while with objects it's harder to do the right thing.
> Think of something like quicksort or a partition function: (partition 2 [1 2 3 4]) => ([1 2] [3 4]). Why would you need to know anything about that function's implementation?
Well, I need to know that it accepts two parameters, the first being an integer and the second being an array of integers.
In an OOP solution this would at least be exposed by type signatures, something you don't always see in FP solutions (often due to type inference). Hence you need to check the implementation.
And this is for a simple example. What about a more complex one, where the input data has a more complex shape? Take the following:
var data = [ { id: 1, value: 2 }, { id: 2, value: 3} ];
var ordered = orderByValue(data);
Ignoring the "var data ="-line: Without checking the implementation, how would you know how the input-data should be formatted? What types and properties are needed, and in what format the function accepts the data? A seq? A list? An object with properties? You don't.
In C# the same function would probably be contained in a relevant class and have an explicit type signature, something along the lines of IList<ValueHolder> OrderByValue(IList<ValueHolder> data).
Now I know what it returns and what it expects, and I don't have to worry about it: the type signature tells me everything. Moreover, this is probably already implemented on a specialized collection class, so all I need to do is:
var data = new ValueHolderList();
// populate
var ordered = data.OrderByValue(); // notice -pure- implementation in OOP ;)
Again: the implementation tells me what I need to know. Details are black-boxed and abstracted, and objects are easy to work with. I don't have to worry about functions, context, what they expect, and in which order the glue is expected. In FP you are more commonly exposed to the internals of things and need to figure those things out yourself.
I'm not saying I am 100% right and you are 100% wrong. I'm saying there is lots of grey here which this thread doesn't really seem to cover or acknowledge.
FP is not a silver-bullet and nor is OOP. FP has strengths. So does OOP. Lots of the "weaknesses" I see people complain about with regard to OOP here are what I consider weaknesses in FP and strengths of OOP.
I sometimes wonder if we are living on the same planet.
> Without checking the implementation, how would you know how the input-data should be formatted?
How does OOP solve this problem? Either way, you need to check the type signature of the function. If your IDE does that for you, that's great, but it's not an inherent difference between OOP and FP.
In both OOP and FP, to know what a function (or a method) does, you need (at a minimum) to check its type signature. There are _many_ OOP languages which do not use explicit type signatures, and many FP languages which do; it seems to me that you have confused the OOP vs. FP issue with the issue of explicit vs. implicit typing, or perhaps with static typing vs. dynamic typing. There are languages available to suit pretty much every combination of those properties, so you can easily avoid whatever you don't like.
Further, when you check a type signature, what you read is much more valuable in a pure FP context than in a typical OO context, because OO languages generally (1) allow and encourage the use of state, and (2) do not distinguish in the type system between functions/methods that use state and those that don't. The type signature of a pure function strongly constrains what that function can actually do - so much so that it's possible, and effective, to look up the function you need just by specifying the type that you expect it to have. In a typical OO language, the type signature indicates much less about a method's behavior, because its inputs include not only the parameters you provide, but also the entire "world" at the time it is invoked; similarly, its outputs include the entire "world" in addition to its return value. As an example, a pure function that takes no parameters can only be a constant, but an impure method that takes no parameters could play a song on the speakers, display a window on the screen, or launch a missile.
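To make the "no parameters" point concrete (Python can't actually enforce purity, so treat this sketch as an illustration rather than a guarantee):
import random

def pure_constant():
    # With enforced purity (e.g. in Haskell's type system), a zero-argument
    # function can only be a constant: it has nothing else to depend on.
    return 42

class Player(object):
    def __init__(self):
        self.volume = 0
    def surprise(self):
        # The same "no parameters" shape, but its real inputs and outputs are
        # the object's (and the world's) state.
        self.volume = random.randint(1, 11)

p = Player()
p.surprise()                      # returns None, yet the "world" (p.volume) changed
print(pure_constant(), p.volume)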
FP and OOP certainly have both strengths and weaknesses. I suspect one reason that OOP catches so much flak around here is that people are more familiar with it, and the flaws in tools you're familiar with are easier to see. Unfortunately, when you're not familiar with a tool, it can also be easy to see flaws - flaws that aren't really flaws at all, but simply aspects of the tool you don't yet understand. The result of this, I think, is that one should ignore criticisms of OOP from people who aren't deeply familiar with it, and similar for shallow criticisms of FP. Unfortunately, there are a lot of both on Hacker News.
Look up blog posts by Mark Guzdial. He's a very thoughtful, reflective CS educator who has written quite a bit about students learning to program.
I can't remember where he wrote about it, but apparently, 1) students can use other people's objects just fine, if they are simple (and so from that perspective, OO is fulfilling its general promise), but 2) new students have a nightmarish time trying to both solve new problems AND decompose problems into objects at the same time.
Which is to say, there is absolutely a reason not to introduce this to beginners. It makes them fail and quit.
Incidentally, the same argument could be made for the concepts of abstraction involved in composing new functions - new students can call existing functions just fine, but the skills involved in abstracting a problem and replacing a pile of copy/paste with a parameterized solution are apparently more taxing than you might think.
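A made-up example of the jump that turns out to be hard: going from the copy/paste version to the parameterized one.
# The copy/paste version a new student writes first:
total_apples = 0
for price in [1.20, 0.80, 1.00]:
    total_apples += price

total_oranges = 0
for price in [2.50, 2.10]:
    total_oranges += price

# The abstraction step: notice the repetition, name it, and pass in the part that varies.
def total(prices):
    result = 0
    for price in prices:
        result += price
    return result

print(total([1.20, 0.80, 1.00]), total([2.50, 2.10]))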
I didn't even know this was a big, controversial issue still. Coming from a PHP, Ruby, and Obj-C background (and having taken C and Java classes in college), I always saw it as "use objects when you want things to be easy, use C when you want things to be fast".
Of course, being primarily a web developer and not having implemented a linked list or a bubble sort since college, the anti-OO people probably wouldn't consider me a "real" programmer anyway.
This is a serious question: is there ever an occasion when someone working on a real-world project should be implementing linked lists? I've no experience with C, but surely this stuff comes from libraries now?
The C world has various reusable linked lists, but yes, there is occasionally a reason to implement one yourself: a {data, next} tuple usually fits in a single cache line, whereas some libraries only offer {data pointer, next} tuples, which requires loading in the memory that <data pointer> points to. The performance difference can matter.
That said, some more agreement on reusable code may be useful, especially for things that aren't as easy to implement as linked lists.
This OO complexity becomes more pronounced once you start talking about serious persistence, which today likely means SQL.
Has the object/relational problem been solved? Last I checked, it's still a confusing, complicated, inelegant hack job to reconcile inheritance/polymorphism with relational storage schemes.
I just think of it in terms of OO being important for design patterns; that's when I'd 'use it.' I wouldn't bother trying to explain the theoretical underpinnings to a beginner - the definitions passed around in these kinds of debates are hilariously confusing.