Can I ask a serious question? Why is there an argument about this every other day on this forum? Do you guys honestly think that the language or paradigm you choose is the most important decision you can make? I'm of the belief that programmers are programmers. Procedural, OOP, or functional, doesn't matter. What matters is the ability of your team to understand and build solid software from it. Can the tool accomplish this? Good, then appreciate the fact that other tools can also do this. Is your hammer the best? Yes or No is an opinion. There can be other hammers. And people using them aren't idiots. A bad worker will screw up even with the best tools. And a good worker will create the best software with even the worst tools.
I just feel like this entire industry gets caught up in trivial matters. What language, what paradigm, what editor. Not that these aren't valid decisions that need to be made but these are problems I can solve in a few minutes or a few hours of thinking about it. The big problems I face are people problems. Miscommunication. Lack of accountability. Developers going rogue. Management not understanding. Users not being trained properly. I could go on and on and I may sound like a grumpy old man. But I just do not get it. Choose your platform and be okay with it. Understand there are others. We don't need one platform or paradigm to produce great things.
This industry has a tendency to make a religion out of certain tools and techniques ... IDE people don't ever think about the possibility of working with a simple yet effective text editor. People who have been doing OOP for the last 10 years won't even look at functional programming techniques that have been known for decades and have firm theoretical underpinnings, and they still continue to produce hundreds of PeopleDaoFactoryFactoryImpl classes every day.
The biggest problem is that, as a developer, you simply cannot pick the right tool for the job, because you're not doing your own thing as a solo developer. You're always part of a team. Even when starting a business, you probably need a good cofounder or first employee - finding a smart and dedicated individual is hard enough, while also having preferences about the right tools and techniques is a bitch.
So naturally there is a backlash ... some of us hate IDEs, some of us hate the noun-centric and problematic designs that we see in large Java applications, some of us are aware that there's a multi-processing revolution coming and that mutable state does not scale, etc. And if you work in a team that gets things done, you're really lucky and blessed.
But how can we avoid getting stuck in the status quo, other than expressing our educated opinions on it?
And btw, I actually love OOP, but as a tool, not as a way of thinking about the problems I have.
I find it amazing you talk about mindsets that keep us in the status quo, and then you disparage IDE users and write a post full of Hacker News groupthink regurgitation. I've noticed the type of person who hates IDEs tends to fall much more into the religious and close-minded crowd than those who don't. Most people I know who use IDEs are perfectly competent at an editor like emacs/vim and simply prefer different tools, but that doesn't stop many people who prefer text editors from trying to feel superior by stereotyping IDE users as paint-by-number morons incapable of embracing the beauty of the command line. It's a traditionalist and condescending attitude that I think is holding us in the status quo more than something like OO's popularity, because there are all sorts of cool programming language ideas you can come up with if you're willing to sacrifice the source code being easily edited by a text editor. Yet we see far less experimentation there than we do with functional programming languages, and I think that's because functional languages are perceived as cool and smart but IDEs are associated with the philistine class of programmers.
1. I hate IDEs for concrete reasons, like being unable to work with tools for which your IDE does not have a plugin - and that happens sometimes even for really mainstream technologies ... how's the C++ support in IntelliJ IDEA these days?
2. Switching between IDEs and editors is a productivity killer, especially if you do that switch a lot - instead of being a creator who bends the tool to your will by customizing it to suit your needs, you're going to be just a casual user who cluelessly clicks around.
That's not so bad; however, to be good at what you do you need a certain continuity in the tools you use, otherwise instead of learning about algorithms or design tricks or business you'll be learning about tools all day. Unfortunately this doesn't apply as much to languages and libraries, because those are optimized for different things - although if you've worked on the same CRM for the last 5 years I guess it's not that important ... and I don't know what groups you hang out in, but an IDE user who switches a lot or who is familiar with grep/sed is a rare occurrence in my world.
3. I love Smalltalk-like environments where the IDE is part of your virtual machine and can see and work with live objects and continuations - but your IDE is not like that. Yes, I would love to escape a little from the text-based world we live in; however, the current status quo of IDEs is still text-based, and text editing isn't even something they do efficiently.
4. HN groupthink should be natural, because HN has attracted users with similar interests; that's not bad per se - HN users are a small minority, not necessarily because we are smarter, but because we have slightly different interests ... also, I don't see much evidence of groupthink, because I always see both sides of the coin in conversations here (you're disproving your own point right now).
5. I never implied that my opinions represent THE truth, and I like engaging in such discussions ... instead of reading about the same old farts coming out of the tech darlings of our industry, because in these conversations I might actually learn something.
...Surely, you realize the irony of your position, no? One could easily replace "IDE" with "Programming paradigm [X]" and you'd suddenly be the exact person you were railing against in your original post.
How about this: both have their merits? Statically typed languages do benefit from a good IDE. That said, I personally prefer the cleanness of Sublime Text over a proper IDE -- even at the expense of having to write my own getters and setters! That doesn't mean the other is antithetical to productivity.
Let's end this senseless arguing and just agree that PHP is terrible.
The even bigger irony is that people using terrible languages and paradigms, like PHP and the original Visual Basic, have historically gotten things done, even if that meant shoving a square peg through a round hole :-)
I guess the curse of "enlightened" people might be that we think way too much about such things.
I think PHP and the original VB were actually great platforms. At the pure language level they were not elegant (but good enough); however, each was not just a language but a platform which, as a whole, was great for developing a specific kind of app.
For developers, tool problems are people problems.
Can I get quick answers to questions about this language? Are there plugins for this IDE for the kinds of programs we write? Are there mature, well documented libraries for our problem domain?
For languages, especially, the culture around the language is probably more important than the language itself.
So arguments about tool sets are indirectly judgments on entire programming cultures, and humans are pretty passionate and defensive when they feel like their cultural values are under attack.
Then don't do it. Pick the tools you like and are proficient in and go wild with them. But please don't preach to the rest of us, day in and day out, about how horrible it is. That is what is happening here. We actually find OOP very useful.
I completely agree with everything said, but I also think that conversations about languages and paradigms are worth having.
Yes, it's true that you can settle the decision about which language or paradigm to use in a few minutes or hours, but your options are not only limited to those you understand; they are also limited by the depth of your understanding.
A lot of the time the decision is likely nothing more than personal preference, but if there is a chance that one paradigm will work better for solving some problem or class of problems, I'm interested in hearing about it.
Arguing (or watching people argue) trivial matters gives my brain a break from the serious things I would otherwise be working on. That's probably what drives most people into these discussions and onto their soap boxes too. It's a lot more fun than trying to figure out why my app can't find that damn configuration file.
I think it's because the different platforms are incompatible, and a large compatible community of software is a value multiplier (both in terms of the longevity and the usability of your code). You want people on your platform. Similarly, it takes time and energy to master a new methodology, and that's time/energy not spent if other people work the way you do.
To be sure, it is all bits and most languages are capable of rearranging them as needed. But paradigm is important. If you start every solution by asking 'What classes am I going to write?', your ratio of typing to functionality is not going to be very good.
I had a good chuckle at the end of the referenced material:
"The object-oriented programmers see the nature of computation as a swarm of interacting agents that provide services for other objects. Further, the sophisticated OO programmer lets the system take care of all polymorphic tasks possible. This programmer sees the essence of object oriented programming as the naive object-oriented programmer may not."
Hey Dan! What's wrong with the program?
Sorry, my swarm of interacting agents had a polymorphic pile-up on aisle 7. Dangling pointers everywhere. It's not pretty.
Snarky jokes about buzzword soup aside, I love OO. We simply need to be aware that OO lets us "play" at building complicated things when 1000x simpler solutions may be available. OO works best for large-scale, lots-of-people projects. A lot of business projects are like that. Many personal and startup projects are not. The trick in loving any particular tool in the toolbox is knowing when not to use it. So the example is a little bit unfair -- it's tough to create a real-world example program of sufficient complexity to use in OO examples. All the examples look like architecture astronautery.
"OO works best for large-scale, lots-of-people projects."
Nice! But, I think it is the other way around. Throw a lot of people at something with some software architects and you'll probably have a large-scale, lots-of-people, "OO" project.
I worked at a company that was between small and mid size and had ~150 developers. We bought another company that basically did the same with 10x fewer, and that group won out over ours. We had some excellent developers that went on to other excellent shops, and I was proud of the work that we did there and learned a lot about process. It was the best run development team I ever worked for and probably ever will.
I was an "OO" developer, now I'm just a developer. Not because of that learning experience, but because I found Ruby and I no longer see the benefit in intentionally writing overly large applications. Ruby is truely more OO than Java, imo, but I don't write like I used to which I think is what is being called "OO" (lots of packages and interfaces, pattern usage, lots of maven projects).
I once had a heated argument with Rob Pike over lunch about Java when I was a naive grad student; suffice it to say that I thoroughly got my butt whooped on inheritance. My argument was, I think, that inheritance basically is composition, just with some self recursion thrown in. Keep in mind that programming is always about composition, and we are just arguing about different styles of such.
OOP has recently been thrashed in the mud in the academic community, where it was never completely accepted. Now, industry always loved OOP, not because it was new, but because it provided some stronger guidance on what they were already doing (composing software out of stateful parts), and was much more pragmatic than its older, more aloof sibling (FP). People were already thinking in terms of objects, probably even without using Simula, Beta, Smalltalk, C++, etc. ... vtables were even already being crafted like crazy in C.
I agree that object thinking is just another tool in your bag, sometimes you really need lambdas and should use those. Sometimes you want raw tables. Any program worth its salt is going to incorporate many different styles, and avoiding one style on ideological grounds is ridiculous.
I think inheritance basically _is_ composition plus self-recursion.
You get into trouble because of the self-recursion. Any time a base class method calls another base class method, that call is part of the class's interface, because when you extend the class, the call will be redirected to the subclass' method.
But how many base classes have documentation for every self-call that can be redirected in such a manner?
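To make the redirection concrete, here's a minimal Python sketch of the trap (the Account names and the fee rule are invented purely for illustration):

class Account:
    def __init__(self, balance=0):
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount

    def deposit_all(self, amounts):
        # Self-call: if a subclass overrides deposit(), this loop is
        # silently redirected to the override.
        for amount in amounts:
            self.deposit(amount)

class FeeChargingAccount(Account):
    def deposit(self, amount):
        super().deposit(amount - 1)  # charge a 1-unit fee per deposit

acct = FeeChargingAccount()
acct.deposit_all([10, 10, 10])
print(acct.balance)  # 27, not 30 -- the base class's internal call became part of its interface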
> You get into trouble because of the self-recursion.
Which is why (savvy) object-oriented programmers have largely moved on to interface polymorphism. Inheritance polymorphism has its uses, which is why there hasn't been a whole lot of effort to rally behind languages that lack it, but the general consensus is that you should really only use inheritance when inheritance is what you really want to use.
Unfortunately it's true that there's a whole lot of OO code out there that was written before people learned this lesson. And OO's popularity with pragmatists combined with its relative lack of popularity with academics means that a lot of code doesn't reflect lessons related to the pitfalls of inheritance polymorphism that academics figured out a very long time ago, such as the Liskov Substitution Principle.
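Roughly, interface polymorphism just means depending on a shape of behaviour rather than on a concrete base class. A quick Python sketch with invented names (in Java or C#, Logger would be an explicit interface; in Python the same idea is usually expressed with duck typing):

class ConsoleLogger:
    def log(self, message):
        print(message)

class NullLogger:
    def log(self, message):
        pass  # silently discard

class OrderProcessor:
    def __init__(self, logger):
        # depends on "anything with a log() method", not on a concrete
        # base class whose behaviour could be accidentally overridden
        self.logger = logger

    def process(self, order_id):
        self.logger.log("processing order %d" % order_id)

OrderProcessor(ConsoleLogger()).process(42)
OrderProcessor(NullLogger()).process(43)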
My understanding is that composition can be changed dynamically whereas inheritance cannot. It has always seemed to me like composition is more flexible.
To give concrete examples: using composition, if instance A has a B, then at runtime you can replace the pointer to the B with a pointer to a C such that now A has a C. You basically change the type of A by changing where messages get sent / delegated.
With inheritance, you'd have B is an A and C is an A, and the relationships here are static unless you start messing around with reflection and dynamic class loading and stuff.
The tradeoff for the flexibility of composition is more verbose code, I think.
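Concretely, something like this in Python (the names are made up):

class LoudGreeter:
    def greet(self, name):
        return "HELLO, %s!" % name.upper()

class PoliteGreeter:
    def greet(self, name):
        return "Good day, %s." % name

class Receptionist:
    def __init__(self, greeter):
        self.greeter = greeter            # "A has a B"

    def welcome(self, name):
        return self.greeter.greet(name)   # the message is delegated

r = Receptionist(LoudGreeter())
print(r.welcome("Ada"))      # HELLO, ADA!
r.greeter = PoliteGreeter()  # swap the B for a C at runtime
print(r.welcome("Ada"))      # Good day, Ada.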
Dynamic inheritance is not unthinkable, I've used it in my languages before (or see research languages like Cecil). Of course, you can do this easily in dynamic languages like ruby.
Actually, can you really do dynamic inheritance in ruby? I don't _think_ so. There are ways to apply inheritance dynamically at runtime of course (including with module mix-ins, which are basically just inheritance even though ruby pretends it isn't), but I don't think you can _undo_ inheritance at runtime.
You can easily simulate dynamic inheritance in ruby.... with composition, using delegate-like patterns.
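In Python (which the other code in this thread happens to use), the same delegate-like idea looks roughly like this - just a sketch, names invented:

class Dog:
    def speak(self):
        return "woof"

class Cat:
    def speak(self):
        return "meow"

class Delegator:
    def __init__(self, parent):
        self._parent = parent

    def __getattr__(self, name):
        # Only called for attributes not found on Delegator itself,
        # so _parent behaves like a swappable superclass.
        return getattr(self._parent, name)

pet = Delegator(Dog())
print(pet.speak())    # woof
pet._parent = Cat()   # "re-parent" at runtime
print(pet.speak())    # meow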
> but I don't think you can _undo_ inheritance at runtime
I'd be surprised if you couldn't do it in Ruby. You certainly can do it in Perl because it uses a package (class) variable called @ISA for its inheritance lookup.
And because package variables are dynamically scoped you can do this:
{
    # remove everything except father from inheritance
    local @Some::Class::ISA = $Some::Class::ISA[-1];
    $some_object->foo;   # finds father foo() only
}
$some_object->foo;       # runs first foo() found in inheritance
I think OOP really took off in industry because it was easy to sell third-party modules. You could "plug in" this module that you purchased and it was easy to hook up. Markets have a way of doing that: the solution that wins isn't necessarily the "best" solution but the one that's easiest to sell.
I don't think the component revolution has happened. We got frameworks to be sure, we even got...gasp...libraries with our languages. Maybe for that reason, OO languages (Java/Python/Smalltalk) were more likely to come without their batteries included. I'm guessing inheritance helped out a bit with that.
But I don't think objects are really especially about third-party reuse or even any reuse at all, but they are more about enabling easy problem decomposition (i.e. break up your problem into a bunch of interacting objects).
OOP definitely has something about it that favors reuse of code.
I think you got it backwards: it's decomposition that OOP has a problem with. It's not easy to point fingers at exactly why that is (it's probably all the mutable state, which leads to entanglement, where components only seem independent of each other when in fact they aren't), but you can find anecdotal evidence of this happening in the wild ... look at frameworks like Django and Rails, with tons of reusable plugins available, and yet a humongous effort went into Rails to make it modular (e.g. such that you can import parts of it, like ActiveRecord, into other non-Rails projects, or easily replace ActiveRecord with something else), while Django never achieved that.
>Maybe for that reason, OO languages (Java/Python/Smalltalk) were more likely to come without their batteries included.
Surely you mean "WITH their batteries included"?
That is the very situation in Python (and its slogan, in fact), and of course Java has the most extensive "included batteries" of any language, in the form of the JDK API.
I still believe that the book which taught me the most useful lessons about object oriented programming was Code Complete (first edition).
It was written before OO programming was popular. The concept is not described there. But if you've read and understood its description of things like abstract data types, it is obvious where and when OO is going to be an extremely good hammer to use. And - just as important - you're not going to wind up endlessly searching for nails for your OO hammer.
When I see things like http://www.csis.pace.edu/~bergin/patterns/ppoop.html it is clear to me that someone does not understand the value of simplicity. I don't care what complicated theories you have about what kinds of code are easier to refactor. Less code is generally going to be easier to change later. If need be you just rewrite it.
> Less code is generally going to be easier to change later. If need be you just rewrite it.
That's in general an unhealthy attitude, because later might be too late to rewrite it, as complexity has a way of creeping in, as simplicity requires eternal vigilance and leadership with an iron fist, something which most teams lack.
You should watch the Hammock Driven Development presentation, by Rich Hickey, in which he makes a case for the value of thinking about the problem before acting. This man is in fact brilliant in how he delivers presentations, so while you're at it, watch Simple Made Easy, in which he argues that simplicity ain't easy.
TL;DR - easy to write code is not necessarily simple. But simple is objective, so you know you have it when you achieve it.
> That's in general an unhealthy attitude, because later might be too late to rewrite it, as complexity has a way of creeping in, as simplicity requires eternal vigilance and leadership with an iron fist, something which most teams lack.
If you want to assume incompetent programmers, then I agree. If you assume competent, experienced programmers, there is constant willingness to say, "This worked here for a bit but it is time to sit down and do it right."
I've had the pleasure of working with the latter. If you have that pleasure, then a willingness to find a simple solution where it makes sense pays off in spades. And it is not unhealthy.
But there is a definite element of, "If you don't know what you're doing, this is a principle that can let you make horrible decisions without realizing it."
Another reason why simple code is not always better is that in a typical project there are LOTS of simple decisions that need to be made. If you leave them for later, they will become hard to find, hard to integrate with other "simple" decisions you made, etc.
Indeed and that's because in order to tackle complexity and scale the development process, you need layering, composition and to also avoid cyclic dependencies and complecting too many things at once.
The Unix philosophy, in which things should do one thing and do it well, is a good case-study of good design, with the ugly parts being the instances in which this philosophy wasn't respected (not mentioning all the wheel reinvention going on).
However, it's not easy to build things that do simple things and do them well, and then to build on top of that. You need experience, forward thinking and resources.
And OOP sometimes helps, but sometimes it makes things worse. For instance, it encourages a bottom-up design. But other times a top-down process is better - one in which you start at the top by outlining/creating a domain specific language, then implement the layer below it that knows how to speak that language, then rinse and repeat until the layers are as simple as possible, no more layers are needed, and the implementation works.
Hammock Driven Development is a great way to reinvent things from scratch every few users, building elegant systems that don't ever grow to solve large problems.
Code Complete was a huge influence for me, but the book (understandably) focused on procedural modules and abstract data types. I still refer to some of the book's cartoons about information hiding when I think about code. :)
The book that brought me OO enlightenment was Eric Evans' Domain-Driven Design: Tackling Complexity in the Heart of Software. The book is quite dry and, in the small, it seems to describe nothing new, but by the end it was quite deep.
Rob Pike is a very smart guy, but he works in relative isolation from other programmers. He usually creates his own tools and works with people who are like-minded. That is why he thinks it is so easy to just go directly to the code and achieve what he wants by writing the right function to solve the right problem.
OO was created to deal with the general issue of organizing lots of code around a reasonable design. It is a tool for industrial-level programming, where there are thousands of programmers, many of them with below average skills, contributing to a single codebase. In that aspect I think OO has been very successful, because it provides a framework to simplify design decisions.
I would be very proud - and I suggest most people should be - if I had written so many complete, useful and novel tools for myself (and others). It shows great proficiency and understanding of how to get things done. Which is arguably the most important skill a programmer can have (read Linus' C++ rant too ...).
Personally I think that OO was simply an attempt at getting people who have a more dominant right half of the brain ("less intelligent people", if you forgive the conclusion) to structure their code better, because for some reason working with OO patterns and methods supposedly appeals to that half more. That mindset was, to a significant part, shaped at university, where this was a bit of a running gag in the 90's because the notion was actually taught.
In other words, OO was inflicted upon us to get dumb people to write code too. Yes, it's offensive, but ... make your own observations.
I was under the impression he worked in a relatively small team working on code that's used by lots and lots of people, but only written by a few people.
The OP's point is that when you have people you don't trust working on your code (and does he really?) there need to be controls somewhere to keep everything from getting screwed up.
OO is one way of doing that; a good set of unit tests + CI is another.
I work with him at Google. Google has a single codebase shared by tens of thousands of engineers. People have areas of that that they own, but it is a wrong characterisation to say Rob "works in relative isolation from other programmers".
At Google, how large is the Go community compared to the C++ or Java community, or the community working on a shared application code base like the Search engine?
Reminds me of Linus Torvalds. He works on top of a kernel he has (initially) written himself, using a VCS he has (initially) written himself, and using an ancient heavily-customized editor [0,1]. Somehow his tools are more successful than Pike's. However, I doubt we could convince Torvalds to create a programming language.
OOP vs FP is better seen, I think, as a duality rather than as a dichotomy. Sort of like wave-particle duality. Sometimes you find it convenient to think "wave", and at other times "particle", but the reality of the system is neither. These are just convenient and equivalent constructs we use.
Some other such dualities are - code vs data, data structures vs algorithms, closures vs objects. Enlightenment lies in seeing the false nature of these dualities. (Now, say "Om" people :)
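The closures-vs-objects duality is easy to see in a few lines - here's the same counter written both ways in Python, just as a sketch:

class Counter:
    def __init__(self):
        self.count = 0

    def __call__(self):
        self.count += 1
        return self.count

def make_counter():
    count = 0
    def counter():
        nonlocal count
        count += 1
        return count
    return counter

c1, c2 = Counter(), make_counter()
print(c1(), c1())  # 1 2
print(c2(), c2())  # 1 2 -- same behaviour, two "paradigms"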
For another fun view on data structures, check out the "numerical representations" chapter of Chris Okasaki's "Purely Functional Data Structures" [1], where he draws parallels between number representations and data structures, which I found fascinating.
I can only read it as a joke -- the title is almost 'poop' and it's insane to write all the classes instead of the initial few lines. But I really see that one of the authors has more 'OOP forewa' articles where he's fully serious:
Actually, I have the impression that I relate very well to most things Perl; I use it very often for small programs. I believe that most serious Perl people have a sense of humor, if you know what I mean. And exactly because of that bias, I first believed that the article absolutely must be a very successful joke, and didn't understand why Rob isn't sure. Only after seeing the rest of the material, I wasn't sure myself. Maybe somebody should actually ask the authors.
Do read the paper! Note that I would consider this an elegant solution (with the string literals replaced by names for them):
The big problem with the ppoop paper isn't that OO is bad, it's that it confuses OO vs Procedural with YAGNI vs Extensibility. (also, both the "hacker" and "OO" solutions are lame as OP points out)
If you actually have a reason to believe you need a gold-plated general-case OS identification system then throwing all those patterns at it is no worse than the nested-if spaghetti procedural code that would be the naive procedural solution.
But in both cases it's just a stupid answer to a stupid question ("How do I overengineer a string->string table lookup?")
You're dressing up the scotsman here (the tell being "it confuses OO ..."), and I think completely missing the point of the paper.
Obviously truly complicated systems will need complicated solutions, and OO has some not-completely-insane things to say about solutions like that.
But the real world doesn't see things like that very often. Real world programming is made up of thousands of tiny problems not altogether unlike the hack shown here. And real-world OO nuts, faced with these real-world problems, tend to solve them badly.
So sure, "How do I overengineer a table lookup" is a dumb question, but that's not the question posed. The question is "How do I avoid overengineering a table lookup", and the answer is "avoid OO".
But the right answer isn't "avoid OO", at least not any more than the answer is "avoid if statements". The answer is to use the library function your language already provides to solve the problem in front of you and get on with your life. This is something applicable to any paradigm or language.
After reading the article referred to by this, I just had to do this. You know that urge ;)
def osdiscriminator(string):
    good = "This is a UNIX box and therefore good"
    bad = "This is a Windows box and therefore bad"
    unkn = "This is not a box"
    boxes = {"Linux": good,
             "SunOS": good,
             "Windows NT": bad,
             "Windows 95": bad}
    if string in boxes:
        return boxes[string]
    else:
        return unkn
Easy to extend, simple and therefore maintainable without any inheritance or GoF design patterns. Oh, and about one minute of work.
Matt Wynne raised a good point in a recent talk about hexagonal (Ports & Adapters) architecture that I agree with.
People are exploring new ideas for building software. Why is that so wrong? Instead of attacking people for adding unnecessary complexity or doing it "wrong", why aren't we praising them for thinking about new solutions and approaches to problems in software?
Basically, we learn from the people before us. I've noticed this, and you learn through code-review that "that's a bad thing to do"; if you are lucky, then you learn interesting edge cases.
I think people learn what is "right" by a combination of cleaning up other people's crap and dealing with their own crap, and the difficult thing is to purposely try "wrong" things and push boundaries.
Stephenson, G. R. (1967). Cultural acquisition of a specific learned response among rhesus monkeys. In: Starek, D., Schneider, R., and Kuhn, H. J. (eds.), Progress in Primatology, Stuttgart: Fischer, pp. 279-288.
Why is it OK to criticize OOP using non-pure OO languages like Java, C#, or C++? Pure functional nut cases like Don Stewart and Simon Peyton Jones (whose favorite pet example of a dangerous side effect is a nuclear holocaust) don't have to defend FP from critics whose only experience with FP comes from non-pure functional languages like Python, JavaScript, and now Java and C++. Yet here we are again, with another under-informed, overly-prominent person publicly airing his grievances with Java as if all OO languages bear some collective guilt for them.
Smalltalk is not a hard language to learn. Haskell is far, far more complicated. You can download Squeak/Pharo/whatever and learn the language in a few hours. Why in 2012 is Java still such a potent argument against the entire paradigm of OOP, both class-based and prototype-based? Why does OOP alone have to put up with this sort of scurrilous, intellectually lazy and dishonest propaganda against it?
I agree fully - reading through these comments is frustrating, as the languages mentioned can hardly be called Object Oriented.
Learning Smalltalk is something everyone who really wants to understand OOP should do. C++, Java and the like really should be called Class Oriented languages, as demonstrated by the often very deep class hierarchies.
It's not about the classes, it's about the communication between objects. Objective-C is far closer to Smalltalk than most of the languages mentioned in these comments.
The downside to learning Smalltalk - the realization that a 30 year old environment is more advanced than whatever modern tools you will need to return to to earn a living.
I don't get this hate toward OOP. OOP is just a tool to organize your code, like a function is a way to organize your code. When the whole program is 5 lines long, putting them in functions would be unnecessarily complicated. Would someone seeing that conclude that functions are unnecessarily complicated in general?
My dislike of OOP is its emphasis on state. If you do foo.bar(), then you have changed the state of foo, even for other references to it. If you do foo=bar(foo), then the state of the original data is unchanged, while code below behaves the same way. In most cases, I find minimizing state minimizes bugs.
I do however like objects, and think that they can have a very good role in code.
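In Python terms, the distinction looks roughly like this (Box and with_item are made-up names, just a sketch of the two styles):

class Box:
    def __init__(self, items):
        self.items = items

    def add(self, item):            # foo.bar(): mutates in place
        self.items.append(item)

def with_item(box, item):           # foo = bar(foo): returns a new value
    return Box(box.items + [item])

shared = Box([1, 2])
alias = shared
shared.add(3)
print(alias.items)        # [1, 2, 3] -- every reference to the object sees the change

original = Box([1, 2])
updated = with_item(original, 3)
print(original.items)     # [1, 2]    -- the original is untouched
print(updated.items)      # [1, 2, 3]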
There's nothing wrong with state. State is a fact of life in programming. Even a pure functional program has state - the parameters passed among functions.
I guess you meant mutable state. You don't have to use mutable state with OOP - just create a class that allows state initialization in the constructor but nothing else, with none of the methods changing the state.
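For instance, a rough sketch of such a class in Python (made-up names; a frozen dataclass is just one convenient way to enforce it):

from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Money:
    currency: str
    amount: int

    def add(self, other):
        # returns a new instance instead of modifying self
        assert other.currency == self.currency
        return replace(self, amount=self.amount + other.amount)

a = Money("USD", 100)
b = a.add(Money("USD", 50))
print(a.amount, b.amount)   # 100 150 -- a is never mutated
# a.amount = 0 would raise dataclasses.FrozenInstanceError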
The given example is pretty damn stupid though. I can't believe anyone is taking a factory to return a string seriously. The procedural example isn't a solution either (maybe if you're stuck in the 80's).
Any "given X return Y" is a mapping problem, thus all you need is a hashtable and associated map function. It can be implemented equally well in ANY paradigm.
I think OOP polymorphism added some value to programming languages; people who criticise OOP itself (and not its more or less useful application) have mostly not understood it. One can use it where it's useful; elsewhere one can use better techniques.
Tools in your toolbox. Use them as you wish based on whatever criteria fits the moment and the project.
I made hundreds of thousands of dollars with a program I wrote in 8051 assembler. To be fair, it was part of a larger hardware solution. Still, the UI portion of the code was all assembly language.
It wasn't until well after the product was in the market and selling very well that I converted it to C. I did so mainly to make it easier to maintain and expand.
Could this have benefited from OO? Who cares?
To add insult to injury, the workstation portion of the solution was written in --sit down-- Visual Basic! Yeah! VB. Did it matter that it wasn't Visual C++? Nope. Was it ever converted to VC++? Are you friggin kidding me? Nope. It was making plenty of money as it was.
Plenty of other projects were done using other languages, such as APL, Forth, Lisp and, yes, C++.
My point is that none of this really matters. People have gone to the moon without OO. Whole banking systems have been run without OO. OO has its place. And, when applied correctly, it can be a lot of fun to work with.
Digging through one of the links in the posted article there's an article that suggests new programmers should be taught Python without the OO stuff. What? Crazy.
Every new programmer needs to start with C. In fact, I am convinced that every new programmer needs to start with C and be tasked with writing an RTOS on a small memory-limited 8 bit processor. And then write several applications that run within that RTOS.
Then give them a budget of n clock cycles and m memory bytes and have them create a solution for a particular problem that barely fits within these constraints.
I would then expose them to Forth and ask that they re-write the same RTOS and applications.
Then I'd move them up to Lisp.
From there move into one of the OO languages. My first OO language was C++, but I suppose today I might opt to teach someone Java or something like that. Definitely not Objective-C. Keep it simple.
The above progression will expose a new programmer to tons of really valuable ideas and approaches to solving problems.
Then I'd get serious and ask them to write something like a genetic solver on a workstation in all of these languages and optimize each solution for absolute top performance (generations per second) first and absolute minimal memory footprint as second batch. Lots of invaluable lessons in that exercise.
Now you have a programmer that can identify which technology to use under what circumstance and for what reason. This is a programmer who knows how to get a 100x or 1000x performance gain out of a piece of code or how to get something done 10x faster at the expense of raw performance. Here's a programmer who understands exactly what is happening behind the code.
And, in the end the most important thing still is data representation. You can make a program 100 times harder to write if you choose the wrong representation for the problem being solved. Just like the first article points out: search a small table and the "hacker" solution is almost trivial.
Can you give links and resources to support accomplishing the "CS curriculum" you suggested?
> Every new programmer needs to start with C. In fact, I am convinced that every new programmer needs to start with C and be tasked with writing an RTOS on a small memory-limited 8 bit processor. And then write several applications that run within that RTOS.
Besides K&R, which resources will help someone accomplish such a task?
> Then give them a budget of n clock cycles and m memory bytes and have them create a solution for a particular problem that barely fits within these constraints.
ditto
> I would then expose them to Forth and ask that they re-write the same RTOS and applications.
What Forth resources are worth learning from? Why Forth?
> Then I'd move them up to Lisp.
What Lisp resources?
> From there move into one of the OO languages. My first OO language was C++, but I suppose today I might opt to teach someone Java or something like that. Definitely not Objective-C. Keep it simple.
> Can you give links and resources to support accomplishing the "CS curriculum" you suggested?
I'd have to Google it just as you would. Here are a few points.
C - Well grab a copy of K&R
Microcontroller: Get over to SiliconLabs (http://www.silabs.com) and grab a development board for something like their C8051C020 along with the free tools. Or you could buy a Keil compiler. Study their sample code in depth.
Lots to learn here: configuring the processor, i/o ports, interrupts, timers, counters, clock frequency, serial I/O ports, etc. Make an LED blink. Then make it blink when you press a button. Output an 8 bit counter to a bank of eight LEDs. Have the LEDs count up and down in binary. Have the LEDs scan left-right-left. Figure out how to read a potentiometer with the A/D. Use the potentiometer to control an output that runs an RC servo. Get creative. Have fun. Google is your friend.
RTOS. Get over to Amazon and get yourself a copy of "MicroC OS II". Once you are done with that book --and fully understand it-- you'll find yourself flying at a different level.
Forth: It's been a while since I've used Forth. "Thinking Forth" used to be a go-to resource. I would imagine it is still of value. Not sure what Forth compilers are available for free today. Google it.
Go over to Amazon and get yourself a copy of "Threaded Interpretive Languages". Either learn a little assembler or use C and the aforementioned microprocessor board to bootstrap your own Forth on the C8051C020. You might want to find yourself a copy of "Embedded Controller Forth For The 8051"
I actually learned Lisp while having to customize AutoCAD. They called their flavor of Lisp "AutoLisp". It was a really neat way to learn it because you were dealing with complex graphic entities from the start and Lisp was excellent at manipulating them.
Not sure what the current favorite Lisp implementations might be for different platforms. Google it.
OO Programming: Get a copy of Java and one of the many excellent tutorials out there. Go to Amazon for a copy of "Design Patterns: Elements of Reusable Object-Oriented Software". Read it cover to cover. Five times. While you are at it, get a copy of "Code Complete".
I know this is expensive, but, in general terms, surround yourself with multiple books covering the various subjects. I never buy just one book. I might buy five to ten books covering the subject from different angles. Sometimes you don't necessarily read them cover to cover, but you use them for reference as you move forward. The 'net doesn't always get you there. Searching for solutions on SO does not necessarily teach you what you need to know in order to understand how you got there. Someone can show you the code you need to write an Objective-C class that complies with a Protocol, but, do you really understand the five why's and how's?
For example, if I need to code an FIR (Finite Impulse Response) filter using a new language or on a new platform I can usually do reasonably well hunting for code examples on the 'net. The important part here is that I already know what an FIR filter is and how it is supposed to work. I have implemented them from scratch in anything from hardware to DSP's. So, scouring the 'net for code snippets really becomes a way to help me discover how to express these ideas in this new language rather than me trying to both learn the language and about FIR filters at the same time. Hopefully that makes sense.
I'd much rather hire a programmer that has depth of knowledge in algorithms, patterns and the general subject matter we might be working on (say, as an example, inertial control systems) and tells me that he or she is not fully versed in the language that we have to work with than one who knows the language in his/her sleep but lacks the depth. You can google the language. You can't google the other stuff.
Sorry, this isn't a complete answer. I'd say, above all, as you learn you really have to feed your curiosity with projects you are interested in. If writing a solution for banking doesn't get you excited it is likely that you are not going to be inspired to learn anything new while forced to write that code. If, however, figuring out how to make a robot walk inspires your curiosity, you'll find yourself coding for sixteen hours at a time trying hard to solve problems and learn as you go.
Thank you for this, it's really helpful. I couldn't find the C8051C020 at silabs and wondered if you meant the C8051F020 [0] instead. While this was already a long and helpful post I was wondering if you had any plans to elaborate further (maybe somewhere else) on programming for micro-controllers and RTOS programming.
Yes, the F020 is it. BTW, the only reason I recommend that one is that it is nicely loaded with peripherals that are useful. It isn't the simplest uC to use because of the way you setup the I/O. If you are playing with the development board though this isn't a problem. If you are laying out a board you better have the I/O configuration nailed down or you can shaft yourself inside a microsecond. Don't ask me how I know this.
Let me see what else I can say about programming microcontrollers and RTOS's. I really enjoy programming at this level. It can be a lot of fun and, as a hobby, there are tons of places where one can apply them.
I've used other uC's in their line-up with good results. Of course, Microchip has a number of good ones too. Probably cheaper. It's been a while since I've used one of theirs.
Once you get to a certain level (and to a certain type of problem) moving up to 16 or 32 bit uC's is a good idea. You can even run really compact versions of embedded Linux on some of these. I've played with chips from TI (MSP430 family, if I remember correctly) and others. But first steps are better taken with simpler 8 bit MCU's. It is important to learn what you can with just a few bytes of code and data.
I am currently teaching my son embedded programming using a number of tools. We started out with Lego Mindstorm and their graphical programming approach. That didn't last long. I am sure there are people who love it, but I find it incredibly restrictive. It is very, very difficult to express and describe complex logic with icons and wires.
I've had the same experience at a much higher level. When developing with FPGA's you have the option to use graphical (schematic) tools to describe functionality and have the compiler infer circuitry. When I was getting started with FPGA's I thought it'd be great. It didn't take long to realize that the graphical approach is limiting and hard to maintain. A few pages of Verilog (a C-like language used to describe circuitry) is light-years away in terms of usability, expression and maintainability.
Digressing. So, we moved to RobotC from Carnegie Mellon:
Very cool. Neat way to learn some level of embedded programming without having to go too low level. You have high-level commands to run the motors, read sensors, buttons, etc. It makes it easier to focus on learning the basics of programming, robotics and algorithms.
I also have him cranking on Java through the use of another wonderful tool: GreenFoot, from the University of Kent.
I couldn't recommend GreenFoot more as a really neat way to learn about OO programming and programming in general. Yes, I know, I may have sounded like I was jumping on the anti-OO bandwagon in earlier posts. That is not the case. I will use any tool at hand and happen to like OO very much when it makes sense.
There are tons of very nice tutorials for GreenFoot that have you writing mini games within a few sessions. Very, very cool tool.
The next step is to move him to a PIC or SiLabs development board and go back to pure C. Start from scratch and write the code to do things like blink LEDs, sense switches and switch arrays, run a double-buffered serial port, etc.
The first project is always something like making a single LED blink.
Then you move on to making eight LED's light to reflect an internal 8 bit value.
Then you turn that value into a counter and you use various techniques to make the count visible. A uC running at 20+ MHz can count so fast that the LED's would simply look like a blur. A beginner might use loops to waste time between output events in order to slow down the count. Then you move on to using timers and then timers with interrupts.
The next step could be to build a simple clock or stop-watch with a set of seven-segment LED displays. Now it starts to feel like a real project.
After that, maybe an RC servo driver and then a multiple RC servo driver. Then make it so that it can be commanded through a serial port.
These are just a bunch of ideas. They have a progression of design complexity in both the electronics and software. With the right guidance and tutorials there's a lot to learn right there and we are not into RTOS's at all.
Eventually you get to a point where you need to start doing more than one thing at a time. This is where one starts to learn about time-slicing the microprocessor in order to do a little bit of one task and then the next and so on. One common pattern is to break-up execution into, say, 1ms slices and hand that millisecond, if you will, to a given task. The task would have to quickly do something and then give control back to the scheduler so that the next task can get a shot. There are a number of ways to do multitasking, each with its own pros and cons.
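A very rough, desktop-Python sketch of that time-slicing pattern - on a real microcontroller the tick would come from a hardware timer interrupt, and the task names here are invented:

import time

def blink_led():
    while True:
        print("toggle LED")     # toggle the LED pin here
        yield                   # hand control back to the scheduler

def poll_serial():
    while True:
        print("poll UART")      # read a byte from the UART if one is waiting
        yield

def scheduler(tasks, slice_ms=1):
    while True:
        for task in tasks:
            next(task)                      # run one short step of this task
            time.sleep(slice_ms / 1000.0)   # stand-in for the 1 ms tick

# scheduler([blink_led(), poll_serial()])   # round-robins forever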
Example: Say you are building an auto-pilot for an RC plane or helicopter. You are going to need to monitor commands from the pilot (via radio), temperature, voltage, acceleration, gyroscopic and magnetic sensors as well as possibly a GPS receiver and other resources. And you are going to have to do this very quickly in order to create the illusion of doing it all simultaneously. This is where multitasking and an RTOS come into play.
Multitasking doesn't necessarily demand an RTOS. I've done plenty of projects where lots of I/O is being serviced without a real RTOS in place. There are well understood techniques to do this sort of thing.
The field is vast and can be very interesting and rewarding. I'll probably have my son go through the above (not the auto-pilot, that was just an example) and then task him with something like building a sprinkler controller or some such project entirely on his own at some point.
I am hoping that by the time he gets to college he will have programming in his DNA. This will allow him to focus on higher-level work. He seems to be developing a real interest in robotics, which could lead to AI and other interesting fields of study.
Incidentally, I am doing all of this because, of course, I want to teach my kids everything I know. I am also very interested in teaching other teenagers about programming, robotics and technology in general. So, I am using my oldest son as a test subject to develop the framework for a potentially neat tech summer camp for teens to be launched here in the Los Angeles area next year. We'll see how that goes.
Thanks again for a great post. Not sure this really adds anything to the conversation, but felt that I should thank you again. For what it's worth, I never thought you were dismissive of OOP; your original post seemed rather clear, if rather alpha in tone, that you thought the important thing was to use the tools at hand.
Funnily enough I was asking for the same reason that you had in mind: I have a child with an interest in robotics who recently hit the limits of Mindstorms. Alas it has been 15 years since I last did any embedded programming so I am rather out of touch. (Indeed it has been a while since I have done any programming.) The silabs dev board seems to be an amazing deal in terms of the amount of peripherals and documentation available, thanks for pointing that out. I think I shall get one and have a fool around, it looks like great fun!
It still is, though "Starting Forth" is probably a better starting point. There's a variety of Forths available today, like GForth: http://www.gnu.org/software/gforth/
Interested programmers might want to check out Factor as well; it's heavily inspired by Forth: http://factorcode.org/
Why is this comment ranking above the much more insightful and beneficial thread started by malandrew? It's partly a question of how HN works but I assumed comments with the highest karma filter up to the top? If that's true, I must also question the community: why has this received so many votes?
It only serves to make robomartin look like an idiot by making fun of him. He is most definitely not an idiot and this commenter makes it clear that they haven't read the entire comment.
I have a lot to take away from his comment and from the thread I mentioned. I'm sure there are a few here who will also have big takeaways. It's content and comments like these that make me love HN. The negativity from the commenter can spiral out of control and fatally harm a community if endorsed. I've seen it happen before to another community I deeply loved and it pains me to be reminded of such negativity.
While I agree that the form of this comment was too harsh (so, your second paragraph largely rings true for me), I was wondering why robomartin's comment was at the top in the first place: it starts with a general insult to commenters ("C'mon kids. Not again.") and then continues with a "proof by how much money I made using it" against the apparent strawman "if you don't use OOP you can't do things that are useful (such as make a lot of money)".
The article that Rob Pike was responding to, and Rob Pike's response, were about whether people who use (or do not use) OOP somehow fundamentally better understand "the nature of computation". There are people out there, some of whom I have on my list of "personal heroes", who are quite clear when asked that they know very little about computation or computers, and yet they wrote a bunch of code and made tons of money anyway; that is simply irrelevant to a discussion about understanding "the nature of computation".
(Yes: I have purposely ignored all of the mentions of an improved CS curriculum in my primary comments here. All of that conversation was off-topic for the argument being made by both the original paper and Rob's response: it doesn't contribute in any way to the argument about whether knowing OOP or not knowing OOP has anything to do with how to best understand "the nature of computation", if nothing else as the things you learn first are often, in pedagogic contexts, either approximations or downright incorrect, and are later updated or replaced by later teachings.)
Thanks. The way I look at it, those who read HN and can filter out the BS don't need anyone to tell them when a post is bad or useless.
And, yes, I have been an idiot on HN a few times. It happens mostly when I, against my better judgement, choose to enter into political discussions. I should not do that. I am trying hard not to discuss politics on HN. My views come from years of having skin in the business trenches. You'll never find me self-describe as a Democrat or Republican. On HN my views tend to run contrary to the <conjecture>mostly liberal leaning</conjecture> audience here. It's really easy to come off as an idiot if you are not careful.
We are all idiots at something.
When it comes to technology and using it to create products and build a business I am nothing but pragmatic. You use what you have at hand and make the best of it. Case in point: Objective-C. I don't like it. Yet, if I wanted to write native iOS apps it really is the only choice. The frameworks that try to avoid it are always fraught with issues. So, I learned it and use it every day. It's about putting out a product and not about polishing the wrenches, pliers and screwdrivers in your toolbox.
That's not to say that if a new approach or better tools surface I ignore them. Not so. I love good tools.
Because you keep playing the "100s of 1000s of dollars" tune, oblivious to the fact that he only talks about making said amount with his assembler program.
He doesn't write that rewriting it in C made him 100s of 1000s of dollars -- just that he rewrote it to be easier to maintain. And VB was just the front-end for said project.
Plus, all the rest are mentioned as tools, without mentioning money at all ("plenty of other projects were done in APL, Forth, Lisp", etc.).
Then you go on to miss the remaining 70% of his comment, which isn't about money at all, but discusses a rather "hardcore", old-school way of teaching programming.
Arguing against continuing to have these pointless discussions on HN every 57.3 days.
Of course I care. It's what I do. I just don't see a point in people getting lost in language, editor, IDE --whatever-- discussions. They just don't matter. One can write absolutely-brilliant or absolutely-shit code using any tool one might care to name.
Yeah. Let's get essential. Let's not talk about non-essential stuff like programming languages, IDEs, tools. The only thing that really matters is that the thing you are using can make money. And lots of them.
No. Let people discuss what interests them. Discussions lead to better tools and new perspectives. Stop this "I transcend these trivial discussions and I am such a pragmatic and bricolage type of person."
The problem is that a lot of these discussions turn in to pissing matches that are not constructive at all. In the meantime, if you are a working programmer none of this is relevant to you. If you are working on iOS you are, more than likely, not going to be writing native apps in Python or Lisp no matter how wonderful they might be.
The academic discussions are most definitely necessary. When they take place in places like HN I am not sure that they surface in the best possible light. There are no real conclusions. There are no "calls to action". No test cases are produced and few, if any, real-world results can be pointed to. I seriously doubt that such discussions on HN would start a massive movement to introduce a new paradigm that sweeps a significant portion of the software development universe. I think those things happen far more organically and, by the time they bubble-up to places like HN a significant body of work is already in place.
I'll use as an example something like Meteor. I don't know the full history. I did learn about it on HN -- which is solid indication that there's value in HN as a provider of programmer-relevant news. However, I don't think that Meteor had its genesis in a discussion on HN. And a significant body of work was already in place by the time it made it onto HN. And, as far as I know, Meteor wasn't created out of long academic discussions on HN or otherwise. It probably was an idea that coalesced into a project when the founders got together and bounced it around (best guess).
I have seen (and got sucked into) many pointless discussions on HN about these kinds of topics. Discussions about such topics as "to vim or not to vim" can get, to be kind, "interesting", when, in reality, they are pointless. You use the tools that let you get to a shippable product at the time you have a project on your desk and move on. In that context there is no "best". That was my point in mentioning making money with assembler. Did I want to write complex code in assembler? No. Was it necessary due to performance? Nope. Was it a pain? Absolutely. Did it work? Yup. I had lemons and I made lemon-juice. Optimize later.
Oh, I worked with the guys he's talking about. They never start with data; they never even consider talking about data. Their world is the world of categories and taxonomies. They treat everything as an organism and favor class diagrams over data flow charts.
Rob Pike is insanely good at what he does. But along the way he maintained insane ignorance about everything he doesn't do. Unfortunately, he forgets to confine his opinions to the important areas where he is knowledgeable, and has grown defensive about the world straying away from his domain. Hence the nonsensical rants against OO and type theory that put incoherent words in the mouths of his straw men.
Contrast that with someone like Simon Thompson, toiling for decades on a marginalized language (Haskell), yet never wasting time insulting people who ignore his work. In fact, he praises his team for avoiding the distractions and hassles of attaining popularity before the work is perfected.
Part of the reason it comes up frequently is that a large number of educated, experienced programmers have decided that a toolbox without OOP is a better toolbox than a toolbox with OOP.
But, it is difficult to throw out a bad tool when there appears to be no alternative that delivers its benefits. When that happens, when a tool is devised that delivers what OOP delivers without the problems -- such as the diamond problem, or the problem of having to decide "where" to put functions, or the problem of generic functions of more than one argument -- then at that time those very problems will become much easier for everyone to acknowledge.
The same thing that happened to inheritance will happen to classes and synchronous message passing. In the case of inheritance it took a while, but an alternative came in the form of interface subtyping or predicate-based subtyping, and then it was easy to see that inheritance was more trouble than it was worth.
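For what it's worth, here is a minimal Go sketch of the interface-subtyping alternative (the Notifier/EmailNotifier names are invented purely for illustration, not taken from anything above): a type satisfies an interface just by implementing its methods, so you get substitutability without any inheritance hierarchy.

    package main

    import "fmt"

    // Notifier is satisfied by any type that implements Notify;
    // no type declares that it "extends" anything.
    type Notifier interface {
        Notify(msg string)
    }

    type EmailNotifier struct{ Addr string }

    func (e EmailNotifier) Notify(msg string) {
        fmt.Println("email to", e.Addr+":", msg)
    }

    type LogNotifier struct{}

    func (LogNotifier) Notify(msg string) {
        fmt.Println("log:", msg)
    }

    // broadcast accepts anything that satisfies Notifier -- interface
    // subtyping rather than inheritance.
    func broadcast(ns []Notifier, msg string) {
        for _, n := range ns {
            n.Notify(msg)
        }
    }

    func main() {
        broadcast([]Notifier{EmailNotifier{Addr: "a@example.com"}, LogNotifier{}}, "done")
    }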
My approach is a bit different, probably because I've done a lot of work across a range of disciplines while using different tools. I have yet to run into a problem that was unsolvable due to the tools or language selection made. I don't think the tools or languages have ever caused undue delays, bugs or unreliability in any project I've been a part of. From robotics and low-level real-time embedded to image processing, DBMS and even hardware development (Verilog, VHDL). Not once have the tools and languages been brought up as an issue.
In my experience most problems come from bad design, bad programmers, terrible data representation, incomplete specs, bad management and a myriad of other issues.
Don't get me wrong, I am first in line for a good-solid discussion on how to properly split a bit in two. However, when it comes to the business of making money by creating products that involve some kind of software, well, pick a tool based on experience and focus on delivering a solid product. No excuses.
- People who went to the moon: top 1%
- People who made the "vessel": top 1%
- People who wrote the code: top 1%
- The process of the development: Exacting
- The funding available: Lavish
...
Tools in your toolbox. Yeah, why not. I've always been an OOP composition-only guy. It was always the model that made the most sense to me. It's efficient and clear. I'm amused that only now people are coming out and saying that this is the best model. On the other hand, I once had a project where I needed to apply a different function to different objects. With polymorphism there was no need for function pointers, and it was much, much easier, so I used that. I'm still against templates to this day, although I sometimes use parts of the STL when I'm lazy ^^
Sure. Sorry for the late reply, I go to parties.
I have nothing against having more power at compile time. It's just the way C++ does it that I don't like. It's like a language within a language. I prefer the way it's done in D. Check the examples if you haven't, it's compile time done right in my opinion.
I understand, and agree with you about D templates. Static polymorphism and generic algorithms are too useful to give up, so of course I wouldn’t do away with templates; but there’s little reason for them to be the underpowered, syntactically hairy functional programming language that they’ve become.
In the past I worked on a language where there was no distinction between compile-time and runtime code. You simply asked for something to be evaluated at compile time without needing to change the source; there was the stipulation that it ought to terminate, though, or else your program would never compile. ;)
In school I was in love with APL. I got a chance to drop Fortran and sub it with an APL class (which my Physics prof insisted I take) and it was absolutely fantastic. What an eye opener! It kind of screws you up because --as an early stage programmer-- you can easily look at other languages and start thinking that everything else really sucks. Of course, that's not the case.
My list matches yours, in probably a slightly different order. I love C because it can be very low level and also allows you to write high-level code efficiently. I have written code for countless embedded projects with C, as well as used it to optimize computationally intensive portions of iOS applications (where Objective-C would make things 1000x slower).
Forth? I've mostly done robotics with Forth. Wow. What a language. It's a really amazing feeling --if you ever get the chance to do it-- to bootstrap an embedded system using Forth. A little assembly and very soon you are off and running. Write your own screen editor, etc. I had a chance to do that once. It was a blast. There's nothing like writing your own disk controllers and other driver software. Forth makes it possible without a huge amount of pain. Neat.
I have a friend who sold his company many years ago for about $20 million. He had developed one product. It was based on a set of off-the-shelf Z80 STD cards in a frame along with a few custom cards. The software running the system was written entirely in Forth. No OS, just finely-tuned Forth from the ground up. No graphical interface either. The thing was operated via a terminal over RS-232 or RS-422.
So, yeah, lots of very lucrative niches out there in industrial, medical, defense and other markets that have nothing whatsoever to do with creating websites or mobile apps. You just have to find them and jump on them. I've been lucky enough to hook a couple of those over the years, though not as "juicy" as the niche my friend found.
I wonder if the 'net and mobile craze is creating a situation where CS grads are coming out of school having nearly no idea that there are very interesting worlds out there that lie outside of those domains?
It isn't necessarily about being in "hard" industries. Frankly, sometimes it is just about the proverbial right-place-right-time effect while having the drive (and the balls) to jump into a problem face-first after having identified it. That was the case for my friend and for my own experiences. He saw a problem and dove right into it at the expense of everything else. He didn't know how big this thing could be when he launched into it. He developed a good solution and it turned out to be a hit.
In my case I've had weird things come across my table. Some have been total wastes of time. Some have been extremely lucrative. For example, I was asked to build a protocol translator that would receive commands over RS-232 and convert them to a different protocol out of a second RS-232 port and do this bidirectionally. I thought it was a one-off. I built it out of mostly existing hardware and sold it to the company that contracted me to do this for $3,000. Then they came back and ordered another 30 at the same price. My COGS was probably no more than $200. Imagine my surprise.
There are many areas of industry ripe for disruption. Some harder than others for various reasons. This is why I think it is important to be exposed to a lot of corners of the tech world. I'd like to say that I planned what I did, but that would not be true. I was fortunate enough to bounce around a number of areas and learned from all of it.
Take, for example, the CNC industry. After cobbling-up a number of DIY CNC machines I decided to spend my time on what I was actually trying to build rather than getting derailed making "amateur" manufacturing equipment. So, I leased my first Haas VF-6SS vertical machining center. I didn't know the first thing about running such a machine. I kind of knew G-code, but not really. Things are far more serious when you have a 20HP spindle and a table that can move so fast it's scary. Anyhow, after getting up to speed on the technology and being very comfortable writing G-code as well as developing models on Solidworks and programming the machine using CAM software it became very obvious that this industry is ripe for serious disruption. I won't go into all the details here. If you don't have the context of having run these kinds of systems it just won't make sense anyway. Let's just say that they are still in the stone age and it would be fantastic to see someone bring them out of the cave. Tough industry to crack. Lots of crusty non-tech people to sell to.
> I wonder if the 'net and mobile craze is creating a situation where CS grads are coming out of school having nearly no idea that there are very interesting worlds out there that lie outside of those domains?
I don't think so. I think it's just making the world more accessible to people who normally wouldn't have entered.
I myself was always interested in CS / programming, but never had the discipline to learn it. Web stuff got me involved later in life, and now, having the discipline, I'm slowly working my way down (as it were..). I think those niches become apparent to people who, being interested in the subjects, just keep digging.
If you know this, why is he bothering to say that he declines to give information about what he's accomplished? Why explicitly claim something's a secret if it isn't actually secret anymore?
Is it really so hard to understand that there are projects people don't like to or can't discuss in public?
This does not mean that it's not public knowledge that lots of people have made substantial amounts of money on software written in assembler. Most games written in the 80's for example. Lots and lots of business software written in the 80's and before.
There's nothing unusual about that.
And lots of software people won't want to or be able to talk about in public.
Rob Pike is doing system and middleware design, where the concepts are not very numerous and are often purely technical.
OOP is made for business and real-world modeling, where the first part of the job is to find a good definition/representation of the concepts you're talking about. When you're talking about a banking system, you really don't care whether the underlying memory representation of credit card properties will be a hash dictionary or a struct. Your first concern is to clearly define what it is using the correct words, so that you establish a clear mapping of real-world concepts into programming structures.
When Rob Pike talks about data, he only sees memory and related algorithmic structure, because in his field those really are the only things that matter.
The fact that correct naming and proper conceptual representation are sometimes the most important concerns only speaks to someone who does business or real-world modeling.
The format of the ppoop article is sufficiently similar that I thought it might be a riff on the classic "The Evolution of a Haskell Programmer" http://www.willamette.edu/~fruehr/haskell/evolution.html (itself derived from a similar work).
> But there's good news. The era of hierarchy-driven, keyword-heavy, colored-ribbons-in-your-textbook orthodoxy seems past its peak. More people are talking about composition being a better design principle than inheritance.
Huh? Is he trying to say that composition is _not_ OOP, whereas inheritance is?
I hate to break it to him, but composition is just an expression of the composite design pattern, of which OO is part-and-parcel. You can't do either inheritance or composition without using OO principles.
edit: Ok, I concede that I'm an idiot, but at least it resulted in a lot of genuine discussion.
It's a semantic argument, but composition was how people structured programs in C for 20 years before popular "OO" languages like C++ and Java appeared.
Inheritance was the thing that was new in C++ and Java over existing practice. So therefore people associate OO with inheritance, which seems reasonable to me. (Academic languages have a different history, that's a different argument).
To put it somewhat more bluntly: there were naive C programmers who were writing procedural spaghetti code. And then there were real C programmers who actually knew how to use composition to structure large programs.
And then OO languages appeared. This caused the naive programmers to write spaghetti with inheritance. And the real C programmers were shaking their head because people already had the tools to write maintainable programs (composition), but they ignored them in favor of buzzwords and the "everything should be OO" mentality.
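To illustrate what "composition without any OO machinery" can look like, here is a hedged sketch (in Go rather than C, with names made up for the example): a larger piece is just a struct that contains smaller pieces, and plain functions call other plain functions.

    package main

    import "fmt"

    // Small, independent pieces...
    type Config struct{ Verbose bool }

    type Logger struct{ Prefix string }

    func (l Logger) Log(msg string) { fmt.Println(l.Prefix + msg) }

    // ...composed into a larger one by containment, not inheritance.
    type Server struct {
        Cfg    Config
        Logger Logger
    }

    func (s Server) Handle(req string) {
        if s.Cfg.Verbose {
            s.Logger.Log("handling " + req)
        }
    }

    func main() {
        s := Server{Cfg: Config{Verbose: true}, Logger: Logger{Prefix: "srv: "}}
        s.Handle("ping")
    }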
Glad it was helpful. If you want to see why Unix programmers scoff at OOP, I recommend "The Art of Unix Programming" by Eric Raymond.
OOP advocates threw around words like modularity and encapsulation a lot. But nobody told programmers that Java and C++ have much worse modularity and encapsulation properties than the traditional Unix style.
Ideally under Unix you would have small Java programs and small C++ programs working together and passing messages. But that's not really how a lot of people architect things these days. You tend to get monolithic Java codebases and monolithic C++ codebases. People don't design protocols with care.
The web is also very Unix-y. It's structured around data and simple protocols. Yet for some reason programmers insist on "abstracting" the web with objects. These attempts have uniformly failed. There's a reason for that.
Note that Linus Torvalds is a C programmer -- he doesn't use any "object oriented" languages. Structuring programs around data makes them modular. A main reason is that you can adapt data from one form to the other, to glue together pieces of code. You can't adapt byzantine sequences of method calls and inheritance trees -- you end up having to rewrite. Look at classes in Java programs that have the word "Adapter". Often they are a big smell and barely work, and this is because the fundamental abstractions being used are non-compositional.
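As a hedged sketch of the "adapt data from one form to another" idea (in Go, with types and field names invented for illustration): the glue between two components is a plain function over plain data, where an OO design would tend to grow an Adapter class.

    package main

    import "fmt"

    // LegacyUser is the shape of data produced by one component.
    type LegacyUser struct {
        FullName string
        Mail     string
    }

    // Account is the shape of data expected by another component.
    type Account struct {
        Name  string
        Email string
    }

    // adapt is the entire "adapter": a few lines of data transformation.
    func adapt(users []LegacyUser) []Account {
        out := make([]Account, 0, len(users))
        for _, u := range users {
            out = append(out, Account{Name: u.FullName, Email: u.Mail})
        }
        return out
    }

    func main() {
        fmt.Println(adapt([]LegacyUser{{FullName: "Ada Lovelace", Mail: "ada@example.com"}}))
    }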
I understand that this was the mentality in "The Art of UNIX programming", and it is really powerful. The problem is that UNIX never provided a good way to compose programs that need to exchange more than text.
As a result, the outputs of a UNIX-like system are great for programmers but terrible for everyone else to use. For example, if you need an interface that displays a nice button to execute a function, you're out of luck with UNIX. The more detailed the interface, the more complicated it becomes. That's why OO has become the de facto way to create user-friendly UIs.
It's true. Parsing is a pain and a source of security holes. There are a lot of people trying to do things with structured data over pipes. But there are a lot of ways to do it wrong, and consequently it hasn't caught on much. I've been working for a while on some code which is too much to go into here, but it's trying to solve this problem of text-over-pipes being too limited, without introducing the tighter coupling of OOP.
You could argue that JSON web services are basically this, but in practice I don't see that they're used in a strongly compositional way.
However, the funny thing is that OO arguably isn't the way we create interfaces anymore. We create interfaces by sending domain-specific languages like HTML and CSS over the network.
Although I guess you could argue iOS and Android interfaces are created with OOP, and I won't deny that it's a successful paradigm there. Objects make a lot of sense for things like games and writing CAD programs and so forth. The "modeling" works in those domains. It doesn't really make a lot of sense for server software. Basically OOP is a domain-specific language IMO.
Consider carefully that Rob Pike is co-author of a well-known programming language which doesn't really have inheritance, but in which composition is trivially easy.
I think that he may know more about this particular topic than you do.
No, it doesn't. You show someone he's out of his depth by attacking his argument, not by pointing out that the counter-argument comes from someone famous. Are you seriously claiming that no one you've never heard of can possibly argue against a point made by someone you've heard of? That's a sheepish mindset.
The fact that Rob Pike wrote a usable language that is not OOP in the sense the commenter thinks of OOP, and in which you have composition without inheritance, is direct evidence against the commenter's view that to do composition you need to be doing OOP.
I would disagree. If you call a method that you have because of an anonymous field, and within that method you make a second method call, the second method call gets the method for the anonymous field's type, and not your type. Thus you can't override it.
This is very different from how inheritance works in Smalltalk, Java, Python, Ruby, Perl, etc, etc, etc. In C++ terms their methods are all virtual, hence subclasses can override them. And such overriding is a central part of standard OO designs.
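To make that concrete, here is a small Go sketch (the Animal/Dog names are invented for illustration): methods promoted from an embedded field are resolved against the embedded type, so the outer type cannot override them the way a virtual method would be overridden.

    package main

    import "fmt"

    type Animal struct{}

    func (Animal) Name() string { return "animal" }

    // Describe calls Name on its Animal receiver; the call is resolved
    // statically, not through any virtual dispatch.
    func (a Animal) Describe() string { return "I am an " + a.Name() }

    type Dog struct {
        Animal // embedded (anonymous) field
    }

    // Dog's Name only applies to direct calls on a Dog value.
    func (Dog) Name() string { return "dog" }

    func main() {
        d := Dog{}
        fmt.Println(d.Name())     // prints "dog"
        fmt.Println(d.Describe()) // prints "I am an animal" -- still uses Animal.Name
    }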
Given that Rob Pike embraces neither OOP nor functional programming, one has to wonder what he really means by composition.
The closest thing to composition or reuse in Go is the typeclasses, and it's pretty weak. I hope he enjoys specializing his maps and reduces for every type instance.
Rob Pike has in the last few years spent a lot of time complaining about the younguns and the slightly less young younguns, but I've not seen him talk in concrete terms about what he thinks the solution is.
Other than, "use Go", which doesn't actually answer the questions that are raised by his complaints. Like how Go somehow supports composition in a superior way. I've spent some time with Go, built a few projects and services. I wasn't impressed.
I'll just keep using Clojure and Python until Pike starts talking about what he actually means to do to solve these problems.
Hickey is an iconoclast too, but he's sensible enough to stay focused on what can move the profession forward as opposed to complaining profusely on Google+ every other week.
If you really want to learn something interesting, you'd be better off learning about the fundamental relationships between code, data, state, and objects such as Hickey has covered in his past talks.
They're enlightening even if you don't care about Lisp and the talks themselves aren't really in terms of Clojure except to explain how it does things differently.
Not a language but a library: SNIT (Snit's Not Incr Tcl), at http://www.wjduquette.com/snit/, is a fairly complete object system for Tcl that eschews inheritance in favor of composition.
As a guy who does a fair bit of reading and teaching, I can only sympathize for the writer of the paper Rob is criticizing.
When you come up with examples you have to deal with two conflicting forces. On one hand, they have to be simple enough not to be skipped over. On the other hand, they have to be complex enough to seem real. The balance is never right. It doesn't seem fair to criticize on that account. It's too easy.
> Every if and every switch should be viewed as a lost opportunity for dynamic polymorphism.
The truly sad thing about OOP is people not embracing the duality between if-statement dispatching (pattern matching) and OOP-style dynamic dispatching.
In situations where you have a fixed set of types it's better to use switch statements, and even in other cases it's sometimes better to stick with if statements to avoid scattering your code all over the place (especially if your compiler warns you when you forget to handle one of the cases when updating things).
Pattern matching yields more type information than if statements and can match recursively on multiple arguments (which even multi-method dynamic dispatch cannot). However, it always dispatches on a closed sum type, whereas OOP-style dynamic dispatch is on an open sum type, so the mechanisms are useful in different circumstances.
The reason is that no new type information is gained when you branch.
When you pattern match, however, you gain new names in scope that have new types. This is new type information such that the branch choice represents new information not only in the program position, but also at the type level.
For example:
    if (x != NULL) {
        /* compiler does not know whether x is NULL or not */
    } else {
        /* ditto */
    }
Whereas with pattern matching:
    case x of
      Nothing -> ...  -- scope gets no new value;
                      -- x is a Maybe type, not usable as a direct value
      Just y  -> ...  -- scope gets "y" as a value of the type
                      -- inside the Maybe, which is directly usable
Also, you can define a function like:
    f (Just (Right x)) (y:ys) = ...
    f (Just (Left e))  []     = ...
    f _                xs     = ...
which pattern-matches multiple arguments at the same time, including recursive pattern matching (Right inside Just, Left inside Just, etc).
If you meant the difference regarding open/closed sum types, I can expand on that.
I don't want to argue with anybody-- whatever you think is the right way to program is fine with me but..
The article he pointed to was really funny. I think I worked with guys like that, who were so over the moon about OO that they made everything an object, encapsulating a bunch of objects inside an object with no polymorphism. No advantage that I could see, except that it became a habit to make everything an object.
Objects did a lot to advance programming and they still can be very useful. Like many people have said here already: use the tool that is appropriate, and keep an open mind.
Javascript's best feature is that almost any routine can be written without objects. And those that have to be there are hidden from sight like objects should be.
There's nothing ridiculous about the OO pattern in the article Pike is talking about.
When you're trying to demonstrate OO concepts, OO has a disadvantage because it is needlessly complicated for the simple example you're trying to illustrate. The hacker approach is always going to look more sensible than the OO approach.
Once you get into very large enterprise systems the a-ha OO moments really start to pile up.
Local discussion focused on figuring out whether this was a joke or not. For a while, we felt it had to be even though we knew it wasn't. Today I'm willing to admit the authors believe what is written there. They are sincere.
But... I'd call myself a hacker, at least in their terminology, yet my solution isn't there. Just search a small table! No objects required. Trivial design, easy to extend, and cleaner than anything they present. Their "hacker solution" is clumsy and verbose. Everything else on this page seems either crazy or willfully obtuse. The lesson drawn at the end feels like misguided epistemology, not technological insight.
It has become clear that OO zealots are afraid of data. They prefer statements or constructors to initialized tables. They won't write table-driven tests. Why is this? What mindset makes a multilevel type hierarchy with layered abstractions better than searching a three-line table? I once heard someone say he felt his job was to remove all while loops from everyone's code, replacing them with object stuff. Wat?
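As a rough illustration of the "just search a small table" style (a toy example for this comment, not the actual problem from the article): the variation lives in data, and extending the program means adding a row rather than a subclass.

    package main

    import "fmt"

    // The behaviour is driven by this table; adding a case means
    // adding a row, not declaring a new type.
    var rules = []struct {
        divisor int
        word    string
    }{
        {15, "FizzBuzz"},
        {3, "Fizz"},
        {5, "Buzz"},
    }

    func label(n int) string {
        for _, r := range rules {
            if n%r.divisor == 0 {
                return r.word
            }
        }
        return fmt.Sprint(n)
    }

    func main() {
        for i := 1; i <= 15; i++ {
            fmt.Println(label(i))
        }
    }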
But there's good news. The era of hierarchy-driven, keyword-heavy, colored-ribbons-in-your-textbook orthodoxy seems past its peak. More people are talking about composition being a better design principle than inheritance. And there are even some willing to point at the naked emperor; see http://prog21.dadgum.com/156.html for example. There are others. Or perhaps it's just that the old guard is reasserting itself.
Object-oriented programming, whose essence is nothing more than programming using data with associated behaviors, is a powerful idea. It truly is. But it's not always the best idea. And it is not well served by the epistemology heaped upon it.
Sometimes data is just data and functions are just functions.
Gotta agree with Rob Pike here on this. The path to salvation comes through simplicity, not through complexity. Austerity is the way forward. Making do with less is more.
This is not a new realization. Some enlightened people never allow themselves to be deluded. Brian Harvey is one of them.
OOP is just a set of conventions which could be implemented efficiently even in Scheme. CLOS is another canonical example which people prefer not to notice to maintain their comfortable reality distortion.
Everything was solved long ago by much brighter minds than those now populating the Java/JavaScript world. Just imagine (but almost no one could) how much more clean, efficient and natural it will be to implement something like Hadoop in Common Lisp or Erlang - passing data and functions as first-class S-expressions or even packed Erlang binaries. Instead they re-implemented a few concepts from FP in a Java way.
> Just imagine (but almost no one could) how much more clean, efficient and natural it will be to implement something like Hadoop in Common Lisp or Erlang
Hell, let's go one step further and implement it in datalog:
I think you are on to something, but I don't think the root of the issue is OOP but rather a "C-like" syntax and all the baggage that seems to come along with that.
C is great (I absolutely adore it), but despite the numerous reasons C++ had for doing it, I think we would be in a better position today if the fad of making new languages "C-like", even if just superficially, had never taken off. At each step it is hard to point the finger at any one person (even in retrospect, it is hard to really fault Stroustrup), but I feel nevertheless that too many prior advances were ignored for far too long along the way to modern Java.
A terribly superficial but, I think, potent example of how "C-like" has done harm is that languages have kept using its abusive declaration syntax for so long. It is so clearly absurd and unnecessary that it is a wonder people haven't started dropping it sooner. Instead, as in the case of Java, it seems they have just redefined what is idiomatic in order to avoid the harsher cases seen in idiomatic C. At least Go strays from the example set by C, though it still falls a bit short, I think.
Basically I see the primary driving force of many trends in programming, including to some limited degree OOP, to be pain inherited from C.
Agree. That is something I dislike about Go. A lot of right things, but ugly-as-hell C syntax. For people who love C-like syntax it is hard to understand how bad it tastes to people who love something else. It's like OO vs functional, C-like vs anything else.
Yeah. Go strayed far enough (compared to say, Java. It is absurd how close they toe the line...) that I can enjoy it, but further still would be nice. Rehashed declaration syntax and multiple return values are welcome changes. The rest? Eh, I would still prefer s-exps. Oh well.
This was not so obvious a few years ago. A lot of people believed that OOP always made programming better, and was a requirement to creating maintainable programs. Some still do.
FP had its little Joe-programmer renaissance ca. 2007 slowing in 2009. It's been a while since anything approaching a majority thought OO-or-bust for all purposes, perhaps 2004-ish.
I don't really understand the justification for "There is no silver bullet". How do we know? There may very well be a silver bullet, after all, we've already had some!
Programming now using modern tools is at least an order of magnitude more productive than programming using punch cards or other methods used back in the day.
Why are we so sure that all the silver bullets are behind us?
Roughly half of the characters in your comment are part of platitude-phrases. By my count, exactly zero characters are about how to tell what the right tools are, which would at least have been making a concrete claim. This is unsatisfying.
OOP's unique features can't deal with massively parallel CPUs. In the future, the following will not be allowed for the core data in your code:
Recursion.
Variables declared with the volatile keyword.
Virtual functions.
Pointers to functions.
Pointers to member functions.
Pointers in structures.
Pointers to pointers.
goto statements.
Labeled statements.
try, catch, or throw statements.
Global variables.
Static variables. Use tile_static Keyword instead.
dynamic_cast casts.
The typeid operator.
asm declarations.
Varargs.
OOP may live at the top level of granularity, but when it comes to working with your data, OOP is not compatible. You can choose which module to run with polymorphism, but your data can't be processed with virtual functions.
With "in the future" you mean "currently, in a restricted subset of C that runs on massive numbers of very simple cores". Massive parallelism is also possible with somewhat less simple cores; CUDA and OpenCL also started from the "bare subset" philosophy but gradually expanded to allow more flexibility because developers demand it.
And of course massive parallelism is also possible with normal CPUs, in a cluster or in the cloud, and there an entirely different set of restrictions hold, not so much on the programming language but on higher level design.
Whether or not you need to restrict yourself to a subset of C completely depends on your requirements. The future is heterogenous, not homogenous [1].
Notice that C++AMP is a language extension designed specifically for heterogeneous computing, and it is where this list comes from.
The article you posted was rather extensive, and as somebody who works in HPC, I can say that I disagree with many points in the link provided. Also, it was too goddamn long to read. Most notably, somebody still needs to actually write the cloud implementation, and it will still require models like OpenACC or CUDA.
There is no reason that whatever mechanism was used to push parallelism onto the GPU can't be used for moving it onto the cloud.
There is an infinite amount of musing that can be made against and possibly in favor of expanding the acceptable language features in threading kernels. Yet, it is safe to say 3 things.
1) Simple kernels run faster
2) Current specifications are closer to C++AMP's restrict(amp)
3) Cloud computing uses GPUs for the data crunching
My point is that you're focusing on low level kernels only, which obviously need to be simple and highly optimized. However, the number of people actually writing lowlevel HPC code (the super-optimized number crunching inner loops, usually embarrassingly parallel), compared to highlevel code is very small, and certainly isn't the only focus of "the future". It's safe to say that the number of platforms that support more advanced programming features (be it object orientation or closures or message passing or...) will only increase, not decrease. Of course no one wise will be calling virtual functions in inner loops, but they are perfectly fine to use for control flow, configurability, modularity, etc.