Python advocacy aside, I agree that OOP is best taught when it "comes naturally," and that it should NOT be the first thing programmers learn. That is, I think the ideal sequence of concepts would be (each with a possibly contrived but relevant example):
1. Very basic, sequential computation. Add two numbers.
2. Conditional computation. Find the absolute value of two numbers.
3. Computations over homogeneous data: Arrays and loops. Find the average of the absolute values of a set of numbers.
4. Computations over inhomogeneous data: Structures. Find the area of a set of rectangles.
5. Computations over "even more inhomogeneous" data: Objects. Find the area of a set of shapes of different types, and find their centroid (see the sketch after this list).
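To make step 5 concrete, here is a minimal sketch of where the sequence might end up (in Ruby, matching the code used later in this thread; the class and method names are purely illustrative):

class Rectangle
  def initialize(x, y, w, h)
    @x, @y, @w, @h = x, y, w, h
  end

  def area
    @w * @h
  end

  def centroid
    [@x + @w / 2.0, @y + @h / 2.0]
  end
end

class Circle
  def initialize(x, y, r)
    @x, @y, @r = x, y, r
  end

  def area
    Math::PI * @r ** 2
  end

  def centroid
    [@x, @y]
  end
end

# Different shape types behind one uniform interface:
shapes = [Rectangle.new(0, 0, 4, 2), Circle.new(3, 3, 1)]
total_area = shapes.sum(&:area)

# Area-weighted centroid of the whole set:
cx = shapes.sum { |s| s.centroid[0] * s.area } / total_area
cy = shapes.sum { |s| s.centroid[1] * s.area } / total_area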
That's where you ask "how would you do this with what you've learned so far?", and the learners start to build different structures with a lot of similar aspects and duplicated code (I realize I didn't explicitly talk about the value of functions/procedures above; that would go somewhere after step 3), and then you show them how much more straightforward it is to do with OOP. Now they know the value of OOP and what "doing without" would entail, and you end up with a bunch of more knowledgeable programmers who won't needlessly create baroque class hierarchies and write more code than they need, but will use those abstractions when they have value. There is a huge difference, IMHO, between being taught dogmatically "you MUST use functions because abstraction is good," as if it were something to take at face value, and being given a problem before you know about functions, writing lots of duplicated code, and then being shown the value of eliminating that duplication via functions (see the sketch below).
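To make that concrete with step 3's example, here is a minimal sketch (in Ruby, matching the code elsewhere in this thread; the names are made up for illustration):

# Without functions: the same absolute-value logic, three times over.
a = -3
a = -a if a < 0
b = 7
b = -b if b < 0
c = -12
c = -c if c < 0
average = (a + b + c) / 3.0

# With a function: write the logic once, use it everywhere.
def abs_val(n)
  n < 0 ? -n : n
end

average = [-3, 7, -12].sum { |n| abs_val(n) } / 3.0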
I've seen way too many learners create half a dozen classes with a ton of methods when given a problem that could be solved in a single function of a few lines, and then have trouble figuring out what to put in the method bodies... it's completely backwards! No matter how much "architecture" you manage to create with deeply nested objects, in the end all the functionality of your code comes down to a series of statements executed in sequence, much as it would in a purely procedural language. The fact that many otherwise highly regarded educational institutions are churning out programmers who can create dozens of classes with dozens of methods, and then not know what goes in those methods, i.e. the actual "meat" of the computation, is something I find deeply disturbing.
Why start with computation as a sequence of operations? It might be equally useful (maybe even more useful for some students) to start with computation as a series of reductions, e.g. with combinatory logic.
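As a toy illustration of what I mean (my own sketch, in Ruby to match the code elsewhere in this thread), here are the S and K combinators as lambdas, with I derived by reduction rather than defined:

# S f g x  reduces to  f x (g x);  K x y  reduces to  x.
S = ->(f) { ->(g) { ->(x) { f.(x).(g.(x)) } } }
K = ->(x) { ->(_y) { x } }

# The identity combinator falls out of a reduction:
#   S K K x  ->  K x (K x)  ->  x
I = S.(K).(K)
puts I.(42)   # => 42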
Because that's how a computer actually works, and sequential execution is intuitive to anyone who has followed a schedule or a recipe, or been taught how to tie their shoelaces... in essence, anyone who has done any sort of living.
So what? That is kind of like saying that people need to understand how an airplane works in order to ship a package overnight. Sure there is value in understanding how a computer works, and that will probably be taught later in the curriculum; but teaching students how to think abstractly is far more valuable.
"the sequential execution concept is something that is intuitive"
So is the concept of reducing one expression to another; we teach algebra as exactly that long before we teach anyone how to program.
Sequential instruction is learned before expression reduction in real life...
So what? That is kind of like saying that people need to understand how an airplane works in order to ship a package overnight.
But people DO need to understand a bit about the process of shipping a package if they want to estimate when it will arrive, read the online tracker, or even know that they must bring the package to a particular place before FedEx will ship it. And if the package does not arrive, what would you have to do to find it?
Generally speaking, understanding how a computer works is important to programming because the essence of programming is mapping a real problem onto the available computational hardware.
Not that I want to argue against learning about expression reduction; I agree that should happen sooner rather than later. But it's not an argument against learning sequential algorithms.
I TA'd a CS101 course in grad school. I saw a number of students try to do something like this:
func(if(x == 5) foo else bar)
In other words, treating an "if...else" statement as something that can be reduced to a value. This frustrated the students because it did not compile. We had to train them to think in terms of sequential execution, despite the fact that they did not become any better at programming in the process (worse, they had to un-learn a perfectly valid way to think about programming -- only to have to re-learn it later if they took the programming languages course).
What language was your class using? There are so many ways to do what the students wanted, be it the ternary operator, a lambda, or even defining a method/function like this (Ruby):
def foowiz(x, threshold = 5)
  if x == threshold
    foo
  else
    bar
  end
end
And then a call to func(foowiz(x)) would work - even in an OOP language like Ruby.
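For instance (a sketch with placeholder definitions, since foo, bar, and func aren't specified upthread):

def foo; "foo"; end
def bar; "bar"; end
def func(v); puts v; end

x = 5
func(x == 5 ? foo : bar)                # ternary operator
func(if x == 5 then foo else bar end)   # `if` is itself an expression in Ruby
func(foowiz(x))                         # the method defined above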
What did the students need to unlearn?
In the intro CS class I took an age ago, we had to implement OOP as a final assignment in DrScheme, a Scheme environment (Scheme being a Lisp derivative and most assuredly a functional language rather than an OOP one). That did not mean I had to unlearn how lambdas and tail recursion worked.
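The heart of that assignment fits in a few lines. A hypothetical sketch of the idea (in Ruby rather than Scheme, to match the code elsewhere in this thread): an "object" is just a closure over its state, and "method calls" are messages dispatched to it.

# A counter "object" built from nothing but a closure.
def make_counter
  count = 0
  ->(msg) do
    case msg
    when :inc then count += 1
    when :get then count
    end
  end
end

c = make_counter
c.(:inc)
c.(:inc)
puts c.(:get)   # => 2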
Why did they have to "un-learn" anything? What they should learn from that experience is that the particular tool they were using did not have a feature they expected. Any reasonable teacher should explain that. It's important because often, extra features come at a cost and while some of those costs have been paid permanently by researchers and software developers of the past, others still require tradeoffs.
Thinking in abstract terms is all well and good until you actually have to run code in a real environment to solve a real problem with the tools you have available.