Am I alone in thinking Project Euler is not particularly well suited to learning to program?
When I looked into it, many of the problems may be tackled in a programming language through brute force, but cleverer approaches usually come from mathematical manipulations rather than programming insights. The mathematical emphasis didn't seem to lead naturally to higher level programming techniques involving modularity or abstraction beyond the functional level.
"The mathematical emphasis didn't seem to lead naturally to higher level programming techniques involving modularity or abstraction beyond the functional level."
Before you can apply modularity or abstraction, you need some code you can modularize or abstract.
Project Euler gives people the opportunity to write code that does something. Before someone writes code that does something, any talk of how to organize code will be meaningless.
If you look at the points of emphasis in the AP curriculum in the article, they jump immediately to the code organization phase, before students have successfully solved some simple problems with code. Some number of Project Euler problems should be a prerequisite before techniques for organizing code are even mentioned.
I at least disagree with you. I found they helped me move from novice to proficient. That was quite a few years ago, though, and I wonder how much increasing CPU power has meant you no longer hit the ceiling of brute force taking excessively long, which is what forced you to program more elegant solutions.
What I found most appealing about them is that they're a 1/2 hour task at most, nice bite-sized challenges.
And from what I remember (admittedly I only did the first 20 or 30), while some of it was mathematical manipulation, it also forced you to do things you might not be so comfortable with, like recursion, or even simple things like storing the primes in an array so you didn't have to check all the numbers again.
It also made me feel more confident when the 'ideal' solution was either the same or practically the same as mine. And invariably if it wasn't you'd learn a new language trick.
Then again I was always very comfortable with maths and would invariably spot any mathematical manipulation 'trick' immediately, so perhaps I'm a bit of an outlier. Although I never went past A-Level standard (18).
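A minimal sketch of the prime-caching habit mentioned above (not tied to any particular Euler problem; the helper is purely illustrative): keep the primes found so far in a list and trial-divide only by those, rather than re-checking every smaller number.

    primes = []

    def is_prime(n):
        # Trial-divide only by previously found primes up to sqrt(n).
        for p in primes:
            if p * p > n:
                break
            if n % p == 0:
                return False
        return n > 1

    for n in range(2, 100):
        if is_prime(n):
            primes.append(n)

    print(primes[:10])  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]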
I'm pretty comfortable in maths too (e.g. I never scored below 98% in any of my college exams in mathematical areas), but my point is that the problems are not programming focused, rather, they tend towards being mathematics focused.
I think jlees makes an excellent point, that PE manages to bridge the gap between the basic "hello world" stuff and the more "algorithmic-y" aspects of programming. In my case that was what really got me snowballing toward being a reasonably capable coder.
I do think you have to be excited about discrete math to really plow through the problems, but the nice thing is, discrete math is the easiest kind of math to get hooked on, just because it's not much more than fancy counting.
My gripe with PE is that it's made up of specific cases instead of generalized problems. Sometimes, the solution that most people arrive at, and that is accepted by the judge, is one that fails in the general case.
Take the problem that asks for the leading digits of a sum of 50-digit numbers. The easiest solution, and the one given as an example, is to truncate all the numbers and add them as doubles. But that wouldn't work with all possible inputs of a generalized version of the problem, and it's not a very interesting solution either.
I think PE trains future programmers to ignore edge cases, which is exactly the thing that causes the most bugs in real world software.
EDIT: googling solutions to this problem that people have posted on their blogs, I find most fall into one of two categories:
1. Trivial solutions using a bignum library or a language with built-in bignums. These are correct but nothing interesting was learned by writing them.
2. Solutions that assert that the numbers can be truncated before adding, which works in this case but not in general, as I noted earlier. The really sad thing is that the people who solved it this way seem to be under the impression that this is a general solution. So the exercise has not only taught them to write buggy code, but has also helped them validate a false hypothesis.
A double has 15 digits of precision. Even if the 16th digit of all 50 numbers is 9, it still would not be enough to affect the 11th digit of the sum.
If all the numbers were solid 9s then you would worry about the loss of precision from adding many small numbers to a large one, but with just 50 inputs you lose at worst 3 digits of precision, which is still not enough to affect the 11th digit.
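Rough arithmetic behind that claim, taking the thread's figures of fifty 50-digit numbers at face value: representing each addend as a double introduces an absolute error of at most a few times 10^34, and each of the ~50 additions rounds the running sum by at most a few times 10^35, so the total error stays in the neighbourhood of 10^37. The 11th significant digit of the roughly 51- or 52-digit total sits around the 10^40-10^41 place, a few orders of magnitude above that error. The one way it can still bite is if the exact sum lies within that error of a decimal-digit boundary, which is exactly the carry case raised below.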
Only 5 numbers, but you need all digits even to compute the first digit of the sum. There are many less obvious edge cases; any set of numbers that adds up to
60000000000...0000000000
with at least one carry from the last column will do.
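A concrete instance of that boundary case, with made-up 50-digit numbers (illustration only, not taken from the actual problem): four copies of 1.2×10^49 − 1 plus one copy of 1.2×10^49 + 4 sum to exactly 6×10^49, but truncating each number to its leading 15 digits before adding drops the carry from the low digits and gets even the first digit of the sum wrong.

    nums = [int("11" + "9" * 48)] * 4 + [int("12" + "0" * 47 + "4")]

    exact = sum(nums)  # Python ints are exact bignums
    truncated = sum(int(str(n)[:15]) * 10**35 for n in nums)  # keep 15 leading digits

    print(str(exact)[:10])      # 6000000000
    print(str(truncated)[:10])  # 5999999999 -- first digit already wrong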
Hmm, I guess I would split the numbers in half, and sum each set of halves (making sure to set the exponent correctly).
Then add the less significant half to the more significant half.
i.e. basically add the numbers in order of magnitude. It doesn't have to be just two halves.
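A rough sketch of that split-and-recombine idea (the helper name and the use of Python integers are mine, purely for illustration): sum the low halves and the high halves separately, then fold the carry from the low total into the high total at the end.

    def split_sum(numbers, digits=50):
        # Split each number into a high half and a low half, sum each set
        # separately, then recombine; the carry from the low halves folds
        # into the high total in the final step.
        half = 10 ** (digits // 2)
        high = sum(n // half for n in numbers)  # more significant halves
        low = sum(n % half for n in numbers)    # less significant halves
        return high * half + low

    nums = [int("11" + "9" * 48)] * 4 + [int("12" + "0" * 47 + "4")]
    assert split_sum(nums) == sum(nums)  # matches the exact sum

With Python integers this is trivially exact, so the sketch only shows the bookkeeping; the point of the idea is that each partial sum needs fewer digits of precision than the full sum does.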
Did anyone post this as a solution?
I do remember learning this somewhere - it was a warning about loss of precision when adding lots of small numbers to a few large ones. And the solution was to do all the small numbers first.
Seems like this would be a perfect place to teach this lesson. Do you know if this site does that?
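That lesson is easy to demonstrate in a few lines (a generic illustration, not something taken from the site): adding tiny values to a huge double one at a time loses them entirely, while summing the small values first preserves them.

    big = 2.0 ** 53        # beyond this, doubles can no longer step by 1
    small = [1.0] * 10

    largest_first = big
    for s in small:
        largest_first += s  # each 1.0 is rounded away: big + 1.0 == big

    smallest_first = big + sum(small)  # the ten 1.0s accumulate to 10.0 first

    print(largest_first)   # 9007199254740992.0  (the small values vanished)
    print(smallest_first)  # 9007199254741002.0  (correct)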
The boundary between recreational mathematics and computer science is flimsy at best; while Project Euler may not make you the world's best programmer, it'll get you well entrenched in some of the fundamental brain tricks in computer science.
That's how I got interested in the whole field - Informatics Olympiad papers and mathematical problems unified the "10 PRINT 'HELLO'" world with the algorithmic one. I think the latter is surprisingly neglected these days, especially among self-taught people (not to say you can't be a great self-taught programmer, but you're less likely to have come at it all from the mathematical world).
See now, the IOI problem sets do emphasize algorithmic thinking - I think they're marginally better than Project Euler problems - but again, it's very low-level stuff. It must be, in order to have a handful of problems that can be tackled in a couple of hours or so.
I happen to believe that taste in crafting of program structure and selection of appropriate abstractions is more important than algorithmic acuity for most modern programming. Computational geometry, graph theoretical manipulations, dynamic programming and so on certainly have their uses, but they are usually a small core of something and not reimplemented often. Maintainable programs are a bigger issue IMO; and an excess of focus on the finer details of computational algorithms doesn't necessarily solve that issue.
I rather think it can hamper the goal. Really efficient algorithms often "break the rules", exploiting commonalities or attributes that cross abstraction boundaries.

Take an ultra-simple example: finding the index of a (possibly missing) element in an unsorted array with a linear search. The fastest in-order sequential algorithm is to add the element being searched for to the end of the array, then iterate through the array and halt when the element is found. This does less work than a simpler search because it removes the need for a termination condition test (index less than length of array) on the search loop. But it also breaks other things: it won't work in the presence of threads, and it's surprising that a search operation is not a read-only operation. Appending may require a memory reallocation; if the array is large and the growth algorithm is geometric (which it should be), it could very surprisingly cause an OOM. Sure, you can get around this by pre-allocating the required slot, detecting multiple threads, etc., but now you've made things more complicated and harder to maintain in the search for the most efficient algorithm.

The goal of efficiency can cause more assumptions to be made or pre-conditions to be required, and for these assumptions and pre-conditions to leak across abstraction boundaries, ultimately giving maintenance changes multiplicative rather than additive complexity costs.
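For concreteness, here is roughly what that sentinel trick looks like (a sketch in Python, where list.index is what you'd really use; the point is only to show the shape of the optimization and its side effects):

    def sentinel_find(arr, target):
        arr.append(target)        # sentinel: the loop is now guaranteed to stop,
        i = 0                     # so no per-iteration bounds check is needed
        while arr[i] != target:
            i += 1
        arr.pop()                 # undo the mutation -- the surprising part
        return i if i < len(arr) else -1  # ran into the sentinel: not found

Even this toy version has to mutate its input, which is exactly the kind of leaked assumption described above.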
So I think it's fine to target big-O algorithmic costs when it makes sense and doesn't contort the software architecture too much, but applying an excess of problem insight can work against better architecture. Premature optimization, as tautologically as ever, is premature.
(I write as someone self-taught who competed in IOI etc. The problem sets were fun, but I wasn't under any illusions that I was learning a whole lot. Using recursion and branch pruning to solve combinatorial search problems is a distraction when you're better off looking at the bigger picture, dare I say it, the business picture.)
I've also taken part in the IOI - twice - and like you, I am a self-taught programmer (I was when I took part in the IOI; I've since also studied computer science). In my opinion the IOI problems, at least for a competition, are pretty good as they focus on using correct algorithms and data structures as opposed to hacks or mathematical tricks. Brute force solutions almost never produce good results for IOI problems. Edge cases must usually be considered, as the test cases used to grade the solutions try to be varied and cover as many cases as possible.
IOI does nothing to teach good software architecture and I don't think any similar competition could ever hope to do that. It does, however, teach algorithmic thinking, use of data structures, understanding of time and space complexity and coding under pressure. (Well, all of these to a degree - like you said, recursion, branch pruning, trees and graphs and the like. Is more to be desired? Sure! But it's not a bad start, especially for high school students who are unlikely to have much exposure to problems which would otherwise have forced them to learn these things - I know people even now who, having only ever done business applications, would fail miserably at solving any IOI-like or algorithmically heavy programming problem, because they were never sufficiently introduced to the things the IOI focuses on.) All great lessons, even if there is more left to learn to become a great programmer.
I personally dislike PE because the problems I attempted seemed like they could often be better solved by employing a mathematical trick or insight as opposed to clever use of algorithms, data structures or programming ingenuity. My programming skills are much greater than my math skills, so I guess ultimately, because of that, PE just doesn't do it for me.
Brute force doesn't even work for a large number of the problems, but sometimes finding ways to solve the problems without using brute force can teach you how to think like a programmer.
But you are right in thinking that the majority of the problems are better solved with mathematical techniques that are beyond most of us who lack a degree or a strong background in math.
You are correct that the Project Euler approach isn't well suited for learning to program, although I would qualify it with "for some", if not most - especially the less mathematically inclined, or those who care more about things like social or real-world problems. You have to show programming and computer science as being more than just solving artificial puzzles or doing math & logic for its own sake.
It should be noted that this site and others have been around for 10 years, and in the meantime the number of people majoring in computer science has actually dropped (I think it finally turned upward this year, though), and the proportion of female and minority students in computer science has dropped as well. Notice too how the author of this article, the author of the Project Euler site, and, from what I can tell, everyone here is a guy. Every article I read about 'why Johnny can't program' and so forth is by a guy, usually an older one who already understands the relevance of programming but perhaps can't understand how it might not be so evident to others.

I know it took me a long time (almost ten years) to see it. When I was first introduced to programming, to draw an American flag on the screen in BASIC, it was boring and pointless compared to the ColecoVision next to it :) It wasn't until I created a CGI Perl app for letter-writing with our Amnesty International group and also a HyperCard program for an HCI class that I dived in.
Teaching programming or computer science 'for its own sake' is not going to help most see the relevance of it to their lives and to the world. Show how you can accomplish things with programming, how you can help people, and so forth. Like someone else pointed out, the rosettacode site is similar, but at least it gives some more practical problems.
Also, there's the fact that computer science is not required in most schools, and thus, like engineering, most don't ever think to major in it or work in the field.
I haven't spent much time with Project Euler, but I can see how puzzles and riddles could be a more enticing introduction to programming for a lot of people. I'm not particularly averse to learning via giant tomes on programming or in a classroom. Then again, I think I found that I learned a lot more about problem solving in Python (and had way more fun) doing the Python challenge (http://www.pythonchallenge.com/) than I did following the examples in Dive into Python.
As a fan of David Münnich's notpr0n, I must say that the pythonchallenge website looks like a really cool spinoff and seems more approachable than Project Euler.
Edit: Not to mention you're not restricted to using Python, just like Project Euler. What's not to like?
As someone with a strong math and science background but who came late to the code game, who has never been able to align the stars for a CS degree or work in programming, Project Euler has been crucial. I can wrap my head around pretty much any math you want to throw at me, but Project Euler gives me the opportunity to know I'm tackling a language-skill-appropriate challenge and immense satisfaction once it's figured out.
My experience is that it's a good way to solidify your understanding of basic syntax and a great prompt to think about speed of execution, but not so good at providing structure to think about.
The brute-force approach can still be fun to code, and regardless of your method, it's probably challenging you to think in ways that your typical job doesn't require.
That said, I still wish there were structured documentation of solutions that walks one from brute force through to elegance. Without that, I don't think most Euler-ers are going to progress.
Succinctly explains why programming can be rewarding to do.
It also describes the playground people of a certain age had: 'READY>'
I hear kids these days can do well learning Python - if they can get an adult to install the dev tools. There's got to be something as accessible and interactive as BASIC was, and that should be everywhere.
That's kind of the goal of Hackety Hack[1], to provide a framework for learning to program coupled with fun, simple libraries for graphics, sound and whatnot.
And building BASIC/Visual Basic on top of JavaScript is a little challenging, since BASIC is case-insensitive and (in its later Visual Basic versions) statically typed, while JavaScript is neither.
But I'd especially recommend Scratch from MIT for kids: http://scratch.mit.edu/ Many people are working on ports or similar tools in flash and html5, but none are quite there yet.
It's tempting to generalize: If programming is best learned in this playful, bottom-up way, why not everything else?
Could there be a Project Euler for English or Biology?
My biology book from farming school 10 years ago did exactly this: on each page there were questions that you were supposed to think through before continuing, and on the next page there were answers.
Most of the questions were serious, but in between there were jokes like "What is the white stuff on the outside of chicken poop?" to which the answer was "It's chicken poop too!" followed by a longer explanation about how they get rid of uric acid.
Agriculture, I think, is one of the next big fields for improvements [1]. But it has always been a source of interesting problems and solutions.
"[...] One way is to randomize the confounding element so that it’s effect is not influencing the element under investigation. If you have to find out which fertilizer is the best, but the water and soil have different properties over your massive 20 acre field, then you need to randomize where you put what fertilizer. It gets even more complicated since you might randomly put them in a really bad formation, so these super smart people came up with ways around that too. [...]"
[1] Think about robotic greenhouses in cities, for instance. If you isolate your system well you might not even need pesticides, and you might save tons of water and energy.
Why not? You're not born knowing how to farm, and if a classroom can turn out better farmers, it makes sense. There's a lot more to crop growing/harvesting and animal husbandry than "throw seeds over there, and while you're waiting for plants to grow you can take these animals' eggs."
I have read the occasional article on modern farming methods -- some for smaller farms, some on large-scale agriculture -- and no matter how much science and careful methodology I guess beforehand that they use, the articles surprise me anyway.
That's a farming program inside a large university, not a farming school. Are there actual farming schools or does "farming school" just refer to an agriculture department?
This essay is brilliant! I think the most important takeaway is the last part: it is possible to learn cognitive skills using well-designed software (software coaches would be useful, but less so, for physical activities, e.g. learning to play tennis).
Couple this idea with raw informational input like that offered by the Khan Academy or Wikipedia and you can generalize Kasparov's point to many other fields. This is the future, somebody should do this.
I loved reading this article! While I cut my teeth designing Excel and Access applications using VBA, I ended up using Project Euler to teach myself C#. It was a perfect environment to learn syntax and program flow for the language and ultimately gave me enough confidence to be able to land my first real programming job a few months later.
In the last part of the article, the author is basically describing what was depicted in The Matrix - Need a skill? Give me a few seconds to download it and we'll get started...
Edit: Among many, many other examples (William Gibson, Star Trek, even Douglas Adams comes quickly to mind)...
I smiled as I saw this at the top. I know web programming's where it's at, but learning Django's been kind of a chore for me. And with an ok math background, Project Euler's been fun so far. Now working on Problem 11 to compute the highest product in a 20x20 matrix. Think nested lists might do the trick.
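In case it helps anyone at the same point, here is one possible shape for that nested-list approach, assuming the goal is the greatest product of four adjacent numbers in a line (the actual 20x20 grid from the problem is omitted; `grid` is assumed to be a list of 20 lists of 20 ints, and the function name is just for illustration):

    def largest_product(grid, run=4):
        rows, cols = len(grid), len(grid[0])
        best = 0
        # Right, down, and the two diagonals; reversed directions give the
        # same products, so these four are enough.
        directions = [(0, 1), (1, 0), (1, 1), (1, -1)]
        for r in range(rows):
            for c in range(cols):
                for dr, dc in directions:
                    end_r, end_c = r + dr * (run - 1), c + dc * (run - 1)
                    if 0 <= end_r < rows and 0 <= end_c < cols:
                        product = 1
                        for k in range(run):
                            product *= grid[r + dr * k][c + dc * k]
                        best = max(best, product)
        return best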
I could definitely relate to the whole "let me start with a 1500 page tome". I also tried my luck with a Visual C++ brick about 10 years ago; that worked out exactly as described in the article. To some extent I don't feel those books were ever meant for total beginners.
They aren't meant for people to read. They're meant to sell. Read the highly-entertaining and no doubt at least partly accurate explanation in http://philip.greenspun.com/wtr/dead-trees/story under the subhead "Flaming Summary: Why Computer Books Suck."
Pointless note: Another way to solve that triangle is to flip the left-most leg of it counterclockwise until it's exactly parallel with the rectangle. It is then plain to see it's half of the rectangle.
Do you mean rotate the triangle counter-clockwise? The leftmost leg of the triangle is longer than the left edge of the rectangle, so I don't think it will be as plain as you suggest. You'll end up with something like this:
Project Euler reminds me of my high-school programming classes in the early 90s, especially the "computer contests". Lots of puzzle-type problems and string manipulation.
While I like Project Euler, I find rosettacode to be more down-to-earth. In everyday life you're much more likely to have to solve rosettacode's tasks (e.g. download a page from the web, parse something, generate a bitmap, send a mail) than Project Euler's math-based tasks.