
Am I alone in thinking Project Euler is not particularly well suited to learning to program?

When I looked into it, many of the problems may be tackled in a programming language through brute force, but cleverer approaches usually come from mathematical manipulations rather than programming insights. The mathematical emphasis didn't seem to lead naturally to higher level programming techniques involving modularity or abstraction beyond the functional level.




"The mathematical emphasis didn't seem to lead naturally to higher level programming techniques involving modularity or abstraction beyond the functional level."

Before you can apply modularity or abstraction, you need some code you can modularize or abstract.

Project Euler gives people the opportunity to write code that does something. Before someone writes code that does something, any talk of how to organize code will be meaningless.

If you look at the points of emphasis in the AP curriculum in the article, they jump immediately to the code organization phase, before students have successfully solved some simple problems with code. Some number of Project Euler problems should be a prerequisite before techniques for organizing code are even mentioned.
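For a sense of scale, the very first Project Euler problem (sum all the multiples of 3 or 5 below 1000) is exactly that "code that does something" stage - solvable in a line or two of Python:

  # Project Euler problem 1: sum of all multiples of 3 or 5 below 1000.
  print(sum(n for n in range(1000) if n % 3 == 0 or n % 5 == 0))  # 233168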


Put another way:

If the student still can't remember that basic C programs usually start with

  int main(int argv, char *argv[]) {

Or what

  return TRUE;

does, how do you expect him to go about understanding data structures, search algorithms, OOP?


I think you mean "int argc" ;-)


Yup, I noticed that typo after the edit window expired!


I at least disagree with you. I found they helped me move from novice to proficient. That was quite a few years ago, though, and I wonder whether increasing CPU power means you no longer hit the ceiling of brute force taking excessively long, which is what forced you to write more elegant solutions.

What I found most appealing about them is that they're a half-hour task at most, nice bite-sized challenges.

And from what I remember, admittedly only doing the first 20 or 30, while some of it was mathematical manipulation, it also forced you to do things you might not be so comfortable with, like recursion, or even simple things like storing the primes in an array so you didn't have to check all the numbers again.
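For example, a prime test that caches the primes found so far is about the level those early problems nudge you toward (just a sketch, not any official solution):

  # Build primes incrementally, reusing the ones already found for
  # trial division instead of re-checking every smaller number.
  primes = [2]

  def next_prime():
      candidate = primes[-1] + 1
      while any(candidate % p == 0 for p in primes if p * p <= candidate):
          candidate += 1
      primes.append(candidate)
      return candidate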

It also made me feel more confident when the 'ideal' solution was either the same or practically the same as mine. And invariably, if it wasn't, you'd learn a new language trick.

Then again I was always very comfortable with maths and would invariably spot any mathematical manipulation 'trick' immediately, so perhaps I'm a bit of an outlier. Although I never went past A-Level standard (age 18).


I'm pretty comfortable in maths too (e.g. I never scored below 98% in any of my college exams in mathematical areas), but my point is that the problems are not programming focused, rather, they tend towards being mathematics focused.


I think jlees makes an excellent point, that PE manages to bridge the gap between the basic "hello world" stuff and the more "algorithmic-y" aspects of programming. In my case that was what really got me snowballing toward being a reasonably capable coder.

I do think you have to be excited about discrete math to really plow through the problems, but the nice thing is, discrete math is the easiest kind of math to get hooked on, just because it's not much more than fancy counting.


My gripe with PE is that it's made up of specific cases instead of generalized problems. Sometimes, the solution that most people arrive at, and that is accepted by the judge, is one that fails in the general case.

For example, problem 13: http://projecteuler.net/index.php?section=problems&id=13

The easiest solution, and the one given as an example, is to truncate all the numbers and add them as doubles. But that wouldn't work with all possible inputs of a generalized version of the problem. It's not a very interesting solution either.
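In Python it amounts to something like this, where `numbers` stands for the problem's list of 50-digit strings (a sketch of the approach being criticized, not a recommendation):

  # Keep only the leading 15 digits of each number, scale back up, and
  # sum as doubles; the first 10 digits happen to survive the rounding
  # for this particular input, but not for arbitrary inputs.
  total = sum(float(n[:15]) * 10.0 ** (len(n) - 15) for n in numbers)
  print(str(int(total))[:10])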

I think PE trains future programmers to ignore edge cases, which is exactly the thing that causes the most bugs in real world software.

EDIT: googling solutions to this problem that people have posted on their blogs, I find most fall into one of two categories:

1. Trivial solutions using a bignum library or a language with built-in bignums. These are correct but nothing interesting was learned by writing them.

2. Solutions that assert that the numbers can be truncated before adding, which works in this case but not in general, as I noted earlier. The really sad thing is that the people who solved it this way seem to be under the impression that this is a general solution. So the exercise has not only taught them to write buggy code, but has also helped them validate a false hypothesis.


Why would it fail in the edge case?

A double has 15 digits of precision. Even if the 16th digit of all 50 numbers is 9, it still would not be enough to affect the 11th digit of the sum.

If all the numbers were solid 9's then you would worry about the loss of precision from adding many small numbers to a large one, but with just 50 inputs at worst you lose 3 digits of precision, which is still not enough to affect the 11th digit.
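A quick way to sanity-check that bound empirically - random rather than adversarial inputs, so it supports the claim without proving it:

  import random

  # Compare a double-precision sum of fifty 50-digit numbers against
  # the exact integer sum; the first 11 digits should agree.
  nums = [random.randrange(10**49, 10**50) for _ in range(50)]
  exact = str(sum(nums))
  approx = str(int(sum(float(n) for n in nums)))
  print(exact[:11] == approx[:11])  # almost always True for random inputs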


A simple edge case:

  10000000000...0000000000
  10000000000...0000000000
  10000000000...0000000000
  10000000000...0000000001
  19999999999...9999999999
Only 5 numbers, but you need all digits even to compute the first digit of the sum. There are many less obvious edge cases; any set of numbers that adds up to

  60000000000...0000000000
with at least one carry from the last column will do.


When you store 19999999999...9999999999 as a float you round it, not truncate it.

19999999999...9999999999 = 20.....00 when rounded to 15 significant digits.

You can even truncate it to 15 digits just to get it in there, and then round it to 14 digits.
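You can see the rounding directly in Python:

  # 199...9 (a 1 followed by forty-nine 9s) rounds up when stored as a double
  print(float("1" + "9" * 49))  # 2e+49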


Rounding can just as easily give you the wrong answer as truncating, e.g.:

  100000...000000
  199999...999999


Good point.

Hmm, I guess I would split the numbers in half and sum each set of halves (making sure to set the exponent correctly).

Then add the less significant half to the more significant half.

i.e. basically add the numbers in order of magnitude. It doesn't have to be just two halves.
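A sketch of that idea, with `numbers` again standing for the list of digit strings:

  # Sum the low and high halves separately as doubles, then combine, so
  # the small contributions aren't swallowed by a large running total.
  half = 25  # 50-digit numbers split into two 25-digit halves
  low = sum(float(n[half:]) for n in numbers)
  high = sum(float(n[:half]) for n in numbers) * 10.0 ** half
  print(str(int(high + low))[:10])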

Did anyone post this as a solution?

I do remember learning this somewhere - it was a warning about loss of precision when adding lots of small numbers to a few large ones. And the solution was to do all the small numbers first.

Seems like this would be a perfect place to teach this lesson. Do you know if this site does that?


The boundary between recreational mathematics and computer science is flimsy at best; while Project Euler may not make you the world's best programmer, it'll get you well entrenched in some of the fundamental brain tricks in computer science.

That's how I got interested in the whole field - Informatics Olympiad papers and mathematical problems unified the "10 PRINT 'HELLO'" world with the algorithmic one. I think the latter is surprisingly neglected these days, especially among self-taught people (not to say you can't be a great self-taught programmer, but you're less likely to have come at it all from the mathematical world).


See now, the IOI problem sets do emphasize algorithmic thinking - I think they're marginally better than Project Euler problems - but again, it's very low-level stuff. It must be, in order to have a handful of problems that can be tackled in a couple of hours or so.

I happen to believe that taste in crafting of program structure and selection of appropriate abstractions is more important than algorithmic acuity for most modern programming. Computational geometry, graph theoretical manipulations, dynamic programming and so on certainly have their uses, but they are usually a small core of something and not reimplemented often. Maintainable programs are a bigger issue IMO; and an excess of focus on the finer details of computational algorithms doesn't necessarily solve that issue.

I rather think it can hamper the goal. Really efficient algorithms often "break the rules", exploiting commonalities or attributes that cross abstraction boundaries.

Take an ultra-simple example: finding the index of a (possibly missing) element in an unsorted array with a linear search. The fastest in-order sequential algorithm is to add the element being searched for to the end of the array, and then iterate through the array, halting when the element is found. This reduces the work compared to a simpler search because it removes the need for a termination-condition test (index less than length of array) on each loop iteration. But it also breaks other things: it won't work in the presence of threads, and it's surprising that a search operation is not a read-only operation. Appending may need a memory reallocation; if the array is large and the growth algorithm is geometric (which it should be), it could very surprisingly cause an OOM. Sure, you can get around this by pre-allocating the required slot, detecting multiple threads, etc., but now you've made things more complicated and harder to maintain in the search for the most efficient algorithm.

The goal of efficiency can cause more assumptions to be made or pre-conditions to be required, and these assumptions and pre-conditions leak across abstraction boundaries, ultimately making maintenance changes have multiplicative rather than additive complexity costs.
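For reference, the sentinel trick looks something like this sketch - note the write into the array, which is exactly the surprise described above:

  def sentinel_search(arr, target):
      # Append the target so the scan needs no per-iteration bounds check.
      n = len(arr)
      arr.append(target)  # the surprising part: a search that mutates its input
      i = 0
      while arr[i] != target:
          i += 1
      arr.pop()  # remove the sentinel
      return i if i < n else -1  # -1 means "not present"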

So I think it's fine to target big-O algorithmic costs when it makes sense and doesn't contort the software architecture too much, but applying an excess of problem insight can work against better architecture. Premature optimization, as tautologically as ever, is premature.

(I write as someone self-taught who competed in IOI etc. The problem sets were fun, but I wasn't under any illusions that I was learning a whole lot. Using recursion and branch pruning to solve combinatorial search problems is a distraction when you're better off looking at the bigger picture, dare I say it, the business picture.)


I've also taken part in the IOI - twice - and like you, am a self-taught programmer (I was self-taught when I took part in the IOI; I've since also studied computer science). In my opinion the IOI problems, at least for a competition, are pretty good, as they focus on using correct algorithms and data structures as opposed to hacks or mathematical tricks. Brute force solutions almost never produce good results for IOI problems. Edge cases must usually be considered, as the test cases used to grade the solutions try to be varied and cover as many cases as possible.

IOI does nothing to teach good software architecture, and I don't think any similar competition could ever hope to do that. It does, however, teach algorithmic thinking, use of data structures, understanding of time and space complexity, and coding under pressure. (Well, all of these to a degree - like you said, recursion, branch pruning, trees and graphs and the like. Is more left to be desired? Sure! But it's not a bad start, especially for high school students who are unlikely to have much exposure to problems which would otherwise have forced them to learn these things - I know people even now who, having only ever done business applications, would fail miserably at solving any IOI-like or algorithmically heavy programming problem because they were never sufficiently introduced to the things the IOI focuses on.) All great lessons, even if there is more left to learn to become a great programmer.

I personally dislike PE because the problems I attempted seemed like they could often be better solved by employing a mathematical trick or insight as opposed to clever use of algorithms, data structures or programming ingenuity. My programming skills are much greater than my math skills, so I guess ultimately, because of that, PE just doesn't do it for me.


Brute force doesn't even work for a large number of the problems, but sometimes finding ways to solve a problem without brute force can teach you how to think like a programmer.

But you are right in thinking that the majority of the problems are better solved by mathematical techniques beyond the reach of most of us without a math degree or deep mathematical knowledge.


You are correct that the Project Euler approach isn't well suited for learning to program, although I would qualify it with "for some, if not most" - especially the less mathematically inclined, or those who care more about things like social or real-world problems. You have to show programming and computer science as being more than just solving artificial puzzles or doing math & logic for its own sake.

It should be noted that this site and others have been around for 10 years, and in the meantime the number of people majoring in computer science has actually dropped (I think it finally turned upward this year though), and the proportion of female and minority students in computer science has also dropped. Notice too how the author of this article, the author of the Project Euler site, and, from what I can tell, everyone here is a guy. Every article I read about 'why Johnny can't program' and so forth is by a guy, usually an older person who already understands the relevance of programming, but perhaps can't understand how it might not be so evident to others. I know it took me a long time (almost ten years) to see it. When I was first introduced to programming, to draw an American flag on the screen in BASIC, it was boring and pointless compared to the ColecoVision next to it :) It wasn't until I created a CGI Perl app for letter-writing with our Amnesty International group and also a HyperCard program for an HCI class that I dived in.

Teaching programming or computer science 'for its own sake' is not going to help most people see the relevance of it to their lives and to the world. Show how you can accomplish things with programming, how you can help people, and so forth. Like someone else pointed out, the Rosetta Code site is similar, but at least it gives some more practical problems.

Also, there's the fact that computer science is not required in most schools, and thus, like engineering, most don't ever think to major in it or work in the field.


I haven't spent much time with Project Euler, but I can see how puzzles and riddles could be a more enticing introduction to programming for a lot of people. I'm not particularly averse to learning via giant tomes on programming or in a classroom. Then again, I found I learned a lot more about problem solving in Python (and had way more fun) doing the Python Challenge (http://www.pythonchallenge.com/) than I did following the examples in Dive Into Python.


As a fan of David Münnich's notpr0n, I must say that the pythonchallenge website looks like a really cool spinoff and seems more approachable than Project Euler.

Edit: Not to mention you're not restricted to using Python, just as with Project Euler. What's not to like?


As someone with a strong math and science background but who came late to the code game, who has never been able to align the stars for a CS degree or work in programming, Project Euler has been crucial. I can wrap my head around pretty much any math you want to throw at me, but Project Euler gives me the opportunity to know I'm tackling a language-skill-appropriate challenge and immense satisfaction once it's figured out.

That and HN meetups.


My experience is that it's a good way to solidify your understanding of basic syntax, and it gets you thinking about speed of execution, but it's not so good at providing structure to think about.


The brute-force approach can still be fun to code, and regardless of your method, it's probably challenging you to think in ways that your typical job doesn't require.

That said, I still wish there were a structured set of solution write-ups that walks one from brute force through to elegance. Without that, I don't think most Euler-ers are going to progress.



