I believe Knuth says in his discussion of binary search in The Art of Computer Programming that it took some ridiculous number of years from when binary search was first described to the first published implementation free of bugs.

I find it highly likely that the majority of programmers would produce a buggy version of binary search on their first attempt. Why? History indicates programmers often make small mistakes even when writing simple algorithms. A survey of 26 papers on variations of binary search found that 6 of the papers had serious errors in the published algorithms.[1] 4 of the errors were "design" errors, i.e. the algorithm as designed had a fault; 2 were implementation errors (1 in assembly, 1 in COBOL). All of the errors appeared in peer-reviewed publications, so even peer review does not always spot errors in a "simple" binary search algorithm. Why would you expect recent graduates to do any better? (For reference, a correct version is sketched below the citation.)

[1] http://comjnl.oxfordjournals.org/content/26/2/154.abstract
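For reference, here is roughly what a correct version looks like. This is my own minimal sketch (iterative, inclusive-bounds convention), not taken from the paper or from Knuth:

    class BinarySearchSketch {
        // Iterative binary search over a sorted int[], inclusive-bounds convention.
        // Returns the index of key, or -1 if it is absent.
        static int search(int[] a, int key) {
            int lo = 0, hi = a.length - 1;
            while (lo <= hi) {
                int mid = lo + (hi - lo) / 2;   // written this way so lo + hi cannot overflow
                if (a[mid] < key)      lo = mid + 1;
                else if (a[mid] > key) hi = mid - 1;
                else                   return mid;
            }
            return -1;
        }

        public static void main(String[] args) {
            int[] sorted = {2, 3, 5, 7, 11, 13};
            System.out.println(search(sorted, 7));   // prints 3
            System.out.println(search(sorted, 4));   // prints -1
        }
    }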




It's fascinating that fewer of the errors were implementation errors than design errors, though "the distribution of errors among families of algorithms is not uniform" probably accounts for that. Then again, the paper is from 1983...

I scanned through the paper and found no explicit mention of the typical coding error of computing the midpoint as M := (L+H)/2 instead of M := L + ((H - L) / 2) (though the paper uses both forms interchangeably). So I suspect a re-analysis would find more coding errors than design errors in languages without arbitrary-sized integer auto-conversion. My reasoning for that conclusion is based on: http://googleresearch.blogspot.com/2006/06/extra-extra-read-...
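To make the failure mode concrete: with fixed-width integers, L + H can overflow when both bounds are large, while L + ((H - L) / 2) cannot. A tiny illustration (the values are made up for demonstration, assuming 32-bit ints as in Java):

    class MidpointOverflow {
        public static void main(String[] args) {
            int lo = 1_500_000_000;
            int hi = 2_000_000_000;
            int midBuggy = (lo + hi) / 2;        // lo + hi wraps past Integer.MAX_VALUE
            int midSafe  = lo + (hi - lo) / 2;   // subtract first, so no overflow
            System.out.println(midBuggy);        // prints -397483648
            System.out.println(midSafe);         // prints 1750000000
        }
    }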

Of course, given the stories of all the CS-degree-wielding applicants who flat-out can't do FizzBuzz, and the amusingly incorrect designs/implementations from commenters who scoff at the notion of not being able to do FizzBuzz and try to prove they can but fail, I was already inclined to believe there's a lot of incompetence to go around. It's not even necessarily a problem with the CS programs, but there is the danger of learning to recite an algorithm's steps word for word without knowing what they mean or what the algorithm is useful for, and then failing on a live use case.
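For anyone unfamiliar with it, FizzBuzz really is this small. A minimal sketch of the usual statement (print 1 through 100, replacing multiples of 3 with "Fizz", multiples of 5 with "Buzz", and multiples of both with "FizzBuzz"):

    class FizzBuzz {
        public static void main(String[] args) {
            for (int i = 1; i <= 100; i++) {
                if (i % 15 == 0)      System.out.println("FizzBuzz");
                else if (i % 3 == 0)  System.out.println("Fizz");
                else if (i % 5 == 0)  System.out.println("Buzz");
                else                  System.out.println(i);
            }
        }
    }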



