My girlfriend, a CS double-major who wanted to learn JavaScript, tried Codecademy earlier this year. She spent more time trying to guess the exact output the solution required (i.e. the exact string) than actually programming. The app provided no feedback besides "you're wrong". In a few cases, I actually had to debug their minified JavaScript code to determine what output the program expected. Eventually she became really frustrated and quit.
I'm not sour on Codecademy, but it seems to me they need better ways of evaluating completion of an exercise.
This issue definitely isn't unique to Codecademy. I spent way too much time figuring out the exact combination of whitespace that would satisfy CodeEval. I suppose finding the right set of fine-grained test cases remains a problem. Perhaps they should start using QuickCheck or some other property-based random-testing system (if they aren't already). Those can systematically build up from small base cases to odd corner cases.
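To sketch the idea: instead of diffing the student's output against one exact expected string, a grader can check the *behavior* of the submission against a reference solution on randomly generated inputs, starting small and growing toward corner cases. This is a toy illustration of that approach, not QuickCheck itself or anything Codecademy/CodeEval actually runs; all function names here are made up for the example.

```python
import random

def check_submission(student_fn, reference_fn, gen_input, trials=100):
    """Grade by behavior, not exact output strings: run the student's
    function and a reference solution on random inputs, growing the
    input size so small base cases are tried before odd corner cases.
    Returns (passed, message) so the learner gets a concrete
    counterexample instead of just "you're wrong"."""
    for trial in range(trials):
        size = trial // 10          # start tiny, grow toward larger cases
        x = gen_input(size)
        expected = reference_fn(x)
        got = student_fn(x)
        if got != expected:
            return False, f"failed on input {x!r}: expected {expected!r}, got {got!r}"
    return True, "all trials passed"

# Hypothetical exercise: "reverse a list".
gen = lambda n: [random.randint(-9, 9) for _ in range(n)]

good = lambda xs: xs[::-1]          # correct submission
bad = lambda xs: xs + [0]           # wrong submission

ok, msg = check_submission(good, lambda xs: list(reversed(xs)), gen)
```

The key payoff is the failure message: it names the concrete input and the two outputs, which is exactly the feedback a learner needs to debug instead of guessing at whitespace.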
appreciate the comments, bentlegen - we're working on our evaluation of answers. we try to be fairly loose, but we're always improving. hope you (and your girlfriend!) will give us another chance.