
My girlfriend, a CS double major who wanted to learn JavaScript, tried Codecademy earlier this year. She spent more time trying to guess the exact output the solution required (i.e., the exact string) than actually programming. The app provided no feedback besides "you're wrong". In a few cases, I actually had to debug their minified JavaScript code to determine what output the exercise expected. Eventually she became really frustrated and quit.

I'm not sour on Codecademy, but it seems to me they need better ways of evaluating completion of an exercise.




Udacity provides the feedback: "Your code passed 7 of 11 tests. It failed on test: input = [0,0,0]."

The tests are arranged in order from normal input to increasingly strange user input, i.e., edge cases.

It should be easy enough for Codecademy to implement something similar; a rough sketch follows.
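
For illustration, here's a minimal sketch (in Python, since the thread contains no code of its own) of what that style of grader might look like. The grading function, test cases, and deliberately buggy submission are all made up, not Udacity's or Codecademy's actual code:

    # Hypothetical grader: run the submission against an ordered list of test
    # cases (normal inputs first, edge cases last) and report how many passed
    # plus the first input that failed, Udacity-style.
    def grade(submission, test_cases):
        passed = 0
        first_failure = None
        for args, expected in test_cases:
            try:
                ok = submission(args) == expected
            except Exception:
                ok = False
            if ok:
                passed += 1
            elif first_failure is None:
                first_failure = args
        msg = "Your code passed %d of %d tests." % (passed, len(test_cases))
        if first_failure is not None:
            msg += " It failed on test: input = %r." % (first_failure,)
        return msg

    # Example: a buggy "max of list" submission that mishandles all-zero input.
    tests = [([1, 2, 3], 3), ([5], 5), ([-1, -2], -1), ([0, 0, 0], 0)]
    print(grade(lambda xs: max(xs) if any(xs) else None, tests))
    # -> Your code passed 3 of 4 tests. It failed on test: input = [0, 0, 0].

The exact wording doesn't matter; the point is to name the failing input instead of returning a bare "you're wrong".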


This issue definitely isn't unique to Codecademy. I spent way too much time figuring out the exact combination of whitespace that would satisfy CodeEval. I suppose finding the right set of fine-grained test cases remains a problem. Perhaps they should start using QuickCheck or some other random-testing system with test-case generators (if they aren't already). Those can systematically build up from small base cases to odd corner cases.
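
As a rough, hedged illustration of that QuickCheck-style approach, here is what it looks like with Hypothesis, a property-based testing library for Python; the buggy submission and the property are invented for the example:

    # Hypothesis generates random inputs for the property below, starting from
    # simple values, and shrinks any failure to a minimal counterexample.
    from hypothesis import given, strategies as st

    def student_max(xs):
        # Deliberately buggy submission: assumes the list is non-empty.
        m = xs[0]
        for x in xs:
            if x > m:
                m = x
        return m

    @given(st.lists(st.integers()))
    def test_matches_builtin_max(xs):
        assert student_max(xs) == (max(xs) if xs else None)

    # Run under pytest: Hypothesis reports the shrunken failing input, xs=[]
    # (an IndexError), exactly the kind of corner case a hand-written suite
    # tends to miss.

A course platform could keep its curated "normal" tests for friendly feedback and lean on generated inputs only to surface the corner cases.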


The arrangement at 4clojure.com seems to work well for this.


Appreciate the comments, bentlegen - we're working on our evaluation of answers. We try to be fairly loose, but we're always improving. Hope you (and your girlfriend!) will give us another chance.



