
I like how you qualify program length as a fallible measure (they're more guidelines). Similar to your story of overdoing tiny classes and short methods because of a belief from programmers you admired, I once dogmatically followed this as a rule, always extracting code that was used more than once, and only then. I think that's a standard learning stage: you learn something, over-apply it, then learn the distinctions for when it is appropriate and when it is not.

Optimizing for length is only one criterion - optimizing for clarity is more important (strange observation: in writing, redundancy enhances communication); optimizing for flexibility/change is another. I like the idea of just expressing your current understanding, very simply - not weighing it down with suspect prophecies. Change it as your understanding improves; as you reuse it. Brooks observed that having a spec and several different implementations leads to a more robust spec; and there's the idea of not generalizing code until you've implemented it three times, for different purposes. This is the opposite of architecture astronautics - being grounded in actual instances of concrete experience.
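A minimal sketch of that "rule of three" idea, with hypothetical report functions (the names and the CSV/TSV scenario are my invention, just for illustration): you tolerate two concrete near-duplicates, and only the third use reveals what actually varies.

```python
# First two uses: near-duplicates, deliberately kept concrete and separate.
def report_csv(rows):
    return "\n".join(",".join(str(v) for v in row) for row in rows)

def report_tsv(rows):
    return "\n".join("\t".join(str(v) for v in row) for row in rows)

# A third use (say, pipe-delimited) exposes the variable part: the
# separator. Only now is the generalization extracted, grounded in
# three concrete instances rather than a guess about future needs.
def report(rows, sep=","):
    return "\n".join(sep.join(str(v) for v in row) for row in rows)

rows = [(1, 2), (3, 4)]
assert report(rows) == report_csv(rows)
assert report(rows, "\t") == report_tsv(rows)
```

Had the generalization been guessed up front, the parameter might have been the wrong one (rows-as-dicts? a formatting callback?); waiting for the third instance makes the abstraction cheap and obvious.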

So, I give up a simple, single theory of how to code, and I'm lost - which is perhaps an accurate appraisal of our current understanding of programming. Only the actual details of a problem guide you.

From what you said elsewhere, I think the simple key is to keep focusing on the problem, not the program. My old supervisor said I was over-concerned with theory. "Look at the data!" he admonished me.



