Code Is Never “Perfect”, Code Is Only Ever “Good Enough” (exceptionnotfound.net)
193 points by joeyespo on July 18, 2016 | 126 comments



One of the depressing things about code is that it is astoundingly easy to build something that does amazing things in controlled circumstances but is otherwise practically garbage. Promotions, venture capital, and all other manner of rewards can result from said garbage. The “real work” is in figuring out how to move forward from something like that, and lots of stress comes from trying to do so without breaking anything. There is little reward in maintenance.

“Good enough” is really the type of attitude that leads to this unfair balance of effort.

Unfortunately, we live in a world where criticism is levied very heavily on where you are now, and not on how you got there. What, version 2.0 of the product sucks now? Screw it, “1 star”, boycott this; fire the team responsible! Never mind the fact that version 1.0 was a rickety mess, all of those guys left, and anyone would be lucky to get the thing working at all, much less in a way that does not introduce any new bugs.

One of the big benefits of open-source is having a chance to understand what’s going on. Would as many people invest in a product, despite promising behavior in version 1.0, if they could see a critical review of how that version was actually implemented? Ideally, we can see not only how a product appears to work, but how well-engineered it is according to experts.


What you say is indeed important, but I wouldn't say the work before that is not "real work". It's just different types of work. Ask any successful startup founder what their initial codebase was like and they would say--if they're honest--it was shitty.

There's a reason for this. The "real work" you mention comes into play only after the product has reached product-market fit and needs scaling. Before that phase, it's all about struggling to make things work, and people don't have time to refactor every time they need to make a small tweak and experiment. So naturally, by the time the "real work" guys come in, they will think "WTF, this code is so shitty, whoever wrote this is an idiot". But they should know that they're there only because of that shitty code. If they had been in that same situation (and rational), they would have done the same thing--written consequentially shitty code.


Couldn't agree with it more. "Walking on water and developing software from a specification are easy if both are frozen".

Startups are the opposite of having clear requirements. There is a lot of uncertainty. In my opinion, it is usually orders of magnitude harder to figure out what to do that gets users than to build something "perfect" that nobody cares about.


Early work is closer to a lottery than anything else. If you only look at winners you will end up with a distorted view, but right product, right time, right market is more about luck than skill. MVP is all about validation because there is no way to judge good vs great before you build it. So, the only way to improve odds is to just roll more dice.


You can still write great code (meaning: clean non-leaky abstractions, proper separation of concerns, functional testing, clear organization, and documented unintuitive bits) without having market validation, and I'd suggest that doing so will actually make it a lot easier for you to shift the pieces of the project, remove them, rewrite them, extend them, etc. as you start to home in on the marketplace fit. That goes doubly as you need to start growing your development team and need to on-board people effectively, so everyone can be productive without just rapidly accruing compound interest on the technical debt that's already there.

It doesn't take that much longer to write great code when it's just part of the first principles of your engineering process. You just do it as you go, but best practices need to be scaffolded and modelled for everyone beforehand to give them a solid starting point for what "minimally viable" actually looks like. A single well-placed sentence that describes something necessary, but non-idiomatic or non-obvious, that takes 10 seconds to write can save you hours or days of tracing and debugging at that crucial moment right before you're onboarding a pilot customer or pitching investors.

Very often the bar for "minimally" is set way, way too low, and the trope that this is just "startup life" usually lays problems at the feet of engineering that don't belong there: it justifies sometimes-horrifying technical debt that really reflects process and management failures, and eventually scapegoats engineering for it.


I agree with you in theory, but from experience it is extremely hard to achieve when you're actually in that position.

Contrary to what you say, it does take much longer to write great code when you're trying new things every day. Every time you try something new, by definition you're building something you hadn't planned for when you first started. Which means that if you really wanted to be perfect, you would end up refactoring every day. This is not a small effort in total.

You could say you should have prepared for that scenario from the beginning, but I don't think it's a good idea to believe that you can predict the future especially when you're building something that hasn't existed before.


In my experience, the "invention phase" is when it's most critical and beneficial to have and demand more discipline around your output artifacts, precisely because keeping context and content around your fast, iterative decision-making and implementation phases is what keeps you from wasting time on duplicated effort or needlessly repeating mistakes over and over again.

Literally every situation I've ever seen in 20 years, across all manner of marketplaces and every imaginable stage of development, has had the immediate faux-gains from skipping the extra 10%-20% of time now (to produce something at least coherent, commented, and composable) ultimately overwhelmed by the soon-to-arrive costs of having to live with the consequences of the dubious "shortcuts" taken.

I agree with you that it's extremely hard to maintain that kind of discipline when under the gun, but that's a people-management and expectations problem, not an engineering-effort problem. I'm 100% certain that in any given window of time, no matter how crunchy that crunchtime is, that if my team spends 10-15 out of every 100 man-hours producing a higher-quality "minimally acceptable" artifact that they'll both materially accomplish more in that first 100 hours than if they'd spent all of it hacking together a barely working dumpster-fire in the same window of time, and that it will pay glaringly obvious dividends for their productivity in the next 100 hours after, and the next after that, and so on.

Lacking that discipline is neither a necessary condition nor a near or long-term benefit to a prototyping or product development cycle. It's purely a function of leadership, expectations management, and experience.

(I'm even giving the benefit of the doubt that what's being done "hasn't existed before", which is overwhelmingly not the case. More typically it's different people and different teams just painfully learning the same hard lessons in parallel, often because there's an assumption/delusion that what's being done is a legitimate pioneering effort, instead of the probable reality that it's really a minor variation on something else that already has well-established successful models available to learn from and mimic.)


> I'm 100% certain that in any given window of time, no matter how crunchy that crunchtime is, that if my team spends 10-15 out of every 100 man-hours producing a higher-quality "minimally acceptable" artifact that they'll both materially accomplish more in that first 100 hours than if they'd spent all of it hacking together a barely working dumpster-fire in the same window of time, and that it will pay glaringly obvious dividends for their productivity in the next 100 hours after, and the next after that, and so on

The problem is, at the very beginning, all (or most) of that code will be thrown away because you really don't know what works. How many cycles of wasted extra 20% effort would it take for you to change your mind?


You missed the point. In that same 100 hours, spending 15-20 hours applying best practices has universally resulted in actually getting more out of that 100-hour block. The 100 hours spent on the spaghetti mess has always produced less, provided fewer insights, and often exactly nothing that can be soundly iterated on.

This is because the friction and pain of technical debt doesn't wait until the end of the 100 hours to create problems. It's biting you every second of every minute of every hour and dragging down your productivity and velocity as you're just plowing forward soaking the noodles.


It really depends on what one's standard of "shitty code" is, but in my case my shitty code does perfectly fine until it reaches significant scale (as in millions of users). My thought process is: when I reach millions of users I'd better hire someone to help me with coding anyway, and it's not too late to take care of it then.

On the other hand, I have also made mistakes of over-engineering to make the code maintainable. What happens then is I can't really detach myself from the original idea no matter how much evidence there is to show me that it's not working, since I've invested so much in the technology. In that case the time I spent to make the code maintainable becomes a mental liability rather than an asset.

I don't doubt you are a great developer and have great experience, and I don't doubt that spending 15-20 hours out of 100 hours when you've reached product market fit is something you should do, but I can say from my own experience and experiences from other successful/failed entrepreneurs I've met that it's absolutely a bad idea to do so when you're just 1-3 guys bootstrapping. In the beginning the only thing that matters is getting people to actually use your product, so you should spend 100% of your resources on getting there, not on optimizing maintainability at all.

If you have actually been in that situation (a founder who's built his own product from scratch by bootstrapping) and still think it's best to spend 15-20 hours out of 100 hours working on maintainability, please let me know. I personally don't know any founder who thinks that way.


I'm also not sure why we're narrowing down that 15-20 hours focused entirely on maintainability.

It can also be 15-20 hours making sure you're following language/tool idioms and conventions so that you can iterate quickly instead of fighting against yourself and your tools being used incorrectly.

It can be about making sure you aren't making leaky abstractions so that when you suddenly realize you need to make big or small changes to address market needs you're able to do it quickly and efficiently rather than having to rewrite a big brittle pile of al dente noodles with every shift and change you're presented with.

It can be about making sure that the little idiosyncratic things you had to do have some explanation around them so when your fellow boot-strapper has to come in and make a change or understand something you worked on, but you're home sick as a dog because your kid sneezed straight down your throat, your fellow boot-strapper can understand what you did, why you did it, and can carry forward with that change without depending on your availability holding everything up, and because you didn't make a bunch of leaky abstractions on top of it all they can make that change without breaking 99% of the rest of the prototype/product.

It can be about making sure a sufficient amount of what you're doing is done so with composability in mind, so that when you figure out that thing you did that you ended up throwing away, but now turns out would be supremely useful to meet current demands, both still exists someplace and can actually be used without having to spend 10x as long rewriting it.

Having great scaffolding, best-practices models, and good process makes everything go faster... not slower. Learning how to properly use your VCS is an enabler, not a drain, but it takes time away from just hacking away on something. Setting up a CI pipeline is an enabler, not a drain, but it takes time away from just hacking away on something. Building scaffolding tools that can generate the first 20% of a project template for you is an enabler, not a drain, but it takes time away from just hacking away on something. Learning your tracing and debugging tools is a massive enabler when trying to understand and prototype something, but it takes time away from just hacking away on something.

I should add the caveat that all those things are enablers... only if you do them well... otherwise they're a giant drag because they're hurting you, not helping you, and they become a part of your technical debt.

Do you spend literally no time on any of that because it takes time away from hacking? Do you think having none of those enablers in-place is helping you all work faster?


Writing bad code is often the first step in writing novel good code. It's the 'does this library do what it says it does' kind of exploration. Can I do this in HTML 5?

First principles work fine on CRUD apps, less so on new concepts. If you're going to toss 90% of your code as useless, then an extra 20% is simply wasted time. "I am going to try the same thing in 5 libraries to see which one I like" directly means tossing 80% of that effort away.


I completely disagree. It's much less useful for "CRUD apps" because "CRUD apps" are an obvious and known commodity. New and novel concepts are much less broadly intuitive, and are precisely the kinds of things that demand extra rigor even if you're going to end up throwing it all away. None of it is wasted effort when you're considering, and admitting, that a big chunk of the work is exploration.

Keeping documented feature branches of all those versions you tried before landing on one is an absolutely critical result of any exploration process, as well as some aggregation of information and context in a central place that points to and describes those various versions and your final outcome. This helps you keep from repeating mistakes, it provides context to you and others in the future for why things are the way they are, and it provides an archive of workable variations that you can quickly migrate to in the event your "final answer" turns out to be wrong.

I haven't written a "CRUD app" since 2008, and since 2010 nearly all my work and the work of my teams has been in greenfield development (in some cases literally inventing new never before seen things). I see it time and time again. We move much, much slower as individuals and as a group when we take shortcuts under the auspices of time-pressure and don't take the time to be thorough even in prototyping.

So we spend the same 100 hours hacking away to create a dumpster filled with spaghetti as fast as possible or we spend 100 hours designing, building, implementing with intention, and every. single. time.... the latter process actually gets us further toward prototype, product, and/or market validation goals. The former process ensures that we have 0% of anything that is useful no matter what the results of the exercise are. The latter process ensures that we can legitimately iterate on what we've done based on the results of the exercise.

Iterating on a trash heap is extremely expensive, not a way to run faster.


If you're working from an existing code base then it's not greenfield development. It literally means development that lacks constraints imposed by prior work. For example, sticking with an existing language, OS, compiler, etc. is a constraint.

So, clearly we are talking about something different.


I can't tell what you're actually replying to in my comment.

I'm talking specifically about using a different language, a different OS, a different deployment pipeline, new testing methods, new security constructs, new infrastructure orchestration methods, entirely new architectural paradigms, etc. etc. etc.

In fact, take that as an example: say it's a new programming language. You will absolutely waste more time dealing with the pain of doing things epically wrong, fighting with your tools, doing really crappy kinds of log-print debugging, etc. than you would taking 10-20% of your time up front learning some normal idioms and conventions for the language and learning to use richer tracing and debugging tools to find problems and to instrument your prototypes.

Just because you sat down and started coding and debugging without having any idea what you were doing, but at least you were "doing", emphatically does not mean you are getting to a goal of minimally viable test, or prototype, or product faster. It just means you started typing sooner.


I still think we are talking about different things. If you want to know how much battery charge is lost when a cellphone gets a GPS lock, that's a very focused requirement. Trying to also make your test code useful in the long term is a vast and unnecessary constraint. It's the stage where you want to understand the problem.

"Wait, I need a Mac to write iOS code, ouch."

Ideally you should be producing far more research than code. You're talking logging, and I am talking the "H" part of "Hello, world."

PS: If you can get away without writing code so much the better.


This isn't just in code these days. I'd call it the Ikea phenomenon. Things that look functional and act functional in a constrained set of circumstances, and are deemed as good enough. The cost is often proportionally low, too. (ever tried to move a fully loaded Ikea dresser?)

But, as you've pointed out, the real work is in maintenance, durability, and quality, not veneers.


Arbitrarily relocatable furniture tends to be made of steel and weigh 300 lbs. I think the extended metaphor works for software as well.


That's a great metaphor...


There are definitely organizations where code that works but doesn’t appropriately account for those uncontrolled scenarios will earn you nothing more than a bad reputation. I’ve seen this happen, and I’ve seen their name slowly drift to the bottom of the list when it comes time for a raise or promotion.

I’m generally surprised at how little interest VCs have in this, though. I think a sizable percentage of their misses could’ve been avoided altogether if they had just audited their prototype, or whatever they’re claiming qualifies as their MVP.

People generally don't understand how many different scenarios software needs to account for, and how quickly those inputs can compound into a very big and complex task. Creating the appearance of functional software takes a very small fraction of the time and expertise it takes to actually make something production-ready, but I don't think a lot of non-developers understand just how big that gap is.


Software VC's don't have to care about code being crap that only demonstrates well in a particular set of circumstances for exactly the same reason that mining VC's don't have to care that the same fraction of gold is found in every cubic meter of the company's mining site as in the handful of lucky drill samples.

Those VC's can pump and dump: promote the stock, then dump it before it dives.

The people that have to care are the company officers who want to grow a successful business. If those people are cynical, then the venture is probably doomed.


"version 1.0 was a rickety mess, all of those guys left, and anyone would be lucky to get the thing working at all, much less in a way that does not introduce any new bugs."

No user cares about any of this. Fix it. Don't try to rationalize shipping garbage.


> The “real work” is in figuring out how to move forward from something like that, and lots of stress comes from trying to do so without breaking anything.

I'm a lot more impressed with the evolution of the human species than how doctors have been able to maintain those running systems.

If you want to design a human from the ground up using proper infrastructure that doesn't have ailments or diseases, go ahead. You'll be lucky if you end up with a fruit fly.


The human body is the result of 1,000,000,000,000,000,000,000+ organisms dying before or after reproducing. That's evolution. It has absolutely nothing to do with the "maintenance" that doctors perform on the human body, especially when doctors are loath to lose even a single organism.

I'm not even sure why I responded to your comment, since it was so random and unrelated to the rest of the thread.


If you think it's so unrelated, then maybe you didn't understand what he was trying to say?

Doctors are necessary and important and deserve to be paid well, since they deal with life. But that can be done by anyone really. I'm not saying any idiot can become a doctor, it's that most doctors do what any other doctor can do.

I'm not interested in those things either. I would rather work on something that can change how people live their lives--which one could arguably say is close to evolution.


It means you didn't understand my comment.

>The human body is the result of 1,000,000,000,000,000,000,000+ organisms dying before or after reproducing.

I guess I have to spell it out. "Bad, unmaintainable code" is the result of 1,000,000,000,000,000,000,000+ random requests being shoehorned in without proper design of the whole thing, but just on the basis of whether it still works or meets some last-minute requirement. I hope this makes my analogy clearer. (I realize the number is an exaggeration, I am referring to the process involved in "evolving" code, rather than "properly designing" it.)


I see things a bit differently. Some domains are well understood and it's relatively easy to write perfect code (e.g. mathematical functions, list manipulating functions, etc.), while in others it may be much harder (the file uploading example he mentions). To me a good aim of writing code is to move bits and pieces from the latter camp to the former, so that more and more of your codebase is well understood (and perfect!). That usually involves finding the mathematical meaning of the abstractions you're using. (Shameless plug- I wrote a little blog post on the subject: http://asivitz.com/posts/programming_in_haskell)


> Some domains are well understood and it's relatively easy to write perfect code (e.g. mathematical functions, list manipulating functions, etc.),

Those are good examples of things that seem easy to write perfectly until you change some requirements that seemed like they would never change.

Take your list manipulating library and throw in a requirement that all node allocations have to be pooled, or that it needs to work in a multi-threaded environment. All of a sudden you have to rewrite them all. (for example, see: STL vs EASTL)
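
As a toy illustration of how a new requirement invalidates an otherwise "perfect" implementation (this Stack class is hypothetical, not taken from STL or EASTL), a minimal Python sketch:

    import threading

    class Stack:
        # v1 was three obvious methods around a list, and arguably "perfect".
        # The new thread-safety requirement touches every method: the lock
        # must exist before any operation runs, so nothing survives unchanged.

        def __init__(self):
            self._items = []
            self._lock = threading.Lock()

        def push(self, item):
            with self._lock:
                self._items.append(item)

        def pop(self):
            with self._lock:
                return self._items.pop()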

Someone already mentioned floating point, but math can definitely get tricky if you have to be exact. An example from my own past experience - it's really easy to write code to cut a 2d triangle using an arbitrary line as the cutting axis. It almost always works, and it usually works flawlessly. Until you get close to degenerate cases, skinny triangles or overlapping vertices. Throw in a requirement that degenerate cases must resolve correctly, and suddenly you have to start over and use a different representation of numbers. Make it a little more general to handle 4+ sided polygons, and before you know it, you spend a decade trying to get it right. (for example: http://www.complex-a5.ru/polyboolean/comp.html)
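
A minimal sketch of where the degeneracy comes from, assuming IEEE 754 doubles (the orient predicate is the textbook cross-product test, not the poster's actual code):

    def orient(ax, ay, bx, by, cx, cy):
        # Sign of the cross product: which side of line AB does C lie on?
        # Zero means collinear. Near-degenerate inputs land within rounding
        # error of zero, so the sign -- and the cut topology -- is arbitrary.
        return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)

    # The rounding error underneath, in its most familiar form:
    assert 0.1 + 0.2 != 0.3  # holds under IEEE 754 double arithmetic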


Certainly the requirements change, and we modify the foundations of our systems all the time to reflect this. However, there are those useful abstractions which seem to survive a certain amount of time, so as to allow us to study them, and then there are the yearly hypes.

That list manipulating library might still be based on concepts which are understandable and communicable to their users, allowing us to reason at different levels on their internal and external behavior. This in turn allows us to say, at a certain abstraction level: this code is complete. Thinking of Occam we might continue: given our preconditions, our abstractions, our algebra: this is the simplest.

It is fair to believe that 'perfect' means that all unnecessary complexity is removed. But I would like to argue against its usage, as the word carries too much emotional baggage in this context.


Actually, I'd say writing perfect code in the mathematical domain is extremely hard - floating point types are a minefield, and infinite-precision arithmetic is a minefield too if you consider the possibilities for running times exploding. For example, computing the mean of a list of numbers is a hard problem when looked at in its full generality [1] - and that's using a forgiving definition of "perfect": always correct and terminating, running at a reasonable speed.

[1] http://hypothesis.works/articles/calculating-the-mean/
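
The article's headline failure mode is easy to reproduce; a minimal sketch assuming IEEE 754 doubles (the two values sit just below the float maximum of ~1.8e308):

    xs = [8.988465674311579e307, 8.98846567431158e307]

    def naive_mean(values):
        return sum(values) / len(values)   # the intermediate sum overflows

    def scaled_mean(values):
        n = len(values)
        return sum(x / n for x in values)  # dividing first avoids the overflow,
                                           # at the cost of precision elsewhere

    print(naive_mean(xs))   # inf
    print(scaled_mean(xs))  # ~8.99e307, roughly the right answer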


Great link! But also, the mean is itself an imperfect summary of some distributions. Any single-number summary will be misleading in some cases.


Anscombe's quartet is a great illustration of that.

https://en.wikipedia.org/wiki/Anscombe%27s_quartet


That's really cool! Btw, I'd love to drop that line into casual conversation some time. "Anscombe's quartet" has a nice flow to it.


Floating point isn't really the baseline for great mathematical structure, it's just a pervasive hardware standard.


I get really twitchy when I read statements like yours. But I am completely unreasonable in my definition of perfect -- it is really beyond extreme. Perfect is an absolute, so by definition it is unobtainable. Perfect means going beyond the notion of "executes correctly every single time": the code MUST also be aesthetically perfect. Perfect names, perfect formatting, perfect location, etc. No one can meet my criteria for perfect and never will (I can't).

In that sense, there is no perfect code, nor will there ever be. There will only EVER be good enough.


You’ve created a concept and attached the word “perfect" to it. Your concept is different from the shared definition. This leads to disagreements, misunderstandings, and (as you said) twitching.

The purpose of words is to share information. If you have to write a paragraph describing your definition every time you use a word, use a different one.


Not really. When I look up the definition, it is 'completely free of faults or defect'. I don't think I've strayed past that meaning.


yes you have. you've moved it into the subjective by requiring pleasing aesthetics


Perfection is subjective because flaws are subjective. Aesthetics are part of the code. If a piece of code is functionally perfect but ugly or inelegant by some measure, then it is imperfect by that measure.


right, flaws are subjective. I'll try that one on my QA team next time they file a bug report ;)


There is more than one class of flaw. Some are objective, ie. fail to meet spec. Some are subjective, typically where the spec doesn't cover.

If software functions correctly but is unnecessarily hard to use due to (eg.) bad layout, would you still call it perfect?


>has tabs

>literally trash


Why have the word perfect then?


For the same reason we have the word infinity.


But infinity does exist...?


Infinity exists the same way that perfection exists. Abstractly.


idealism?


The real problem with what you've said is that "finding the mathematical meaning of the abstractions you're using" is an incredibly hard process, and almost nobody succeeds at it. And even then, you have to map these abstractions to hardware in a way that doesn't leak. And the only way to do that is to implement the machine itself (or a special hard-to-find subset of it). Those are the only abstractions capable of leak-free modeling.


I had a manager that never used "finished" or "done". He always said "where it needs to be right now".


I have the start of a lightning talk on something similar, called "Why Software Sucks". The idea is that we only make software good enough to get by - we will always choose features over robustness if we possibly can. Every project ends when it either fails, or hits feature-complete. No one sits around polishing the pile of bugs and bad decisions that is a feature-complete app unless they have to - much more fun to start something new!


It seems like most consumer apps incur a crazy amount of technical debt while largely paying people in equity, then if they take off raise a bunch of money to pay other people in cash to sort it all out... Pretty good system actually.


Enterprise apps are no different. Lean startup is doing better in the enterprise than it is in the "startup" world.

Early on, every app is laden with technical debt. They barely work. When "barely works" becomes "doesn't work", we fix them just enough so they barely work again, and start piling on more features and functionality. We may build some scaffolding and do some refactoring to prevent dropping into doesn't-work-land, but that's not the same as robust! So software - all software - hovers around that barely-works/doesn't-work borderline.


Management doesn't invest in things they don't understand. They rarely understand tech (even if they are technical people, tech is misunderstood).

If you care about the outcome of a craft, you'll get craftsmen, and they'll produce things and you'll pick the best ones.

Imagine if a chef had to serve the first try at a new dish, if a photographer had to use his first shot, if a director had to use the first take, if a visual designer had to use the first concept. These processes are easier for people to understand because a layman can tell if two flavors didn't combine well, if a model blinked, or if something looks off. People (even technical ones) can't experience technical problems immediately or directly in the same way. Other fields solve for this with engineering, but ours rarely is allowed.

We often ship the first architecture that gets brainstormed, and the first implementation that works. It's never "We implemented the MVP using patterns/frameworks A, B, C, D, here's a table of the strengths and weaknesses as we see them, statistical summary of the time to complete each feature, and of defect rates. This next slide is a plot of the relationship between request latency, capacity, and estimated infrastructure cost of each implementation". It's rare to even have refactors scheduled. Just cleaning up your shit is "wasting time", never mind how long it takes to ship new features given how cluttered things are.

Sure, engineering is costly; so is not engineering - you just aren't projecting or measuring the latter costs. It's all trade-offs: if you aren't considering the expected values of different options, with numbers, then you haven't optimized, you've just rolled some dice. Sure, of course it's always worked; everything has always worked for the survivors. There's a big difference between what's unforeseeable and what has merely been left unforeseen.


If my house is on fire, I don't want the fire department to wait for the better firetruck for the job to show up.

Time to market and allocation of resources are serious issues for management. They may not understand the tech, but I've found engineers rarely understand the business context, either. "Good enough" is good enough. And we've seen process get more and more focused on this since the late '90s... Agile, Lean, etc. Get a product out the door, even if it's buggy and flawed. Get feedback from actual customers. Find out what's really a problem and what we only think will be a problem.


Right, there's a whole spectrum between "a prototype worked once on Bob's laptop, how do we get that live" and "We invented 12 new declarative DSLs that encompass our problem domain, and formally proved their implementations correct."

Engineering isn't the latter end of that, it's precisely just the process of figuring out the trade-offs, the diligence of figuring out where on that spectrum (highly-dimensional continuum really) of process your project should be to get the greatest chances of success. Of course it includes time to market and concepts of "good enough". But it's math and planning and risk accounting and not the cargo culting, handwaving, business platitudes, salesmanship and politics of authority that get utilized in the majority of cases I've seen for making those decisions. These result in monuments to compromise -- the output of arguments between "captain cowboy" and "doctor diligent" (neither of which correctly account for real-world incidence of risk or what that costs the organization which employs them), arguments of engineering without any engineering being done.

Engineering isn't disconnected navel-gazing perfectionism, it's how you prevent it. Engineering is both why we don't live in Kowloon Walled cities, and why most places have water and plumbing and transportation despite a wide variety of obstacles and economic realities that would preclude "ideal" solutions.


"Get feedback from actual customers" is, more often than not, dishonest rationalization. If you never use customer feedback in order to increase product quality (instead of just growing the feature list), "good enough" is just a code word for brushing all dirt under the carpet.

The fact is: software is a lemon market and teams/companies are rewarded accordingly.


Instead of the 80-20 split at some tech cos, perhaps there should be a 70-20-10 split: main project / side project / refactoring and cleanup. I would wager Google, for example, would be almost immeasurably better off with that approach.


"Early on, every app is laden with technical debt. They barely work. When "barely works" becomes "doesn't work", we fix them just enough so they barely work again, and start piling on more features and functionality."

This is the biggest issue: software is seen as something that is built, not something that has to be maintained, improved, and extended.

It's amazing how companies can pour tens of millions into the build step and lose that investment by not maintaining and improving it.


They don't lose the investment, though. They put it in back-burner mode where very little effort is expended, re-org the team to work on something else, and the software continues to generate revenue. It's a pattern I've seen a lot.

Another one of those real-world insights I've had is a variant of Conway's Law... the software built by a team reflects that team's communication structures, but if you dissolve the team and hand maintenance to someone else, they'll never make sense of the original code, because it doesn't match the communication structure of the new team.


Every day it spends on the back burner, though, it gets less and less maintainable. The libraries and technology it's built with get outdated, the systems it has to interact with keep evolving, and the knowledge about the system disappears.

Then when they need to update it (say, for a new version of IE) they can't find anyone competent to work on it, because no one competent wants to work on such an outdated POS. A simple bug fix or added feature may involve upgrading a tonne of dependencies and code changes, and the project is effectively dead.


Yeah, I've learned this the hard way, being one of those people brought in to "sort it all out". I now make a point of reviewing a company's codebase before joining. I refuse to join a startup with a shitty codebase that falls apart after touching it once.


Interesting. Are you a consultant/contractor or do you typically work as an employee? I can't imagine a company would just open up their code base to someone (even under NDA) unless they were fairly desperate.


Consumer apps tend to have a short expected lifetime, so technical debt doesn't tend to compound to the point of taking more time to maintain than it would have cost to do it properly in the first place.

By that point if it's worth sorting out it's already paid for the people to clean it up.


It might not be "perfect code", but I find it fascinating what certain institutions do to create bug-free and easily testable code, institutions which are involved in work where small errors can quite literally ruin billions of dollars of investment.

For example Coding Standards from NASA's Jet Propulsion Laboratory (JPL) for C[1] and Java[2]

This will also give you an idea of just how tedious it is to do so.

I am not even sure if enforcing those strict standards leaves a Turing-complete language.

[1] http://lars-lab.jpl.nasa.gov/JPL_Coding_Standard_C.pdf

[2] http://lars-lab.jpl.nasa.gov/JPL_Coding_Standard_Java.pdf


That's a good point. Safety critical systems commonly require loops to have maximum iteration counters and also forbid recursion.

So what's left probably isn't Turing complete.
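
For flavor, a small sketch of that style in Python (the bisection example and its bound are illustrative, not from the JPL standards):

    MAX_ITERS = 200  # statically known bound, in the safety-critical style

    def bisect_root(f, lo, hi, tol=1e-9):
        # Find a root of f in [lo, hi] by bisection, with a hard iteration
        # cap instead of an open-ended `while True` loop.
        for _ in range(MAX_ITERS):
            mid = (lo + hi) / 2.0
            if hi - lo < tol:
                return mid
            if f(lo) * f(mid) <= 0.0:
                hi = mid  # sign change in the lower half
            else:
                lo = mid  # sign change in the upper half
        raise RuntimeError("iteration bound exceeded")  # fail loudly, never spin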


From [2] above:

> important general differences from the JPL institutional C coding standard for flight software references (JPL-C-STD) are: (1) the Java standard allows dynamic memory allocation (object creation) after initialization, (2) the Java standard allows recursion, and (3) does not require loop bounds to be statically verifiable. Apart from these differences most other differences are due to the different nature of the two languages.

That said, the standard in question is explicitly only intended for ground-based systems.


How can it not be Turing complete? You have the option of doing those things, even if by accident, but you avoid them as much as you can. Turing completeness does not mean a program runs forever or has a mind of its own. I'm not necessarily trying to be combative, and I am not the most well versed in computational theory, but this statement seems off.


Perfect is subjective, but "correct" not so much. And when you get sufficiently skilled, you realize correctness can, for sufficiently complex software, only come together with the more subjective qualities, since it's hard to make "ugly" code correct.

"Piece of [bleep]! I know you [bleeping] work! WHY WON'T YOU [BLEEPING] WORK, MOTHER[BLEEPER]?!"

With that kind of attitude (trial-and-error?) you will never be able to consistently produce correct code, let alone perfect whatever that is. Though I think it's not typically the attitude that is the problem, but rather the underlying lack of skill/intelligence which makes people resort to such approaches.


> With that kind of attitude (trial-and-error?) you will never be able to consistently produce correct code, let alone perfect whatever that is.

Sometimes the software development process can be very frustrating, and I fully understand the feeling when you're overwhelmed and confused and frustrated because you don't know how to approach the problem. Working with black boxes usually causes that feeling in me, or having an elusive bug that could come from literally anywhere in a huge system and having to debug it with seemingly random reproduction steps -- that always gets me.


Code quality can only ever be as good as your understanding of the problem domain. A more accurate statement would be "Problem domains are rarely perfectly understood."


IMHO This should be more highly rated. It seems that we write code to understand the problem domain, and only by writing code do we truly understand the problem. This is - again IMHO - why iterative techniques work so much better than waterfall-style techniques.


It's scary sometimes that previous consultants (I'm a consulting developer) may have gotten an application "working", but have incurred so much technical debt it is nearly impossible to make changes. They've escaped and taken the biggest chunk of money, leaving something built on a foundation of toothpicks. Thus the project becomes a ticking time bomb and requires full-time programmer maintenance. You wonder if the next feature request will become the impetus for the collapse. Then you turn into Robert Gates: "I don't know where. I don't know when. But something bad is going to happen!!!"


I'll copy a Bruce Schneier quote and say "Software is a process, not a product."

It's no surprise that there's no such thing as perfect code, because code is a mere reflection of a real-life requirement, which is always somewhat inconsistent and ever-changing. Sticking to one static point is, well, pointless.


I think today it's more useful to think of most software as a living organism interacting with its environment - it's constantly evolving and adapting to external changes (user needs, dependencies, 3rd party services, etc.).


I still haven't learned this lesson well enough. Code I think is near perfect always turns imperfect over time. The main corollary for me is that I can spend every waking second of my life making my code better, and I'll never be happy. I want to better figure out how to enjoy 'good enough'.


Instead, I might suggest that you enjoy making your code better and be happy with the process rather than the result. Then every crappy piece of code is an opportunity rather than a curse.

Improving code beyond the point where it gives material benefit is not really a problem except for the fact that it deprives you of the opportunity to work on crappier code. Especially when working with new code, I find it useful to concentrate on process rather than end product. If you write tests like X and refactor like Y, how does it affect the result?

In some ways this is really freeing, because you get over the fear of writing crappy code. If it's crappy, you just get to improve it. It also allows you to explore ideas and ways of doing things without worrying so much about the result. Concentrating on producing good code can often (in my experience) lead to achieving a local maximum but does not allow you to kick out of it to achieve something better.


I was getting closer to that for many years -- enjoying the process -- until I decided to try and start a software business. All of a sudden, there's a huge downside to improving code without having a clear material benefit.

I'm with you on losing the fear of writing crappy code being freeing. At some point I realized that everyone's code including mine is crappy code, and there's no way around writing crappy code or introducing bugs. Realizing that did make me a better coder and a better manager. It let me be a bit more humble as well as empathetic to others, and led me to think more about the importance of testing.


I wonder if this attitude will change once things like Coq become more accessible.


Agreed. I find that when working in some functional systems like haskell/clojure or even functional-style javascript, I do occasionally produce code of reasonable complexity that I call pretty much perfect.


I think the problem is most people are not capable of proving code is correct even informally (and writing correct code, which can only happen if you're writing it while proving correctness in your mind). Using formal verification seems to me like something additional you have to do after convincing yourself of correctness (convincing the computer).


I don't know if we will ever have tools that are that safe and as easy to use as the alternatives.


It depends. People were reluctant to give up a lot of bad coding practices (goto, variable scoping, writing in efficient assembly rather than a high-level language, mutable global state) until the better way had better tools or was taught better.

Edit: Maybe one day people will look back and say, "you shipped code without formal correctness proofs? Were you all high?"
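
For a taste of what that might look like, a minimal Lean 4 sketch (assuming a recent toolchain with the omega tactic; `double` is a made-up example). The point is that the file refuses to compile until the proof is complete:

    def double (n : Nat) : Nat := n + n

    -- If this theorem were false, the checker would reject the file
    -- rather than ship the bug; `omega` discharges the arithmetic goal.
    theorem double_eq_two_mul (n : Nat) : double n = 2 * n := by
      unfold double
      omega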


There is code that pushes the boundaries of the human mind, complex systems that cannot have any additional complexity if they are to succeed.

And there's other code that pushes against physical or economic limits: processor speed, memory available, network latency. In those systems, writing in an inefficient, high level language is just as wrong as writing the first type of code in a low level one.

Fast, complex, difficult to fully verify code is not bad coding practice any more than a rocket is a worse vehicle than a Volvo because it explodes more. Volvos can't go to space.


But one day our skill in "controlling explosions" (bad logic) may be as good as our skill at doing so in a Volvo, where we can get the power, with the efficiency and predictability.


News at 11: winging it is less work than being careful!

(Of course, the situation gets fuzzier when you must account for other factors: work/dollars/lives lost due to bugs in dependencies, the integral of maintenance costs over time, etc.)


Right. The tools don't need to be easier than competitors, they just need to be easy enough and provide enough benefit that doing it another way isn't worth it.


It's your job as a worker not to cut corners. It's their job as a manager to fire incompetence. Needless to say I think everyone is failing to do their job.


Not sure how this addresses changing requirements...


/remindme in 50 years


The obligatory link to "Everything is Broken": https://medium.com/message/everything-is-broken-81e5f33a24e1


This reminds me of a reaction I had to a very old internet war about PHP and Rails.

https://sivers.org/rails2php

"...The entire music distribution world had changed, and we were still working on the same goddamn rewrite. I said fuckit, and we abandoned the Rails rewrite."

It made a bunch of people mad, because (I think) they were reading it as "Rails couldn't X, and Rails couldn't Y, and Rails couldn't Z, but PHP could totally do X, Y, and Z, so I used PHP and it was a big success".

But having been in this situation, I think what he was saying was "I couldn't X, and I couldn't Y, and I couldn't Z, so I said fuckit and did it the way I know how..."

It has nothing to do with rails or PHP, or node, django, react. Programmers who have never finally said "fuckit" and abandoned "the true and right way" are programmers who haven't released.

Not saying you should stick with ancient technology, but "good enough" is an inevitable ingredient in any software that ends up being useful to someone.


I think this is rambling about things that are largely unimportant (spaces or tabs). At least that's what his "Perfect Code" definition seems to be to me.

The important part is the flipside, not the presentation: Given a problem domain and a context there is an optimal or "best" solution or there is not, and we can get at the answer for most cases.

So code can be perfect if we understand perfect to mean optimal. If we understand it to mean spaces/tabs, probably not, though given an input and desired output we can even prove some of these as well (for instance, underscored function names are more quickly processed than CamelCase: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.158....).


I didn't read the whole paper you linked, but doesn't this sentence in the abstract contradict your last sentence?

>Results indicate that camel casing leads to higher accuracy among all subjects regardless of training, and those trained in camel casing are able to recognize identifiers in the camel case style faster than identifiers in the underscore style.


Yes, it appears that way without reading the paper. Basically the main factor to me is speed, which has a huge differential for untrained subjects: "The model finds that identifiers written in the camel case style took 0.42 seconds more time." Doing the math, you see how much wasted time that is. So I should have been more specific about the parameters. Your quote is perfectly valid as well.
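
A hedged back-of-envelope (the read count and working days are assumptions for illustration, not from the paper):

    0.42 s/identifier x 1,000 identifier reads/day ≈ 7 minutes/day,
    or roughly 30 hours per developer per year at ~250 working days.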


It's important to know what kind of code you're working on before you start setting low standards.

Some code is perfect, but it's only perfect because it was defined with extreme mathematical precision (Ex: Haskell Prelude code). The value of this type of code is in direct proportion to how often it's reused. Often the specs are based on timeless and well defined ideas, and the implementation crystallizes.

On the other end of the spectrum is loosely defined ad hoc code. The specification behaves more like a living thing and the code often tries to adapt to the moving specification over time. This type of code tends to bring a lot of value to the business and often relies on the foundations laid by perfectionist code.


I stopped looking for perfect code after I saw the fail man page, and saw that the one on my computer was at version 4.something.

Yes, Haskell's Prelude is perfect when matched with an (artificial) specification that requires exactly it. Yet, in practice I don't think I ever saw any Haskeller who does not want to add or remove something from it. Partial functions are almost unanimously disliked, ditto for String functions; not to mention the universal complaints about how badly it is generalized.


I wouldn't call Haskell's prelude 'perfect'. For a few examples of imperfect things see https://downloads.haskell.org/~ghc/master/users-guide/bugs.h... (Find 'prelude' in this page, has several appearances).


I didn't mean to imply all the Prelude code was perfect. What I meant to imply was that some parts are, insofar as the specification is complete and specific enough that it yields exactly one solution. An extreme example would be the id function.
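
A sketch of what "the spec yields one solution" means for id, rendered in Python for accessibility (Haskell's Prelude definition is literally `id x = x`):

    def identity(x):
        # The polymorphic spec "a -> a" admits exactly one total, pure
        # implementation: knowing nothing about x, all you can do is return it.
        return x

    assert identity(42) == 42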


I hope this attitude is used sparingly and in cases where either the problem is supremely hard or where time is exceedingly short.


You are missing the point. "Good enough" depends entirely on the context. In some contexts, the bar for "good enough" will necessarily be quite high, while in other contexts it won't matter as much. There is no perfect code -- because no two humans could agree on such a definition.

Sure, people may disagree on what is good enough. But hopefully that disagreement will be easier to resolve than one about what the perfect solution might be.


My only concern is that this line of thought can be used as an excuse to stop trying. As long as we agree that one should strive for perfection given the constraints at hand (money / time / problem difficulty / experience level, etc.), I don't have any objection to the idea.

On a related note, I feel another analogous statement that I now see used dangerously is "Premature optimization is the root of all evil". Yes optimizing prematurely can be paralyzing but the other extreme where one refuses to do back of the envelope calculations and wilfully writes code that will be slow is damaging as well.


I can appreciate that. I have some perfectionist tendencies myself. I think it is a tricky balance that we must walk. I like your idea of "as perfect as our constraints allow".

Another important consideration is the level at which the quality exceeds the needs of current and future users of the software. Past this point (which might be impossible to determine or fully agree upon), one must acknowledge they continue to iterate and polish for purely personal reasons. This can be fine in certain situations, but it may be a detriment in professional endeavors.

And of course, like always, one must consider opportunity costs.


That's a good attitude to aspire to, but, in real software development, very little code survives contact with users unscathed. If you're writing a C compiler or something with a well-defined spec, then, sure, you can theoretically write perfect software. But, even your basic CRUD web site is unlikely to ever reach "perfect" status.

Edit: For example, the HN code was obviously not "perfect," since we just got several new features. :)


Time is always exceedingly short. If doctors were required to perform appendicectomies in ~30 min, but were not held accountable for whether the patient lived or died, you would see a market dominated by some pretty barbaric hospitals (think no sanitation/anesthetics)... which people would just accept, because at least they would be better than the average back-alley butcher.


Tooling is getting better. Ever looked at what Amazon does with formal methods to "prove correctness" for some of their systems?

Of course you can argue that "good enough" for them is "damn near perfect".

http://cacm.acm.org/magazines/2015/4/184701-how-amazon-web-s...


got a link to a non-paywalled source?


I [attended][1] a great lecture by [Alex Martelli][2] back in EuroPython 2013 titled "'Good Enough' is Good Enough!"

From the [summary][3]:

> Our culture’s default assumption is that everybody should always be striving for perfection – settling for anything less is seen as a regrettable compromise. This is wrong in most software development situations: focus instead on keeping the software simple, just “good enough”, launch it early, and iteratively improve, enhance, and re-factor it. This is how software success is achieved!

[1]: http://simongriffee.com/notebook/europython-2013-notes#goode... "My notes. Biggest takeaway was to use 'perfect' as a verb rather than adjective."

[2]: http://www.aleax.it/ "Alex's homepage."

[3]: https://ep2013.europython.eu/conference/talks/good-enough-is... "The lecture video is also on the page."


I think that might be paraphrasing implications of Gödel's Incompleteness Theorem.

Non-trivial code can be either consistent or complete, but not both (within a system of abstraction).

This explains Joel's "Leaky Abstractions" essay as well as the conjecture that code cannot be perfect.

However, "good enough" thinking is often just sloppy thinking. Even if mathematics has boundaries, it doesn't mean you just shrug and say "a nod is as good as a wink".

Craftsmanship and quality are perceived in terms of utility over time. "Good enough" often doesn't meet that bar.

Engineers overbuild to support capacity and plan for the worst. "Good enough" tends to under-build at or below capacity and hope for the best.

"Good enough" is a just reaction to over-engineering (the adding of unnecessary fixtures and 'perfection'), so the warning has value and there are some engineers who use it in this way.

But others often mistake over-building for over-engineering and cripple their organizations and products by seeking "good enough".


Haha, there's some code of mine I've been regularly improving for 30 years now. Even versions I thought had attained perfection I rewrote later, when I realized they were a POS.

The only software that is ever "done" is code that has been abandoned and is no longer in use.


I try to follow this mantra from Kent Beck: "Make it work. Make it right. Make it fast."

I find that in most of the development environments I have worked in, the focus usually ends up being on getting new features completed in order to help the sales department make sales. It's a never-ending cycle that gradually accumulates technical debt, such that pushing new features out the door will eventually come to a halt.

Yes, I agree that code can never truly be perfect. However, as developers we should always strive to make our code better.


We should more than ever strive for perfect code. Perfect as defined by all stakeholders in a piece of software. There is really no point in striving for anything less. Sure, you may be forced to stop at some point (time and money being the limiting factors), but if you can, you should always continue making software better!

"Good enough" is not good enough as a target. It's uninspiring and unsatisfying.


I think the definition of perfect code depends on whether the computer is the end or the means. If it's the end, then there may never be perfection.


Code is never "good enough", code is only ever "less in need of fixing than other code."


Correction, code is at most "good enough".

I've seen (and written) plenty of code that was less than good enough.


...and problems change even if the solution doesn't. At this point I only aim to build things that will be durable and sensible for about four years. After that, some other change seems to come along that invalidates the original code, no matter how pretty it was.


I'd love to see what YC's guidance on code quality is. My guess is "ship it if you get users".


"If the building isn't on fire you're not iterating fast enough"



Right on point. Furthermore, it just led me to the realisation that achieving software perfection is like reaching the speed of light: the closer you get to it, the heavier you are, and the more effort is needed to go further.
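
For reference, the formula the analogy leans on, with "perfect" playing the role of c:

    m(v) = m0 / sqrt(1 - v^2/c^2), which grows without bound as v -> c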


... like you could create anything and call it perfect. There is no perfect.


Does the work of Charles Simonyi and intentional programming change this?


Unless it is formally verified, it is not really perfect. Not sure why we shouldn't strive towards that final state of evolution. Conceding the game at "good enough" is a cop-out.


Formal verification depends on frozen requirements. These are quite rare in practice.


And most of the time not even good enough.


Insanity is doing the same thing over and over again and expecting a different result.


ha the Dick and Bob bit is pretty darn funny



