Hacker News

One of the depressing things about code is that it is astoundingly easy to build something that does amazing things in controlled circumstances but is otherwise practically garbage. Promotions, venture capital, and all other manner of rewards can result from said garbage. The “real work” is in figuring out how to move forward from something like that, and lots of stress comes from trying to do so without breaking anything. There is little reward in maintenance.

“Good enough” is really the type of attitude that leads to this unfair balance of effort.

Unfortunately, we live in a world where criticism is levied very heavily on where you are now, and not on how you got there. What, version 2.0 of the product sucks now? Screw it, “1 star”, boycott this; fire the team responsible! Never mind the fact that version 1.0 was a rickety mess, all of those guys left, and anyone would be lucky to get the thing working at all, much less in a way that does not introduce any new bugs.

One of the big benefits of open-source is having a chance to understand what’s going on. Would as many people invest in a product, despite promising behavior in version 1.0, if they could see a critical review of how that version was actually implemented? Ideally, we can see not only how a product appears to work, but how well-engineered it is according to experts.




What you say is indeed important, but I wouldn't say the work before that is not "real work". It's just different types of work. Ask any successful startup founder what their initial codebase was like and they would say--if they're honest--it was shitty.

There's a reason for this. The "real work" you mention comes into play only after the product has reached product-market fit and needs scaling. Before that phase, it's all about struggling to make things work, and people don't have time to refactor every time they need to make a small tweak and experiment. So naturally, by the time the "real work" guys come in, they will think "WTF, the code is so shitty, whoever wrote this is an idiot". But they should know that they're there only because of that shitty code. If they were in that same situation (and rational) they would have done the same thing--written the same consequently shitty code.


Couldn't agree with it more. "Walking on water and developing software from a specification are easy if both are frozen".

Startups are the opposite of having clear requirements. There is a lot of uncertainty. In my opinion, it is usually orders of magnitude harder to figure out what to do that gets users than to build something "perfect" that nobody cares about.


Early work is closer to a lottery than anything else. If you only look at winners you will end up with a distorted view, but right product, right time, right market is more about luck than skill. MVP is all about validation because there is no way to judge good vs great before you build it. So, the only way to improve odds is to just roll more dice.


You can still write great code (meaning: clean non-leaky abstractions, proper separation of concerns, functional testing, clearly organized, and unintuitive bits documented) without having market validation, and I'd suggest that doing so will actually make it a lot easier for you to shift the pieces of the project, remove them, rewrite them, extend them, etc. as you start to home in on the marketplace fit, and doubly so as you need to start growing your development team and need to on-board them effectively so everyone can be productive without just rapidly accruing compound interest on the technical debt that's already there.

It doesn't take that much longer to write great code when it's just part of the first principles of your engineering process. You just do it as you go, but best practices need to be scaffolded and modelled for everyone beforehand to give them a solid starting point for what "minimally viable" actually looks like. A single well-placed sentence that describes something necessary but non-idiomatic or non-obvious, and takes ten seconds to write, can save you hours or days of tracing and debugging at that crucial moment right before you're onboarding a pilot customer or pitching investors.
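As a concrete illustration of that "single well-placed sentence": a minimal Python sketch, where the function, the batch size, and the upstream API are all invented for the example.

```python
# Hypothetical sketch: the kind of one-sentence comment described above.
# The function, the batch size, and the "upstream API" are invented.

def chunk_ids(ids):
    """Split ids into batches for an upstream API call."""
    # Batches are 100 because the (hypothetical) upstream API silently
    # truncates larger requests -- raising this looks safe but drops data.
    batch = 100
    return [ids[i:i + batch] for i in range(0, len(ids), batch)]
```

Without that one comment, the `100` looks like an arbitrary constant that the next person under deadline pressure will happily bump to 1000.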

Very often the bar of "minimally" is set way, way too low, and the trope that this is just "startup life" usually lays problems at the feet of engineering that don't belong there: it justifies sometimes-horrifying technical debt that really reflects process and management failures, and later scapegoats engineering for it.


I agree with you in theory, but from experience it is extremely hard to achieve when you're actually in that position.

Unlike what you say, it does take much longer to write great code when you're trying new things every day. Every time you try something new, by definition you're building something that you hadn't planned when you first started. Which means if you really wanted to be perfect you would end up refactoring every day. This is not a small effort in total.

You could say you should have prepared for that scenario from the beginning, but I don't think it's a good idea to believe that you can predict the future especially when you're building something that hasn't existed before.


In my experience, when you're in the "invention phase" is when it's most critical and beneficial to have and demand more discipline around your output artifacts. Precisely because keeping context and content around your fast iterative decision-making and implementation phases is critical to avoid wasting time on duplicated effort or needlessly repeating mistakes over and over again.

Literally every situation I've ever seen in 20 years, across all manner of marketplaces and every imaginable stage of development, has had the immediate faux gains made from skipping that extra 10%-20% of time now (to produce something at least coherent, commented, and composable) ultimately overwhelmed by the soon-to-be-immediate costs of living with the consequences of the dubious "shortcuts" taken.

I agree with you that it's extremely hard to maintain that kind of discipline when under the gun, but that's a people-management and expectations problem, not an engineering-effort problem. I'm 100% certain that in any given window of time, no matter how crunchy that crunchtime is, if my team spends 10-15 out of every 100 man-hours producing a higher-quality "minimally acceptable" artifact, they'll both materially accomplish more in that first 100 hours than if they'd spent all of it hacking together a barely working dumpster fire in the same window of time, and it will pay glaringly obvious dividends for their productivity in the next 100 hours after, and the next after that, and so on.

Lacking that discipline is neither a necessary condition nor a near or long-term benefit to a prototyping or product development cycle. It's purely a function of leadership, expectations management, and experience.

(I'm even giving the benefit of the doubt that what's being done "hasn't existed before", which is overwhelmingly not the case. More typically it's different people and different teams just painfully learning the same hard lessons in parallel, often because there's an assumption/delusion that what's being done is a legitimate pioneering effort instead of the probable reality that it's really a minor variation on something else that already has well-established successful models available to learn from and mimic.)


> I'm 100% certain that in any given window of time, no matter how crunchy that crunchtime is, if my team spends 10-15 out of every 100 man-hours producing a higher-quality "minimally acceptable" artifact, they'll both materially accomplish more in that first 100 hours than if they'd spent all of it hacking together a barely working dumpster fire in the same window of time, and it will pay glaringly obvious dividends for their productivity in the next 100 hours after, and the next after that, and so on

The problem is that at the very beginning, all (or most) of that code will be thrown away because you really don't know what works. How many cycles of wasted extra 20% effort would it take for you to change your mind?


You missed the point. In that same 100 hours, spending 15-20 hours applying best practices has universally resulted in actually getting more out of that 100-hour block. The 100 hours spent on the spaghetti mess has always produced less, provided fewer insights, and often exactly nothing that can be soundly iterated on.

This is because the friction and pain of technical debt doesn't wait until the end of the 100 hours to create problems. It's biting you every second of every minute of every hour and dragging down your productivity and velocity as you're just plowing forward soaking the noodles.


It really depends on what one's standard of "shitty code" is, but in my case my shitty code does perfectly fine until it reaches significant scale (as in millions of users). My thought process is: when I reach millions of users, I had better hire someone to help me with coding anyway, and it's not too late to take care of it then.

On the other hand, I have also made mistakes of over-engineering to make the code maintainable. What happens then is I can't really detach myself from the original idea no matter how much evidence there is to show me that it's not working, since I've invested so much in the technology. In that case the time I spent to make the code maintainable becomes a mental liability rather than an asset.

I don't doubt you are a great developer and have great experience, and I don't doubt that spending 15-20 hours out of 100 hours when you've reached product market fit is something you should do, but I can say from my own experience and experiences from other successful/failed entrepreneurs I've met that it's absolutely a bad idea to do so when you're just 1-3 guys bootstrapping. In the beginning the only thing that matters is getting people to actually use your product, so you should spend 100% of your resources on getting there, not on optimizing maintainability at all.

If you have actually been in that situation (a founder who's built his own product from scratch by bootstrapping) and still think it's best to spend 15-20 hours out of 100 hours working on maintainability, please let me know. I personally don't know any founder who thinks that way.


I'm also not sure why we're narrowing down that 15-20 hours focused entirely on maintainability.

It can also be 15-20 hours making sure you're following language/tool idioms and conventions so that you can iterate quickly instead of fighting against yourself and your tools being used incorrectly.

It can be about making sure you aren't making leaky abstractions so that when you suddenly realize you need to make big or small changes to address market needs you're able to do it quickly and efficiently rather than having to rewrite a big brittle pile of al dente noodles with every shift and change you're presented with.

It can be about making sure that the little idiosyncratic things you had to do have some explanation around them. That way, when your fellow boot-strapper has to come in and make a change or understand something you worked on--while you're home sick as a dog because your kid sneezed straight down your throat--they can understand what you did and why you did it, and can carry that change forward without your availability holding everything up. And because you didn't make a bunch of leaky abstractions on top of it all, they can make that change without breaking 99% of the rest of the prototype/product.

It can be about making sure a sufficient amount of what you're doing is done so with composability in mind, so that when you figure out that thing you did that you ended up throwing away, but now turns out would be supremely useful to meet current demands, both still exists someplace and can actually be used without having to spend 10x as long rewriting it.

Having great scaffolding, best-practices models, and good process makes everything go faster... not slower. Learning how to properly use your VCS is an enabler, not a drain, but it takes time away from just hacking away on something. Setting up a CI pipeline is an enabler, not a drain, but it takes time away from just hacking away on something. Scaffolding tools that can generate the first 20% of a project template for you are an enabler, not a drain, but they take time away from just hacking away on something. Learning your tracing and debugging tools is a massive enabler when trying to understand and prototype something, but it takes time away from just hacking away on something.

I should add the caveat that all those things are enablers... only if you do them well... otherwise they're a giant drag because they're hurting you, not helping you, and they become a part of your technical debt.

Do you spend literally no time on any of that because it takes time away from hacking? Do you think having none of those enablers in-place is helping you all work faster?


Writing bad code is often the first step in writing novel good code. It's the 'does this library do what it says it does' kind of exploration. Can I do this in HTML 5?

A first-principles approach works fine for CRUD apps, less so for new concepts. If you're going to toss 90% of your code as useless, then an extra 20% is simply wasted time. "I am going to try the same thing in 5 libraries to see which one I like" directly means tossing 80% of that effort away.


I completely disagree. It's much less useful for "CRUD apps" because "CRUD apps" are an obvious and known commodity. New and novel concepts are much less broadly intuitive, and are precisely the kinds of things that demand extra rigor even if you're going to end up throwing it all away. None of it is wasted effort when you're considering, and admitting, that a big chunk of the work is exploration.

Keeping documented feature branches of all those versions you tried before landing on one is an absolutely critical result of any exploration process, as well as some aggregation of information and context in a central place that points to and describes those various versions and your final outcome. This helps you keep from repeating mistakes, it provides context to you and others in the future for why things are the way they are, and it provides an archive of workable variations that you can quickly migrate to in the event your "final answer" turns out to be wrong.

I haven't written a "CRUD app" since 2008, and since 2010 nearly all my work and the work of my teams has been in greenfield development (in some cases literally inventing new never before seen things). I see it time and time again. We move much, much slower as individuals and as a group when we take shortcuts under the auspices of time-pressure and don't take the time to be thorough even in prototyping.

So: either we spend 100 hours hacking away to create a dumpster filled with spaghetti as fast as possible, or we spend 100 hours designing, building, and implementing with intention, and every. single. time.... the latter process actually gets us further toward prototype, product, and/or market-validation goals. The former process ensures that we have 0% of anything that is useful no matter what the results of the exercise are. The latter process ensures that we can legitimately iterate on what we've done based on the results of the exercise.

Iterating on a trash heap is extremely expensive, not a way to run faster.


If you're working from an existing code base then it's not greenfield development. That term literally means development that lacks constraints imposed by prior work. For example, sticking with an existing language, OS, compiler, etc. is a constraint.

So, clearly we are talking about something different.


I can't tell what you're actually replying to in my comment.

I'm talking specifically about using a different language, a different OS, a different deployment pipeline, new testing methods, new security constructs, new infrastructure orchestration methods, entirely new architectural paradigms, etc. etc. etc.

In fact, take that as an example: say it's a new programming language. You will absolutely waste more time dealing with the pain of doing things epically wrong, fighting with your tools, doing really crappy kinds of log-print debugging, etc. than you would taking 10-20% of your time up front learning some normal idioms and conventions for the language and learning to use richer tracing and debug tools to find problems and to instrument your prototypes.
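To make the "log-print debugging" contrast concrete, here is a minimal Python sketch using the standard library's `logging` module; the `parse_price` function is invented for illustration.

```python
# Hypothetical sketch: stdlib logging instead of scattered print() calls.
# Levels, formatting, and a global off-switch come for free.
import logging

logging.basicConfig(level=logging.DEBUG,
                    format="%(levelname)s %(name)s: %(message)s")
log = logging.getLogger("prototype")


def parse_price(raw: str) -> float:
    # One line of instrumentation you can silence later by raising the
    # log level -- unlike print(), which ships noise to production.
    log.debug("parsing %r", raw)
    return float(raw.strip().lstrip("$"))
```

Ten minutes learning the idiom up front beats ripping hundreds of `print()` calls out of a prototype the night before a demo.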

Just because you sat down and started coding and debugging without having any idea what you were doing, but at least you were "doing", emphatically does not mean you are getting to a goal of minimally viable test, or prototype, or product faster. It just means you started typing sooner.


I still think we are talking about different things. If you want to know how much battery charge is lost when a cellphone gets a GPS lock, that's a very focused requirement. Trying to also make your test code useful in the long term is a vast and unnecessary constraint. It's the stage where you want to understand the problem.

"Wait, I need a Mac to write iOS code, ouch."

Ideally you should be producing far more research than code. You're talking logging, and I am talking the "H" part of "Hello, world."

PS: If you can get away without writing code so much the better.


This isn't just in code these days. I'd call it the Ikea phenomenon: things that look functional and act functional in a constrained set of circumstances, and are deemed good enough. The cost is often proportionally low, too. (Ever tried to move a fully loaded Ikea dresser?)

But, as you've pointed out, the real work is in maintenance, durability, and quality, not veneers.


Arbitrarily relocatable furniture tends to be made of steel and weigh 300 lbs. I think the extended metaphor works for software as well.


That's a great metaphor...


There are definitely organizations where code that works but doesn’t appropriately account for those uncontrolled scenarios will earn you nothing more than a bad reputation. I’ve seen this happen, and I’ve seen those developers’ names slowly drift to the bottom of the list when it comes time for a raise or promotion.

I’m generally surprised at how little interest VCs have in this, though. I think a sizable percentage of their misses could’ve been avoided altogether if they had just audited their prototype, or whatever they’re claiming qualifies as their MVP.

People generally don't understand how many different scenarios software needs to account for, and how quickly those inputs can compound into a very big and complex task. Creating the appearance of functional software takes a very small fraction of the time and expertise it takes to actually make something production-ready, but I don’t think a lot of non-developers understand just how big that gap is.


Software VC's don't have to care about code being crap that only demonstrates well in a particular set of circumstances for exactly the same reason that mining VC's don't have to care that the same fraction of gold is found in every cubic meter of the company's mining site as in the handful of lucky drill samples.

Those VC's can pump and dump: promote the stock, then dump it before it dives.

The people that have to care are the company officers who want to grow a successful business. If those people are cynical, then the venture is probably doomed.


"version 1.0 was a rickety mess, all of those guys left, and anyone would be lucky to get the thing working at all, much less in a way that does not introduce any new bugs."

No user cares about any of this. Fix it. Don't try to rationalize shipping garbage.


> The “real work” is in figuring out how to move forward from something like that, and lots of stress comes from trying to do so without breaking anything.

I'm a lot more impressed with the evolution of the human species than with how doctors have been able to maintain those running systems.

If you want to design a human from the ground up using proper infrastructure that doesn't have ailments or diseases, go ahead. You'll be lucky if you end up with a fruit fly.


The human body is the result of 1,000,000,000,000,000,000,000+ organisms dying before or after reproducing. That's evolution. It has absolutely nothing to do with the "maintenance" that doctors perform on the human body, especially when doctors are loath to lose even a single organism.

I'm not even sure why I responded to your comment, since it was so random and unrelated to the rest of the thread.


If you think it's so unrelated, then maybe you didn't understand what he was trying to say?

Doctors are necessary and important and deserve to be paid well, since they deal with life. But that work can be done by almost anyone. I'm not saying any idiot can become a doctor; it's that most doctors do what any other doctor can do.

I'm not interested in those things either. I would rather work on something that can change how people live their lives--which one could arguably say is close to evolution.


It means you didn't understand my comment.

>The human body is the result of 1,000,000,000,000,000,000,000+ organisms dying before or after reproducing.

I guess I have to spell it out. "Bad, unmaintainable code" is the result of 1,000,000,000,000,000,000,000+ random requests being shoehorned in without proper design of the whole thing, but just on the basis of whether it still works or meets some last-minute requirement. I hope this makes my analogy clearer. (I realize the number is an exaggeration, I am referring to the process involved in "evolving" code, rather than "properly designing" it.)



