Hacker News

You can still write great code (meaning: clean, non-leaky abstractions, proper separation of concerns, functional testing, clear organization, and documentation of the unintuitive bits) without having market validation, and I'd suggest that doing so will actually make it a lot easier for you to shift, remove, rewrite, and extend the pieces of the project as you start to home in on market fit. That goes doubly as you need to start growing your development team and on-boarding people effectively, so everyone can be productive without just rapidly accruing compound interest on the technical debt that's already there.

It doesn't take that much longer to write great code when it's part of the first principles of your engineering process. You just do it as you go, but best practices need to be scaffolded and modelled for everyone beforehand to give them a solid starting point for what "minimally viable" actually looks like. A single well-placed sentence describing something necessary but non-idiomatic or non-obvious, which takes ten seconds to write, can save you hours or days of tracing and debugging at that crucial moment right before you're onboarding a pilot customer or pitching investors.
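As a hypothetical sketch of that kind of sentence (the retry helper and the rate-limiter detail below are invented for illustration, not from any real codebase):

```python
import time

def fetch_with_retry(fetch, retries=3):
    """Call fetch(), retrying transient IOErrors.

    NOTE (necessary but non-obvious): the hypothetical upstream
    rate-limiter counts *attempts*, not successes, so we must back
    off between tries. Removing the sleep below looks like a safe
    cleanup but gets the whole service temporarily banned.
    """
    for attempt in range(retries):
        try:
            return fetch()
        except IOError:
            if attempt == retries - 1:
                raise
            time.sleep(0.1 * 2 ** attempt)  # load-bearing: see docstring
```

The code itself is trivial; the point is the ten-second comment explaining why the sleep must not be "simplified" away.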

Very often the bar of "minimally" is set way, way too low, and the trope that this is just "startup life" usually lays problems at the feet of engineering that don't belong there: it justifies sometimes-horrifying technical debt that really reflects process and management failures, and later ends up scapegoating engineering for it.




I agree with you in theory, but from experience it is extremely hard to achieve when you're actually in that position.

Contrary to what you say, it does take much longer to write great code when you're trying new things every day. Every time you try something new, by definition you're building something you hadn't planned for when you first started. Which means that if you really wanted to be perfect, you would end up refactoring every day. This is not a small effort in total.

You could say you should have prepared for that scenario from the beginning, but I don't think it's a good idea to believe that you can predict the future, especially when you're building something that hasn't existed before.


In my experience, the "invention phase" is when it's most critical and beneficial to have and demand more discipline around your output artifacts, precisely because keeping context and content around your fast, iterative decision-making and implementation is critical to avoid wasting time on duplicated effort or needlessly repeating mistakes over and over again.

Literally every situation I've seen in 20 years, across all manner of marketplaces and every imaginable stage of development, has had the immediate faux-gains from skipping that extra 10%-20% of time (to produce something at least coherent, commented, and composable) ultimately overwhelmed by the near-immediate costs of living with the consequences of the dubious "shortcuts" taken.

I agree with you that it's extremely hard to maintain that kind of discipline when under the gun, but that's a people-management and expectations problem, not an engineering-effort problem. I'm 100% certain that in any given window of time, no matter how crunchy the crunchtime, if my team spends 10-15 out of every 100 man-hours producing a higher-quality "minimally acceptable" artifact, they'll materially accomplish more in that first 100 hours than if they'd spent all of it hacking together a barely-working dumpster fire, and that it will pay glaringly obvious dividends for their productivity in the next 100 hours, and the next after that, and so on.

Lacking that discipline is neither a necessary condition for, nor a near- or long-term benefit to, a prototyping or product-development cycle. It's purely a function of leadership, expectations management, and experience.

(I'm even giving the benefit of the doubt that what's being done "hasn't existed before," which is overwhelmingly not the case. More typically it's different people and different teams painfully learning the same hard lessons in parallel, often because of an assumption/delusion that what's being done is a legitimate pioneering effort, instead of the probable reality that it's a minor variation on something else that already has well-established, successful models available to learn from and mimic.)


> I'm 100% certain that in any given window of time, no matter how crunchy that crunchtime is, that if my team spends 10-15 out of every 100 man-hours producing a higher-quality "minimally acceptable" artifact that they'll both materially accomplish more in that first 100 hours than if they'd spent all of it hacking together a barely working dumpster-fire in the same window of time, and that it will pay glaringly obvious dividends for their productivity in the next 100 hours after, and the next after that, and so on

The problem is that at the very beginning, all (or most) of that code will be thrown away because you really don't know what works. How many cycles of a wasted extra 20% effort would it take for you to change your mind?


You missed the point. In that same 100 hours, spending 15-20 hours applying best practices has universally resulted in actually getting more out of that 100-hour block. The 100 hours spent on the spaghetti mess has always produced less, provided fewer insights, and often yielded exactly nothing that can be soundly iterated on.

This is because the friction and pain of technical debt doesn't wait until the end of the 100 hours to create problems. It's biting you every second of every minute of every hour and dragging down your productivity and velocity as you're just plowing forward soaking the noodles.


It really depends on what one's standard of "shitty code" is, but in my case my shitty code does perfectly fine until it reaches significant scale (as in millions of users). My thought process is: when I reach millions of users, I'd better hire someone to help me with the coding anyway, and it's not too late to take care of it then.

On the other hand, I have also made the mistake of over-engineering to make the code maintainable. What happens then is I can't really detach myself from the original idea, no matter how much evidence shows me that it's not working, because I've invested so much in the technology. In that case, the time I spent making the code maintainable becomes a mental liability rather than an asset.

I don't doubt you are a great developer with great experience, and I don't doubt that spending 15-20 hours out of 100 once you've reached product-market fit is something you should do. But I can say, from my own experience and from that of other successful and failed entrepreneurs I've met, that it's absolutely a bad idea to do so when you're just 1-3 people bootstrapping. In the beginning the only thing that matters is getting people to actually use your product, so you should spend 100% of your resources on getting there, not on optimizing maintainability at all.

If you have actually been in that situation (a founder who's built his own product from scratch by bootstrapping) and still think it's best to spend 15-20 hours out of 100 hours working on maintainability, please let me know. I personally don't know any founder who thinks that way.


I'm also not sure why we're narrowing that 15-20 hours down to being focused entirely on maintainability.

It can also be 15-20 hours making sure you're following language/tool idioms and conventions so that you can iterate quickly instead of fighting against yourself and your tools being used incorrectly.

It can be about making sure you aren't building leaky abstractions, so that when you suddenly realize you need to make big or small changes to address market needs, you're able to do it quickly and efficiently rather than having to rewrite a big, brittle pile of al dente noodles with every shift and change you're presented with.
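A hypothetical sketch of what a non-leaky seam buys you (the `UserStore` interface and `signup` function here are invented for this example): callers depend on a tiny interface, so swapping the backend later doesn't ripple through the rest of the code.

```python
class UserStore:
    """Non-leaky seam: callers never see how users are persisted."""
    def save(self, user_id, data):
        raise NotImplementedError
    def load(self, user_id):
        raise NotImplementedError

class InMemoryStore(UserStore):
    """Day-one prototype backend; swap in a SQL-backed UserStore
    later without touching any calling code."""
    def __init__(self):
        self._rows = {}
    def save(self, user_id, data):
        self._rows[user_id] = dict(data)
    def load(self, user_id):
        return self._rows.get(user_id)

def signup(store, user_id, email):
    # Business logic depends only on the seam, not on storage details.
    store.save(user_id, {"email": email})
    return store.load(user_id)
```

Nothing about `signup` has to change when the prototype store is replaced, which is exactly the "change quickly without a rewrite" property being argued for.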

It can be about making sure that the little idiosyncratic things you had to do have some explanation around them. That way, when your fellow bootstrapper has to come in and change or understand something you worked on, while you're home sick as a dog because your kid sneezed straight down your throat, they can understand what you did and why you did it, and can carry that change forward without your availability holding everything up. And because you didn't build a bunch of leaky abstractions on top of it all, they can make that change without breaking 99% of the rest of the prototype/product.

It can be about making sure a sufficient amount of what you're doing is done with composability in mind, so that the thing you built and threw away, which now turns out to be supremely useful for meeting current demands, both still exists someplace and can actually be reused without your having to spend 10x as long rewriting it.

Having great scaffolding, best-practices models, and good process makes everything go faster... not slower. Learning how to properly use your VCS is an enabler, not a drain, but it takes time away from just hacking away on something. Setting up a CI pipeline is an enabler, not a drain, but it takes time away from just hacking away on something. Building scaffolding tools that can generate the first 20% of a project template for you is an enabler, not a drain, but it takes time away from just hacking away on something. Learning your tracing and debugging tools is a massive enabler when trying to understand and prototype something, but it takes time away from just hacking away on something.

I should add the caveat that all those things are enablers... only if you do them well... otherwise they're a giant drag because they're hurting you, not helping you, and they become a part of your technical debt.

Do you spend literally no time on any of that because it takes time away from hacking? Do you think having none of those enablers in-place is helping you all work faster?


Writing bad code is often the first step in writing novel good code. It's the "does this library do what it says it does" kind of exploration. Can I do this in HTML5?

First principles work fine on CRUD apps, less so on new concepts. If you're going to toss 90% of your code as useless, then an extra 20% is simply wasted time. "I am going to try the same thing in 5 libraries to see which one I like" directly means tossing 80% of that effort away.


I completely disagree. It's much less useful for "CRUD apps" because "CRUD apps" are an obvious and known commodity. New and novel concepts are much less broadly intuitive, and are precisely the kinds of things that demand extra rigor even if you're going to end up throwing it all away. None of it is wasted effort when you're considering, and admitting, that a big chunk of the work is exploration.

Keeping documented feature branches of all those versions you tried before landing on one is an absolutely critical result of any exploration process, as well as some aggregation of information and context in a central place that points to and describes those various versions and your final outcome. This helps you keep from repeating mistakes, it provides context to you and others in the future for why things are the way they are, and it provides an archive of workable variations that you can quickly migrate to in the event your "final answer" turns out to be wrong.

I haven't written a "CRUD app" since 2008, and since 2010 nearly all my work and the work of my teams has been in greenfield development (in some cases literally inventing new never before seen things). I see it time and time again. We move much, much slower as individuals and as a group when we take shortcuts under the auspices of time-pressure and don't take the time to be thorough even in prototyping.

So either we spend 100 hours hacking away to create a dumpster filled with spaghetti as fast as possible, or we spend 100 hours designing, building, and implementing with intention, and every. single. time.... the latter process actually gets us further toward prototype, product, and/or market-validation goals. The former process ensures that we have 0% of anything useful no matter what the results of the exercise are. The latter ensures that we can legitimately iterate on what we've done based on those results.

Iterating on a trash heap is extremely expensive, not a way to run faster.


If you're working from an existing code base, then it's not greenfield development. Greenfield literally means development that lacks constraints imposed by prior work. For example, sticking with an existing language, OS, compiler, etc. is a constraint.

So, clearly we are talking about something different.


I can't tell what you're actually replying to in my comment.

I'm talking specifically about using a different language, a different OS, a different deployment pipeline, new testing methods, new security constructs, new infrastructure orchestration methods, entirely new architectural paradigms, etc. etc. etc.

In fact, take that as an example: say it's a new programming language. You will absolutely waste more time dealing with the pain of doing things epically wrong, fighting with your tools, and doing really crappy print-statement debugging than you would by taking 10-20% of your time up front to learn the normal idioms and conventions of the language and to learn richer tracing and debugging tools for finding problems and instrumenting your prototypes.

Just because you sat down and started coding and debugging without having any idea what you were doing (but hey, at least you were "doing") emphatically does not mean you are getting to a minimally viable test, prototype, or product faster. It just means you started typing sooner.


I still think we are talking about different things. If you want to know how much battery charge is lost when a cellphone gets a GPS lock, that's a very focused requirement. Trying to also make your test code useful in the long term is a vast and unnecessary constraint. It's the stage where you want to understand the problem.

"Wait, I need a Mac to write iOS code, ouch."

Ideally you should be producing far more research than code. You're talking about logging, and I am talking about the "H" part of "Hello, world."

PS: If you can get away without writing code so much the better.



