The economics of software correctness (drmaciver.com)
113 points by exupero on Oct 5, 2015 | 53 comments



Everything in development is a cost-benefit trade-off. Despite its speed limitations, Ruby remains an excellent language for development because of its focus on developer efficiency. You can produce feature-complete systems in a short period of time at the expense of slightly higher hosting costs, which are still far cheaper than labor costs.

Monolith development is still much faster than microservice development even though microservice development is the better long term option by far.

It's all tradeoffs. I'm an architecture nut, and for years I really wanted nothing more than to design the ultimate perfectly scalable and secure system, but unless you're at a virtually competition-free enterprise, like a telecom or an insurance company, the time or budget to do so probably doesn't exist.

I've gone from "get it done" to "do it perfect" back to "get it done and avoid obvious problem starters". The reality is just that "get it done" wins the business case almost every single time.


I agree, within reason. I believe there are certain things that must happen even with a 'get it done' mentality, or the subsequent gitrdone is not possible due to productivity issues.

Things like error detection and reporting in the system absolutely have to be useful. I've seen systems where that isn't the case, and it destroys productivity.

So I agree in general, but I think there are a few areas that you need to get right or you can't gitrdone effectively for very long.


So this is all true; however, it is surprising how often in development you find people spending more time and effort getting the 'pragmatic' hack to work acceptably than it would have taken to do it right.

People often underestimate the difficulties of getting a hack to work and overestimate how long doing something correctly will take. Even with short time horizons.


> People often underestimate the difficulties of getting a hack to work and overestimate how long doing something correctly will take.

Yes. And sometimes one of those people is your boss.


Most of the time, I would say. Perhaps the biggest tragedy of software development today is how few managers actually do, or have done, enough software development to be qualified to manage those who do.


Part of the job as a software engineer is to inform your manager of the tradeoffs and the correct solution. Especially if you're well paid.

Sure, there are bosses who won't listen. But of the ones I've worked for, nearly all would listen to a reasonable explanation of how best to achieve the goals and would approve recommendations from below.

If you've got a track record of being right, that helps a lot :-)


I agree. And I'm glad the managers I've worked with were there, so that I personally didn't have to navigate the company hierarchy. But I strongly feel that managers who were good software developers in the past have a much better idea of how to be managers. In those who weren't, I've noticed a misplaced desire to prove themselves, which has hurt me personally.


Management has three goals: 1) Make themselves look like they contribute value: micro-manage projects, track time spent, etc. Even if nothing is achieved, they can claim to be doing something and organizing meetings. 2) Swap out programmers like cogs in a machine. 3) There is no incentive to have good quality code, because chances are everybody will be off the project in a year or so, so you take credit for the success and place the failure on the poor SOBs who have to maintain the system.


I think you might be misunderstanding the point of managers...


True, but he does paint an interesting, if not accurate, picture of a significant proportion of the managers out there.


I personally hate working with people who put things in terms of 'right' and 'wrong', or 'correct' and 'incorrect'.

I honestly believe the key to software development is good decision making. If you're thinking in terms of right and wrong, correct and incorrect, then you're not making decisions, you're simply doing the right thing over and over again. Only it may not be the OPTIMAL thing, or the smart thing, in any specific instance.

For example, who here really believes proponents of unit tests have never painted themselves into a corner? How did that happen? By making poor decisions.

Stop thinking in terms of right and wrong. Seriously. Sometimes unit tests are a good decision, sometimes they're not. Sometimes that hack is a good decision, and sometimes it isn't.

You make a decision on a case by case basis, not because it's right.


> I honestly believe the key to software development is good decision making.

I agree, and the purpose of my comment was to point out that I have often observed people making decisions on a case-by-case basis, and repeatedly making bad decisions by underestimating how much effort the option thought to be 'quicker', 'easier', or 'minimal change' takes compared to the option thought to be more 'radical' or 'unnecessary effort'.

You seem to have some specific bone to pick around testing. I'm not really interested in that. When I say 'right' or 'correct' in this context, it's generally about not being blinkered by the current state of your codebase. People get so used to the fact that their current codebase does things in a particular way that they often look at problems as situations where they have no choice but to force the problem into the frame of their code.

Solutions closer to what you would want to write if you didn't have to worry quite so much about the peculiarities of your current codebase generally scare people, but they are often the better decision not just from some academic point of view, but literally from an execution, getting-things-done-quickly point of view. 'Correct', as I use it in this context, is mainly shorthand for that concept.

The whole point here is not to say that there's some secret that lets you avoid making decisions; it's to say that many people in the real world are making a particular class of error in decision making. Signs that a particular decision might fall in that category include:

1. the main trade off being considered between the options is developer time

2. none of the developers actually think the believed-to-be-quicker-to-implement option is a good option apart from its supposed quickness to implement.

3. other options exist where a significant fraction of developers agree that apart from their supposed slowness to implement, they are good options.


I like the idea of finding a good decision in the context of a current situation. I don't like the idea that terms like 'correct' and 'incorrect' should not be used. For some problems, there exist exactly correct solutions (like adding integers). The lack of a well-founded or at least well-known correct solution is what forces you to look for an optimal solution made of trade-offs.


If you read back over the thread, it will quickly become apparent that your example is not relevant within the context of this conversation.


"The problem is not that we don’t know how to write correct software. The problem is that correct software is too expensive."

That's something I've been saying for a while. It's all about the economics of the situation, rather than the impossibility of it.

Naturally, there are also improvements here and there that decrease bugs without increasing efforts, and those are worth looking for.
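
One example of that kind of cheap improvement: property-based testing tools like Hypothesis (which, as it happens, the article's author maintains) can shake out edge-case bugs with very little extra effort. A minimal sketch, where the run-length encode/decode pair is a hypothetical function under test, not anything from the article:

    # Minimal property-based testing sketch using the Hypothesis library.
    # run_length_encode/run_length_decode are hypothetical functions under test.
    from hypothesis import given, strategies as st

    def run_length_encode(xs):
        """Toy encoder: consecutive runs become (value, count) pairs."""
        out = []
        for x in xs:
            if out and out[-1][0] == x:
                out[-1] = (x, out[-1][1] + 1)
            else:
                out.append((x, 1))
        return out

    def run_length_decode(pairs):
        return [x for x, n in pairs for _ in range(n)]

    @given(st.lists(st.integers()))
    def test_roundtrip(xs):
        # The property: decoding an encoding always returns the original list.
        assert run_length_decode(run_length_encode(xs)) == xs

Run it with pytest and Hypothesis generates hundreds of input lists, including the empty and single-element cases people tend to forget.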


But in a practical sense, the problem is that we don't know how to write correct software.


Actually, we do have some pretty good ideas about how to do it. That was covered in the article.


> Better monitoring is another. Code review processes. Static analysis. Improved communication. There are many more.

Did you mean this? Or NASA's process? Because I wouldn't characterize any of the above as "good ideas about how to [build correct software]".

Reviewing some of your other comments, I think we fundamentally disagree about how close we are to optimal software development practices. Discussing this in economic terms is like discussing the economic reasons that Columbus didn't go to the moon.


I don't think that's a fair analogy. At this point, I believe we have pretty good ideas that make it possible to write correct software. Most impressively, there has been progress on creating a completely verified C compiler named CompCert: http://compcert.inria.fr/. This really means "completely verified": they formalized the semantics both of C and of assembly, and used a theorem prover to show that the semantics of the generated assembly are the same as the semantics of the source code.

Of course, building (and verifying) CompCert took something like 10 years, so it's certainly not a feasible way to write software yet. Maybe a better analogy would be around the invention of the first airplane---we have a long way to go before everyone's flying in a jumbo jet :)
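
To give a flavour of what "prove the semantics agree" means in miniature (a toy, hypothetical sketch in Lean; CompCert itself is a Coq development many orders of magnitude larger): define a "source" function and an "optimized" one, and machine-check that they always compute the same result.

    -- Toy flavour of verified "compilation" (hypothetical example, not CompCert):
    -- a source definition, an "optimized" definition, and a machine-checked proof
    -- that the two always agree.
    def double (n : Nat) : Nat := n + n        -- "source" semantics
    def doubleOpt (n : Nat) : Nat := 2 * n     -- "optimized" code

    theorem doubleOpt_correct (n : Nat) : doubleOpt n = double n := by
      unfold double doubleOpt
      omega

The expensive part of CompCert is doing this kind of proof for every construct of C and every pass of the compiler, not just one toy function.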


> I don't think that's a fair analogy. At this point, I believe we have pretty good ideas that make it possible to write correct software. Most impressively, there has been progress on creating a completely verified C compiler named CompCert: http://compcert.inria.fr/. This really means "completely verified": they formalized the semantics both of C and of assembly, and used a theorem prover to show that the semantics of the generated assembly are the same as the semantics of the source code.

Note that CompCert has had bugs found in it too. CompCert has a verified core, but it's not a fully verified piece of software.

It's still a lot closer to bug free than almost any other software out there.


Yes, I was referring to the verified parts. To the best of my knowledge, these are bug free -- I believe the bugs were found in an unverified portion of the front-end (which I think was subsequently verified). I think the technique works in principle; it's just very, very expensive.


Analogies are like shopping carts; you can only push them so far before they begin to make an annoying high pitched noise. Your analogy probably works better in the ways that you've explained, though.


We know how to do a lot of things well, but when all our tools are made by hobbyist hackers building out of a love of fun rather than a love of sustainability (or worse, as short-term hacks out of a love of profit), we end up with the mess we have now.


A big point of the article is that what we have now is 'good enough' for most people, since they're not paying for more. If they were paying a bunch more, they'd get better stuff. But they don't want to. So the hand-wringing and finger pointing is useless: it's best to figure out how to get the most from what we do have.

If you really want to work on super high quality stuff, there are fields where that is valuable.


Well, our base tools are crap.

C is awful. Linux is awful. Docker "containers" are awful (i.e. fancy branding around cgroups; no actual 'container' at all, unlike Solaris zones, which did everything properly 10 years ago, plus Crossbow networking 5 years ago).

We keep using (and inventing) quick hacks promoted by braggadocious personalities instead of stable infrastructure or proper tooling.


The article is attempting to explain the economics of why that is so. It's a pretty important thing to understand.

Other pieces of the economic puzzle are things like network effects and lock in. I highly recommend this book: http://amzn.to/1FTa7ib


Thanks for the pointers. Looks fancy. I'll check it out.


The problem is better stated as: we're not that interested in it. If we were interested in it, we'd know how.

I've released a couple of things that were tested to exhaustion, so far as anyone knows. One advantage of the event-driven approach is that you can do that.
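
To illustrate the event-driven point (a hypothetical sketch, not the parent's actual code): when a component's whole interface is a small set of events, you can enumerate every event sequence up to a bounded length and check an invariant after every step, which is about as close to "tested to exhaustion" as testing gets.

    # Hypothetical sketch: exhaustively testing a tiny event-driven component.
    from itertools import product

    EVENTS = ("coin", "push")  # toy turnstile events

    def step(state, event):
        """Toy turnstile handler: 'locked' or 'unlocked'; other events are no-ops."""
        if state == "locked" and event == "coin":
            return "unlocked"
        if state == "unlocked" and event == "push":
            return "locked"
        return state

    def check_all_sequences(max_len=10):
        for length in range(max_len + 1):
            for seq in product(EVENTS, repeat=length):
                state, coins, entries = "locked", 0, 0
                for ev in seq:
                    if ev == "coin":
                        coins += 1
                    if ev == "push" and state == "unlocked":
                        entries += 1
                    state = step(state, ev)
                    # Invariant: nobody gets through more often than coins were paid.
                    assert entries <= coins

    check_all_sequences()  # ~2,000 sequences, every one checked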


It's mostly based on the skill and experience of the developer. Of course there is probably some asymptote in the quality and delivery time that is impossible to surpass, like the 3 minute mile. Human brains are not without limits and we didn't evolve to write software.


It's not at all about developer skill. It's about methods and process.

Ok, you don't want to have a bunch of total bozo developers, or the time to get anything useful done will stretch out to infinity, but still, it's not about just 'being good'.


In my own experience, it is about skills. But if you don't have the skills, you need more methods and process. The less skilled your team is, the more process you need.


I think that's going down the wrong path: you can get "pretty good" software if you get the best people and just let them work on it. But it'll still have bugs. You need more process and stuff like verifiable software and all that kind of overhead to really start getting close to bug free.


Well, that's part of it. You need to have the experience to know which methods and processes give you the most value and quality for the time invested. There are many skill sets involved, and knowing good processes is just one.


Joel Spolsky published a blog post in 2000 named "Things You Should Never Do" (http://www.joelonsoftware.com/articles/fog0000000069.html)

It's about why Microsoft, with IE6, won the browser war against Netscape, who made the single worst strategic mistake a software company can make: rewriting Netscape 6.0 from scratch and throwing out all the code from Netscape 4.

Netscape, working with extremely buggy and convoluted code in the older version and trying to save the development community from the nightmare that is IE6, ended up being late to market with a superior product. Joel makes a very good observation that people often want to throw out old code because they think it's a mess, but the counterintuitive truth is that the old code contains vast amounts of knowledge.

A company can be first, best, or cheat; in this case, while Netscape was trying to be best, Microsoft was first.

This is the reason iterative code development is best. Speed of iteration beats quality of iteration 9 times out of 10: Boyd's Law of Iteration (http://blog.codinghorror.com/boyds-law-of-iteration/). The best software is the software that is released most often, not the software that is released the most correct.

If I were to release a browser in, say, 2008 to compete with the dominance of IE, what is the single most important feature I could put into it? A feature for forcing iteration: the browser automatically updates on the client whenever a development cycle finishes, rather than the update shipping preinstalled, and impossible to remove, on newly bought computers.


Funny thing is, Microsoft did almost the same thing with Windows NT. However, unlike Netscape, they had the resources to keep iterating their old junk while they worked on the totally new version.


Spolsky mentions that Microsoft was going to rewrite Word for Windows and Mac on the same code base, but someone decided that, because WordPerfect was a better product at the time, they should skip that idea.

http://blogs.msdn.com/b/rick_schaut/archive/2004/02/26/80193...


I think one of the other problems is that people value the economics of software correctness using their gut, rather than empirical analysis.

https://en.wikipedia.org/wiki/Hyperbolic_discounting

Everyone knows that bugs are problematic eventually; it just seems that they can't put that on a level playing field with the up-front costs, be they in terms of time, features, or effectiveness.

As an example, if you asked Home Depot whether they were saving money with their self-checkout machines, I'm sure the answer would be different before their data breach vs. after. Even after being warned, they simply couldn't properly discount the possibility of massive future damage when offered a short-term benefit.


Hyperbolic discounting is rational when the availability of resources increases exponentially, as it often does for products that catch on.

A company that comes to market with a product that is useful but buggy will attract the attention of venture capitalists & other investors. It will receive user feedback from its existing user base. It will find it easier to hire top talent. It will be able to use collected data to make better products. All of these factors are in proportion to the company's size, which tends to make growth rates exponential.

It's pretty standard practice in the tech industry to bring a buggy, barely-working product to market; use interest in that to raise money; use money to hire engineers; and use the engineers to fix the bugs. You could even look at this as a net benefit to society, as long as existing customers would rather use the product in its buggy, incomplete state than go without it.


I see it continue long past the point where 'we needed to ship something to be able to eat next month', to the point where people handicap themselves for the next 5 years in a mature business to release a feature a month faster even though they've got 3 years of runway and ample revenue.

Also, hyperbolic discounting is explicitly not exponential in the way you might account for the future availability of resources (even at a very high exponential rate); it values things on a different curve in the future than in the present. Would you rather have five dollars today or ten dollars next month, versus would you rather have five dollars a year from now or ten dollars a year and a month from now? People will say five dollars today, but also ten dollars a year and a month from now, even though under rational analysis the two choices should come out exactly the same.
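
A tiny numeric sketch of that reversal (the discount rates are illustrative values picked for the example, not anything from the thread): exponential discounting multiplies by a fixed factor per month of delay, so the preference never flips; hyperbolic discounting divides by (1 + k * delay), so it does.

    # Hypothetical numbers illustrating the preference reversal described above.

    def exponential_value(amount, months, monthly_factor=0.6):
        # Consistent discounting: a fixed factor per month of delay.
        return amount * monthly_factor ** months

    def hyperbolic_value(amount, months, k=1.5):
        # Hyperbolic discounting: value falls off as 1 / (1 + k * delay).
        return amount / (1 + k * months)

    for name, value in (("exponential", exponential_value), ("hyperbolic", hyperbolic_value)):
        near = value(5, 0) > value(10, 1)    # $5 today vs $10 next month
        far = value(5, 12) > value(10, 13)   # the same pair, pushed out a year
        print(name, "prefers $5 in the near case:", near, "| in the far case:", far)

    # exponential prefers $5 in the near case: False | in the far case: False
    # hyperbolic prefers $5 in the near case: True | in the far case: False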


> You have probably never written a significant piece of correct software.

So true.

When we were hiring for a senior engineer position a few months ago, one applicant said he wrote "bug free code."

I laughed and sent it around on Slack. We all chuckled at it. The applicant did not get a call.


> If you want better software, make or find tools that reduce the effort of finding bugs. ... Better monitoring is another. Code review processes. Static analysis. Improved communication. There are many more.

I'm going to come out and suggest that even THESE are more expensive than most businesses need. I believe code review processes are insanely expensive for what they usually return (wrt bug costs).


Main takeaway: "Buggy software is not a moral failing."


What about reducing bugs through code reuse, in the form of libraries and frameworks? Because they have more users, more bugs have been found, reported, and fixed; e.g. using standard libraries instead of writing your own.


Code reuse is actually dangerous, though. It's possible to use a library, form an expectation of how it will work, forget about that assumption, then upgrade/fix something in the library and break the assumption. It's a pretty common problem.

Often the time saved using a library is worth the small risk, but if we're talking about "how to write correct software" you'll want to be wary of code reuse.
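
One cheap way to hedge against that failure mode (a hypothetical sketch; somelib and parse_date are made-up names, not a real package) is to write the assumption down as a tiny contract test, so an upgrade that silently changes the behaviour fails in CI instead of in production:

    # Hypothetical sketch: record an assumption about a third-party library as a test.
    # somelib and parse_date are placeholder names, not a real package.
    import somelib

    def test_parse_date_is_day_first():
        # The rest of the codebase relies on day-first parsing; if a future
        # version of somelib flips the default, this test fails and surfaces
        # the forgotten assumption before anything breaks in production.
        assert somelib.parse_date("01/02/2015").month == 2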


That works only when the quality of the libraries and frameworks is high. When they are crap, you just spread the pain around and you've made things worse.


Most libraries found in the wild are kinda crap, because as an industry we have a massive problem where most of our stuff is built on free labour that people have performed in their spare time to scratch their own itch.

Using libraries is still generally a good idea, but its effect on software correctness is a bit of a coin toss.


Yes, libraries and frameworks vary in quality. My example was for "standard libraries", meaning those that come with the language, though I didn't emphasise this.

Another problem with libraries in the wild is that as they add features, they add bugs. If they don't add features, they lose popularity and don't get used. If they are commercial, they are less popular and get fewer bug reports (and, if closed source, fewer eyeballs checking). People don't want to pay for correctness.

I think you're right: correctness is pretty far down the priority list. Good enough is good enough.

BTW: static types have correctness benefits, but dynamically typed languages are very popular, and when static types are used, it's for performance and documentation. Languages using static types for correctness (e.g. the ML family and Haskell) are not mainstream.


Reading this made me wonder...do SpaceX and other private space flight programs have similar standards of rigor?


Probably not, which is why I expect SpaceX to put a person on Mars long before NASA does.


It's possible to write software with very few bugs if you're very experienced in the language/framework and software development in general, you're writing all the code yourself, and you perfectly understand the requirements. Otherwise, good luck.


Sometimes, writing bug-free code isn't even sufficient; sometimes, one needs proof that it is bug-free.

Then, your favorite language/framework will not help you much. You'll need strong verification tools, and a language and frameworks that are compatible with them.


While I won't deny this point, I think the industry at large would benefit quite a lot if we were able to agree on how to write probably bug-free code, even if empirically.

The reality is that within a group of 10, you may well get 12 or more prescriptions for how to write quality software. About half of those prescriptions will be useless and at least one will be actively harmful, but the team will have a hard time reaching a consensus and telling which is which.

Actually, the people most able to tell which prescription is the obviously harmful one learn pretty fast to keep their mouths shut, because they are more likely to alienate their peers than to convince them. So everybody knows that thousand-line-long functions are bad, but nobody can say exactly why beyond the bland, non-threatening "style" argument.


It's even better if you're working with others who all are "very experienced in the language/framework and software development in general, and perfectly understand the requirements" - at least if you all communicate well with each other. Code reviews and people to bounce ideas off of reduce bugs, they don't add them.





