Famous Laws of Software Development (2017) (timsommer.be)
450 points by pplonski86 on Feb 25, 2019 | 195 comments



Regarding Conway's Law:

We have found that by changing our software/system architecture we have also inadvertently changed our organisation structure.

- Inverse Conway Law or just Roy's Law ;-)

Before, we had four cross-functional teams working on a single application; everyone felt responsible for it, worked overtime to fix bugs, etc., and we had good communication between the teams.

But after we switched to microservices, the teams became responsible for just a part of the system: their microservice(s). Whenever we had an outage, one team was left to fix it; the others just went home. They stopped talking to each other because they didn't share any code, no issues... they stopped having lunch together, some things got way worse in the organisation, all sparked by a 'simple' architectural change, moving to microservices.


That reminds me of a place I used to work at, where initially we had DBAs embedded in the teams. Then they switched that, grouped all the DBAs together, and all hell broke loose. They were always having meetings, throwing out emails dictating this and that, and had very little direct communication with the teams they were supposed to be supporting.

I ended up leaving during the peak of all of this, but in an exit-interview, a director asked me about the problems this was causing.


> They stopped talking to each other because they didn't share any code, no issues... they stopped having lunch together

Was that accompanied by any growth in company size? I've found that this happens when a group grows past about 15 people even if the structure doesn't change.


This outcome could be considered a feature of microservices: by abstracting the functionality into more tightly-contained units, failures are more isolated.

Sounds like the organization needs to do other things to keep people from getting siloed, though that gets increasingly difficult at scale. Well-defined SLAs (along with monitoring and reporting of those SLAs) are also necessary so that microservice failures can be understood in the right context.


This YC talk from Amazon's CTO on how they grew to a microservice model and team structure was really interesting: https://www.youtube.com/watch?v=adtuntQ8rh4


I have seen new systems/software implemented just to isolate/remove parts of an organization. Worse, I have seen it done when the existing system/software was just fine.


> They stopped talking to each other because they didn't share any code, no issues... they stopped having lunch together, some things got way worse in the organisation, all sparked by a 'simple' architectural change, moving to microservices.

Honestly, this sounds like an improvement.


How is people not talking to each other an improvement?


Oh, it is definitely a problem from a social/cultural standpoint. But from the point of view of software architecture (and, therefore, development organisation), too much (or even any) communication between teams working on discrete, separate units can become detrimental.

It is perfectly fine for the people to communicate, and even helping each other to improve tech skills should be encouraged; however, decisions about their respective products should be contained within each team, with clearly defined interfaces and usage documentation.


> decisions about their respective products should be contained within each team, with clearly defined interfaces and usage documentation.

In order to make those decisions and define the interfaces, you need to know a lot about how your software is going to be used. That will be much easier if you have good communication with the other teams and understand their goals and motivations.


I disagree. In my experience, direct coordination on interfaces tends to create unnecessary special cases (hey, can you add this field to your API, just for us?) which add complexity and make maintenance more difficult down the line.

The main advantage of distributed systems, and particularly microservices, is the ability to have each system completely independent: individual components can be written in different languages, run on different platforms and use completely independent internal components. Basically, it is just like using an external library, component or service: the authors provide documentation and interfaces, and you should be able to expect it to behave as advertised.


> In my experience, direct coordination on interfaces tends to create unnecessary special cases (hey, can you add this field to your API, just for us?) which add complexity and make maintenance more difficult down the line.

If you just implement all requests directly, you're for sure going to end up with a horrible interface. You should approach API design the same way that UX/PM approaches UI and feature design: take the time to understand _why_ your partner teams/engineers are requesting certain changes and figure out the right interface to address their problems.


Oh absolutely, but direct communication between teams is not the right method for that. Which is why every product, no matter how "micro" a service, needs to have a dedicated product owner/manager who is responsible for defining functional requirements.

Edit: I just noticed that "PM" in the parent comment. Basically, product managers are not just for UI and customer-facing products.


The idea that "functional requirements" can be decided independently from "technical architecture", as opposed to in interplay with each other, is exactly the opposite of what I've learned from good experiences with "agility", although some people seem to somehow take it the opposite way.

But yes, you can never just do what "the users" ask for. The best way to understand what they need is to be in conversation with them. Silo'ing everyone up isn't going to make the overall product -- which has to be composed of all these sub-components working well with each other -- any better.


> The idea that "functional requirements" can be decided independently from "technical architecture"

Oh it absolutely can; it's just that it usually is not a good idea. But I'm not talking about the process of reaching those decisions, I'm talking about the responsibility to reach them. Functional and technical decisions are separate, but in most cases should be defined in conjunction.

> Silo'ing everyone up isn't going to make the overall product

This is true for certain types of product, and less so for others; you need to clearly understand what type of product you're building, and be ready to adapt as it changes (or as you gain a better understanding of it). In a nutshell, the more compartmentalised a product is, the more beneficial the isolation between the teams becomes. Which brings us full circle back to Conway's Law.


> But I'm not talking about the process of reaching those decisions, I'm talking about the responsibility to reach them

Of course you're talking about the process. Your claims that "direct communication between teams is not the right method for that," and "communication between teams working on discrete, separate units can become detrimental," for instance, are about process.

I don't think this conversation is going anywhere, but from my experience, lack of communication between teams working on discrete, separate units (that are expected to be composed to provide business value) can become detrimental. And that's about process.


Well, from a systems standpoint, it means a given "problem" is isolated to a single service, and is therefore not impacting the other services or interrupting the work of the other teams.

But culturally, it would be nice if people helped each other out from time to time...


That's only true if any given problem is isolated to a single service, because while you are making intra-service problems much cheaper and faster to fix, you are also making inter-service problems almost impossible to fix.


Good point. I've seen situations where nobody would take ownership of a bug and fix it - you just had teams pointing fingers at each other...


This is precisely a symptom of unclear ownership responsibilities.


No, it can be an issue of it being unclear in which component the bug actually is.


Which is a clear sign that your components are not sufficiently separate.


Um, no. The real world is not that simple. It would be nice if it were.

Or perhaps your statement is correct, but in the real world components are never sufficiently separate. So, while your statement may be correct by definition, it is not useful.


I think you really nailed it with this one.

I can imagine there being a normal distribution of 'separateness' of software: the rare top tail-end of the distribution gets it perfectly right, most are in the middle somewhere between 'service oriented architecture' and 'ball of mud', and some are just a plain ball of mud.


You are misreading my words.


Perhaps so. Would you care to explain? Your statement here doesn't give me much to go on.


Well I meant it quite literally: if it is not clear which component has the bug, the components are not separate enough. You also say:

> in the real world components are never sufficiently separate

But the separation of components is not an issue of "real world", it is a function of design and implementation. It is absolutely up to the developers how independently the components will be implemented; if there is no way to test them in isolation, then they are not really separate components.

Take this website, and your browser, as an example. They are obviously connected, as you're using the latter to access the former, but they are completely independent: you can access other pages with the browser, and you can use other methods (other browsers, or curl, or Postman etc) to access this page. Each can be tested separately, and even when they are used in conjunction they don't directly depend on each other, but rather on a set of standards (HTTP, HTML, CSS etc) understood by both.


Yes, but then you find web pages that attempt to determine which kind of browser requested the page, and change what is sent back in order to work around the broken-ness of specific browsers. Yes, it's supposed to be a nice clean interface specified by standards. But in the real world, as they say, all abstractions leak. "All" may be an overstatement, but the problem is real. We never separate things cleanly enough.


This statement is true, depending on how you define "enough".


Sure. Look, despite the impression I may have given, I'm not arguing for separating things badly. The cleaner the separation, the better. It really makes a difference.


Presumably if you don't like other people then it's an improvement. At a guess I'd say that covers about a quarter of our industry.


Probably we can agree that a company will be more productive if the engineers are learning from each other and generating ideas together? So if you've got a team of engineers who don't like working with people, it's probably in the company's best interest to set up a structure that explicitly encourages more communication.


Dunbar's number. As humans, we only have room for so many relationships.


This depends very much on the team size. Having a team of 20 people communicate in the way you describe below is insane overkill. Having a team of 100 do so may save everyone's sanity.


Moving to microservices is anything but a simple change. Your experience is one example of why microservices are not automatically a good idea. Normally, the advice is that they might be a good fit if you have isolated teams to begin with, and for different reasons.


It's almost like Bezos intentionally issued the microservices mandate at Amazon to discourage Amazon's engineers from unionizing.


> “Before, we had four cross-functional teams working on a single application; everyone felt responsible for it, worked overtime to fix bugs, etc., and we had good communication between the teams.”

This actually sounds very dysfunctional, but dressed up with the type of positive PR spin that product / executive management wants, i.e. basically anyone who believes “cross-functional” is anything more than a buzzword.

Would love to know what the engineers thought about working in that environment (which sounds like a monolithic, too-many-cooks, zero specialization situation likely negatively affecting career growth & routine skill building).


Aside from the working overtime part, what sounds dysfunctional?


Types of bugs or failures are not separated into specialized areas; rather, it’s one “cross-functional” unit. It’s like building a monolith class in software instead of following basic principles like the Single Responsibility Principle and organizing workflows according to independent specializations.

This part is often much worse than the overtime part, because it means you’re expected to sublimate your personal career goals in favor of whatever arbitrary thing needs done for the sake of the cross-functional unit.

When I hear someone describe communication between cross-functional team members as “good” or “effective,” then I know it’s a big lie, and most probably it’s a disaster of overcommunication where product managers or non-tech leadership have a stranglehold on decision making when really engineering should be autonomous in these cases exactly according to independent specialization.


Could less overtime be considered a positive outcome?


Haha, this is so similar to what happened where I work in SF that I feel you’re a coworker of mine.


Hyrum's law is highly relevant to anyone who makes software libraries.

"With a sufficient number of users of an API, it does not matter what you promise in the contract: all observable behaviors of your system will be depended on by somebody."

I.e. any internal implementation details that leak as behavior of the API become part of the API. Cf. Microsoft's famous "bug for bug" API compatibility through versions of Windows.
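
As a small illustrative sketch (hypothetical library code, not from the article or the law's site): the function below only promises "the names of all users", but callers can observe, and will eventually rely on, the ordering that falls out of the implementation.

    #include <map>
    #include <string>
    #include <vector>

    // Contract: "returns the names of all users". Order is not promised, but
    // because std::map iterates in key order, the output happens to come back
    // ordered by user id -- an observable behavior, so per Hyrum's law it is API.
    std::vector<std::string> user_names(const std::map<int, std::string>& users) {
        std::vector<std::string> names;
        for (const auto& entry : users) {
            names.push_back(entry.second);
        }
        return names;
    }

    // Somebody will eventually rely on that ordering (e.g. for diffing or
    // pagination); switching the implementation to std::unordered_map keeps
    // the documented contract but breaks them anyway.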

http://www.hyrumslaw.com


They might become part of the API in a superficial sense, but if you broadcast clearly that undocumented behaviors are subject to change, then users can decide if they want to accept that risk and won't have a valid complaint if they want the not-covered-by-the-contractual-API behavior preserved or are surprised by a change.


> if you broadcast clearly that undocumented behaviors are subject to change, then users can decide if they want to accept that risk

That sounds nice in theory, but doesn't really work in practice. If you're building infra and a core piece of your company's product relies on these undocumented behaviors, you can't just change the behavior and shrug your shoulders when the whole product breaks. Similarly, if you're providing an external API to users/customers, you can't just break their stuff without worrying about it.


I'd add, if the API is meant to implement a protocol but doesn't implement it quite correctly, you may object to the misimplementation, but if your code has to work with the implementation, you have to adapt to their bug. It's not even a matter of undocumented behavior.

Experienced recently as a consumer of an API when letsencrypt made a breaking change to implement the protocol correctly. Broke my code which relied on their original incorrect implementation.


Isn't this the exact reason people came up with semantic versioning?


While you might not view their complaint as valid, what if the change you made cuts off service to the customer? And what if the service you provide is in the critical path for the customer, such as hosting, payments, or even power?

I can testify to this personally, having worked at a payments processor and accidentally broken integrations. The business, as it should have, had little tolerance for me changing a depended-upon API, even though it was not documented.


The struggle here is that it's not always clear what behavior, buggy or not, is intended behavior. Especially as the complexity of an API endpoint or method increases, for example with large input models or mutable system state at the time of request.


Too many devs do not read the spec/docs and rely on testing instead. If it works, it's golden.


Of course, that doesn't mean they won't complain.


Obligatory xkcd "workflow": https://xkcd.com/1172/



It's very common for Conway's law to be regarded as some kind of warning, as if it's something to be "defended" against. It's not. Conway's law is the most basic and important principle to creating software at scale. A better way of stating Conway's law is that if you want to design a large, sophisticated system, the first step is to design the organization that will implement the system.

Organizations that are too isolated will tend to create monoliths. Organizations that are too connected and too flat will tend to create sprawling spaghetti systems. These two cases are not mutually exclusive. You can have sprawling spaghetti monoliths. This is also one of the dangers to having one team work on several microservices; those microservices will tend to intermingle in inappropriately complex ways. Boundaries are critical to system health, and boundaries can be tuned by organizing people. Don't worry about Conway's law, leverage it.


There is an error in the ninety-ninety rule, which should be stated as:

    The first 90% of the code takes the first 90% of the time. The remaining 10% takes the other 90% of the time.


A personal rule of thumb I derived from the ninety-ninety rule is this: "Before starting a project, ask yourself if you would still do it if you knew it would cost twice as much and take twice as long as you expect. Because it probably will."


I totally agree. I tend to phrase it a bit differently: "If you need to know how long it will take before deciding if it is important enough, then it is likely not important enough."

Another kind of corollary: "If the business will go under if we don't get this done by X, then we probably need a new business plan, not faster development".

These are rules of thumb and there are definite places where they don't hold, but I've found it genuinely useful to consider when the inevitable tough questions start to get asked.


Twice is a (reasonable) minimum.


I see the 90% rule as a recursive function: First we get 90% of the whole work in the first iteration, then 90% of the remaining code (now we are 99% complete), then 99.9% and so on.

The iteration is stopped when the software has enough features and an acceptable level of bugs to be considered complete. What complete means depends entirely on the field of the software. For proof-of-concept software we can stop after the first iteration, but for safety-critical software we might need 3, 4, or even more iterations.
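
Put another way (a back-of-the-envelope sketch, treating each pass as finishing 90% of whatever still remains), completeness after k passes is 1 - 0.1^k:

    #include <cmath>
    #include <cstdio>

    int main() {
        // Each pass finishes 90% of the remaining work, so after k passes
        // completeness is 1 - 0.1^k: 0.9, 0.99, 0.999, ...
        for (int k = 1; k <= 4; ++k) {
            std::printf("after pass %d: %.4f complete\n", k, 1.0 - std::pow(0.1, k));
        }
        return 0;
    }

How many passes you actually need then depends on where "acceptable" sits for your field, as described above.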


I like this, it rings true in my experience.


I've found that under-promising and over-delivering requires me to quadruple my best estimate.

Unfortunately, lots of us bid against people who over-promise. By the time the project is obviously behind schedule, it's too late, and the client can't switch to someone else.


To be honest, from my experience it’s usually a factor of five to get to full completion.


Sounds like you're just rounding up from 99.999%.


True! I came up with the factor five by observation but it’s in line with the predictions



That's funnier. As stated in the post, it's really just the Pareto principle again.


Murphy's Law has electrical engineering roots. I have a fun anecdote.[0] My wife is electromechanical and I'm computer science so we would work on projects together since we make a good team. I remember in college I was working with my wife on one of her projects and we were using force transducers. The damn things kept breaking at the worst times so we kept calling it Murphy's Law. After a while we looked it up. Turns out Murphy was working with transducers when he coined the phrase [1]. So I have this little back pocket anecdote about the time I got to use Murphy's Law in the original context. Which I can bring out in times just like this.

[0] I think it is fun. Your mileage may vary.

[1] https://en.wikipedia.org/wiki/Murphy%27s_law


Everyone always conveniently forgets Price's Law (derived from Lotka's Law). It states that 50% of work is done by the square root of the number of employees.

Interestingly, Price's law seems to indicate 10x developers exist because if you have 100 employees, then 10 of them do half of all the work.

This idea is particularly critical when it comes to things like layoffs. If those people get scared and leave, or are let go for whatever reason, the business disproportionately suffers. Some economists believe that this brain drain has been a primary cause in the death spiral of some large companies.
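
To make the arithmetic concrete (a rough sketch; it assumes the remaining staff split the other half of the output evenly, which is of course a simplification):

    #include <cmath>
    #include <cstdio>

    int main() {
        // Price's law: sqrt(N) people produce half of the output.
        const double n = 100.0;
        const double top = std::sqrt(n);       // 10 people
        const double rest = n - top;           // 90 people
        std::printf("top %.0f: %.2f%% of output each\n", top, 50.0 / top);     // 5.00%
        std::printf("other %.0f: %.2f%% of output each\n", rest, 50.0 / rest); // ~0.56%
        // That is roughly a 9x gap on average, in line with the "10x" framing above.
        return 0;
    }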


> Interestingly, Price's law seems to indicate 10x developers exist because if you have 100 employees, then 10 of them do half of all the work.

Or 0.1x developers exist...


You have to completely abandon reality to get rid of the idea. If 60 of the remaining 90 devs did NOTHING, those 10 devs would still be 2.5x better than the rest.

To take things further, make a bell curve chart. Put 50% of the area under the top 10%. Now, divide up the rest however you'd like. The only way to make this happen is for a huge percent to not only contribute zero, but to be actively hurting development to an extreme degree.

I have never found a 100 person company where 60% or more of the company was contributing absolutely nothing. I have never seen a company where a large number of people were actively harming the company and the company survived.


It's extraordinarily difficult to quantify this. Anecdotally, I believe I've worked with people who were net losses for the company, devs whose contributions would be better if they did nothing. And yet those same people often shine in specific areas, like "angular knowledge" or something like that.


> I have never seen a company where a large number of people were actively harming the company and the company survived.

"To survive" is a temporal measure. It's pretty common for companies to survive on a successful product (or group of products). The rest of the company was a shell and revenue sink for that line.


My understanding of your model must be inaccurate somehow. Here's what I think I'm hearing:

- the distribution of productivity of devs in an organization of N * N devs can be approximated as: N devs who are "Nx", and the rest of the devs are "1x" (Price's Law, assuming a binary distribution for simplicity)

- the value of "x" is constant for all sizes of organization (if it were relative "some are 0.1x" would be a change of units, not an abandonment of reality)

This would yield the extremely surprising result that the total dev production of an organization scales quadratically with the number of devs, so what am I misunderstanding?


It actually looks more like an exponential curve with 50% of the area under the curve fitting in the last few devs. If we normalized the "flat" side of the curve to be a 1x dev, then we probably have 80 1x devs, 5 2-3x devs, 5 4-6x devs and 10 8-9x devs.

Rather than quadratic scaling, we're dealing with scaling by the square root. This actually meshes very well with the "mythical man month".

If we almost double from 100 devs to 196 devs, we only go from 10 to 14 devs doing half the work.

We've already accepted that 10 devs were doing half the work of 100 devs. We've also accepted that those devs must be giving it their all. So when we double the devs, we only get 4 new people to fill the doubled top 50%. Either we have some new 20x devs or the actual amount of work hasn't increased at the same rate.

I would still say that is probably incorrect though. The "mythical man month" doesn't apply to total work done -- only to total useful work done. As the social complexities increase, the ratio of useful work to other work decreases, but those top developers will still have to carry both increases (to at least some degree) in order to still be doing half the work.

I suspect that as the social overhead increases, you should see three interesting cases. Those who can deal with the social overhead more quickly, so they have more real work time to compensate for being slower at it (potentially bumping a 5x dev with better social strategies higher). You could see the opposite where a 10x dev simply loses all their time in meetings. You could also see where a 1x dev with better social strategies handles most of a 10x devs social workload so that dev can instead focus on coding (it's rare, but I've worked on teams with 1-2 devs who did little except keep the team productive by fending off the bureaucracy).


>I have never seen a company where a large number of people were actively harming the company and the company survived.

Sorta buried the lede there, eh?


The law seems to hold for successful companies and companies that violate the law seem to disappear.

If someone has found the Russell's Teapot of companies that strays so far into absurdity while still being true, then let them bring forth the proof.


My first real job was as a contractor working with middle management at Bristol-Myers-Squibb. What I saw there was easily that absurd, and that company still exists.


I have always liked Postel's law (and Jon -- what a great human being he was) but I no longer like it as much as I used to.

The reason it's a really great idea is that it says you should engineer in favor of resilience, which is an important form of robustness. And at the same time, "strict in what you send" means "don't cause trouble for others".

However there are cases where "fail early" is more likely to be the right thing. Here are a few:

1 - Backward compatibility can bite you in the leg. For example, USB Type C (which I love!) can support very high transfer rates but when it can't it will silently fall back. So you could have a 40 Gbps drive connected to a 40 Gbps port on a computer via a cable that only supports USB 2 speeds. It will "work" but maybe not as intended. Is this good, or should it have failed to work (or alerted the user to make a choice) so that the user can go find a better cable?

2 - DWIM ("do what I mean") is inherently unstable. For users that might not be bad (they can see the result and retry), or it might be terrible ("crap, I didn't mean to destroy the whole filesystem").

I see these problems all the time in our own code base where someone generates some invalidly-formatted traffic which is interpreted one way by their code and a different way by someone else's. Our system is written in at least four languages. We'd be better off being more strict, but some of the languages (Python, Javascript) are liberal in both what they accept and generate.

This aphorism/law was written for the network back when we wrote all the protocol handlers by hand. Now that we have so many structured tools and layered protocols, it is much less necessary.


"The Harmful Consequences of the Robustness Principle" is a good read: https://tools.ietf.org/html/draft-iab-protocol-maintenance-0...


Being liberal in what you accept has turned out to be a security problem. This is especially so when this maxim is observed in a widely-deployed piece of software, as its permissiveness tends to become the de-facto standard.


I feel like Fonzie's Law would be a worthwhile inclusion: "The best way to get the right answer on the internet is not to ask a question; it's to post the wrong answer."


I refuse to fall for that bait.


Remember that all is opinion. For what was said by the Cynic Monimus is manifest: and manifest too is the use of what was said, if a man receives what may be got out of it as far as it is true.


Zawinski's law of software envelopment:

Every program attempts to expand until it can read mail. Those programs which cannot so expand are replaced by ones which can.


Another of jwz's laws is:

Any social media company will expand until it behaves like a bank; receiving deposits and making loans to customers (not necessarily users).


The modern angle on this rule is that "Every program eventually adds text chat, and they're all incompatible with each other."


Isn't mail just another form of text chat?


Somebody took your idea quite literally: https://www.coi-dev.org/

(HN discussion: https://news.ycombinator.com/item?id=19216077 )


Yes. That makes it the more general form of the original law.


The incompatibility is the key feature that differentiates modern text chat from those inferior mail applications.


  Some people, when confronted with a problem,
  think “I know, I'll use regular expressions.”   
  Now they have two problems.
(Originally with "sed" instead of "regular expressions")


    Some programmers, when confronted with a problem, think 
    "I know, I'll solve it with threads!"
    have Now problems. two they


There are only two hard problems in distributed systems:

    2. Exactly-once delivery
    1. Guaranteed order of messages
    2. Exactly-once delivery
Source: https://twitter.com/mathiasverraes/status/632260618599403520


I like.

> There are only two problems in computer science, naming things, cache invalidation and off by one errors.

I've got to say modern languages with foreach() have been amazing (makes me feel old when I consider a 20 year old widely used language 'modern').


Don't forget Atwood's law:

"Any application that can be written in JavaScript, will eventually be written in JavaScript."


Or Greenspun's Tenth Rule of Programming:

    Any sufficiently complicated C or Fortran program contains an ad-hoc,
    informally-specified, bug-ridden, slow implementation of half of CommonLisp.


I encountered a literal version of this the other day.

I’ve been looking at some disused AI systems, which were all written in Lisp back in the day.

In an attempt to remain relevant, at one point in the early 2000s someone tried porting one of them to Java. By first writing a Lisp interpreter in early 2000s Java. So the system had all the old dynamic Lisp programs as giant strings, embedded in a fugly static class hierarchy.


A mutant variation of this: Greencodd's Rule: "Every sufficiently complex application/language/tool will either have to use a database or reinvent one the hard way." (from c2.com)


And then there is Virding's First Rule of Programming:

Any sufficiently complicated concurrent program in another language contains an ad hoc informally-specified bug-ridden slow implementation of half of Erlang.


Variation 2: If you factor duplication heavily and go ultra-meta, you'll end up with Lisp or a clone of Lisp. However, only Lisps fans will be able to understand the code.


I suppose you mean that half manifests itself as Javascript.


aka JavaScript.


Or it will be compiled to it.


The brainfuck community is falling behind.


This is a list of rather generic catch phrases. I think the article isn't worth the time; I'm surprised to find it at the top of HN.


Yes, this doesn't strike me as good quality content at all. But it's a large list, which lets everyone pick something and chip in, which drives engagement.

If this sort of content is the sort that this community increasingly selects for, then it is perhaps time to look for fresh pastures. (I don't, however, know if this is indicative of HN's current community or just an 'accident' - I'm sure there have always been examples of poor quality near the top at times.)


I think it's not just this one; a few of the top ones are of a similar vein at the moment. Seems a bit off, as the quality of content is usually rather consistent.


I disagree. The article itself may not have much meat to it, but the discussion it has sparked is definitely worth the time to read.

There is a lot of back and forth in the comments about software design and workflow practices. I think this kind of discourse is extremely valuable.


They might be catch phrases, but that doesn't make them inaccurate.


There is an entire poster of funny 'laws of computing' that was created in 1980 by Kenneth Grooms. It's pretty amazing how many of these are completely relevant 40 years later...

It's hard to find the original piece of art, but my uncle had this hanging in his office for a long time, and now it's hanging in mine.

I transcribed it in a gist so I had access to them for copy/paste.

https://gist.github.com/sorahn/905f67acf00d6f2aa69e74a39de65...

(Those pictures were from an ebay auction before I got the actual piece)


> program complexity grows until it exceeds the capability of the programmer to maintain it.

... then it grows even faster.


Bonus points for green bar paper!!


Postel's law, "be conservative in what you send, be liberal in what you accept", is definitely not "a uniter"!

https://tools.ietf.org/html/draft-thomson-postel-was-wrong-0...


Yes, especially when you consider Hyrum's law.


Quick and dirty is rarely quick and always dirty.

(Don't know if it has a name)


You can have quick-and-dirty for initial release, but it's rarely practical from a maintenance perspective.

A related rule: Design software for maintenance, not initial roll-out, because maintenance is where most of the cost will likely be.

An exception may be a start-up where being first to market is of utmost importance.

Other rules:

Don't repeat yourself: factor out redundancy. However, redundancy is usually better than the wrong abstraction, which often happens because the future is harder to predict than most realize.

And Yagni: You Ain't Gonna Need It: Don't add features you don't yet need. However, make the design with an eye on likely needs. For example, if there's an 80% probability of a need for Feature X, make your code "friendly" to X if it's not much change versus no preparation. Maybe there's a more succinct way to say this.


First time hearing this one. I really like it.


Especially in the Java world. It can no longer be considered quick by the time the IDE boots up.


1. All software can be simplified. 2. All software has bugs. Therefore, all software can ultimately be simplified down to a single line that doesn't work.


Page author, if you read this: Fred Brooks' last name has an s. (Brooks, not Brook.) It should be Brooks' law.


Wouldn't it be "Brooks's" rather than "Brooks'"?

From what I know, the "*s'" thing works mostly for plural nouns. For singular, it only applies to classical & religious names ending with "s" ("Jesus'", "Archimedes'" etc).

I am not an English native so I may be completely off. Feel free to rage :)



Brooks' ?


This is exactly what I am trying to establish to improve my bumpy English. My best guess is that the correct form is "Brooks's" because (1) "Brooks" is a singular noun ending with an "s" and (2) it is neither a classical nor a religious name. If you claim it should be "Brooks'" I am ok with this as long as you give me a sensible explanation.


There's not exactly a consensus these days on what is correct. Either is valid, but I generally prefer _Brooks' Law_ to _Brooks's Law_ since it looks cleaner. Of course, Brook's Law is incorrect, as there is no "Brook".

Here's an example of the lack of consensus:

Either is acceptable: https://data.grammarbook.com/blog/apostrophes/apostrophes-wi... https://owl.purdue.edu/owl/general_writing/punctuation/apost...

Chicago vs AP style: https://apvschicago.com/2011/06/apostrophe-s-vs-apostrophe-f...

APA style suggests appending the extra 's': https://blog.apastyle.org/apastyle/2013/06/forming-possessiv...


So, the usual clusterf*k of opinions instead of a clear spec. People should be speaking SQL.

Thanks for the links. Plenty of educational value there!


> So, the usual clusterf*k of opinions instead of a clear spec. People should be speaking SQL.

Because that would be an improvement, or not much of a change?


Sorry, forgot to set the Sarcasm New Roman font again!


Honestly, it's more amusing for the ambiguity :)


Native English speaker, from England, and we were explicitly taught to use Brooks' rather than Brooks's.

However that didn't stop the Beatles from using "Octopus's Garden" as a song title. (Note that the song is about a single Octopus). I would suggest that it depends on whether you intend to explicitly repeat the 's' when speaking.

Plurals of words ending with an 's' are an occasional minefield. You sometimes hear people smugly insist that the plural of Octopus should be Octopi, only to have someone even more smugly point out that Octopus is from Greek, not Latin, and so it should be Octopodes. Meanwhile the rest of us just continue to use Octopuses....


In the US - I've always understood it to have something to do with pluralization, along the lines of:

* One river's fish.

* Jesus's fish.

* Many rivers exist.

* Many rivers' fish.

The name "Brooks" unfortunately fits both the second and fourth of these examples, making it even weirder.


I like Wiggins' Law (found in [My Heroku Values](https://gist.github.com/adamwiggins/5687294)): If it's hard, cut scope. I'm working on a compiler for my new language and sometimes I get caught up in the sheer amount of work involved in implementing a new language. I mean, I have to write a typechecker, a code generator, a runtime (including GC), a stdlib, etc. But instead of just getting overwhelmed, I'm trying to cut scope and just focus on getting a small part working. Even if the code is terrible, even if it's limited in functionality, I just need to get something working.


I'm not sure who they should be named after, but I'd like to suggest two more:

> Redundancy is bad, but dependencies are worse.

https://yosefk.com/blog/redundancy-vs-dependencies-which-is-...

> Always code as if the guy who ends up maintaining your code will be a violent psychopath who knows where you live.

https://stackoverflow.com/questions/876089/who-wrote-this-pr...


______ is like violence, if it's not solving your problem, you're not using enough of it.

(I first heard that for XML, and since have heard it for others. Was very funny for XML though. I also know it's not really a law.)


Communication?


ReRe's Law of Repetition and Redundancy [2] seems appropriate here:

  A programmer can accurately estimate the schedule for only the repeated and the redundant. Yet,

  A programmer's job is to automate the repeated and the redundant. Thus,

  A programmer delivering to an estimated or predictable schedule is...

  Not doing their job (or is redundant).
[2] https://news.ycombinator.com/item?id=12150889


Last week I was trying to remember a term for when a programmer designs a system so generic that it becomes a prototype of itself. For the life of me, I can't remember - anyone here on HN know?



Postel’s law is lately considered harmful, and Linus’ law has been disproven a lot of times (e.g. goto fail, but also in the Linux kernel).


The biggest one missing from the list, in my opinion, is Vogels' Law:

"Everything breaks, all the time" - Dr. Werner Vogels CTO Amazon.com


Or, alternatively, Norton's law: "Everything is broken." https://medium.com/message/everything-is-broken-81e5f33a24e1


Am I the only person here thinking that many of them are just anecdotes, or are deprecated?

> Given enough eyeballs, all bugs are shallow.

Just the count of viewers doesn't help. The owners of these eyeballs need both motivation to look for these bugs, and expertise to find them.

> The power of computers per unit cost doubles every 24 months.

Slowed down years ago.

> Software gets slower faster than hardware gets faster.

It doesn't. If you benchmark new software on new PCs versus old software on old PCs processing the same amount of data, you'll find the new one is faster by orders of magnitude.

Input to screen latency might be 1-2 frames slower, because USB, GPU, HDMI, LCD indeed have more latency compared to COM ports, VGA, and CRT. But throughput is way better.

> Premature optimization is the root of all evil.

Was probably true while the power of computers doubled every 24 months. It doesn't any more.


Joy’s Law: most of the smartest people work for someone else.


If your company doesn't hire more than 50% of all developers in the world at random, then this is probably true.


I would say Postel's Law, "Be conservative in what you send, be liberal in what you accept," should be tempered a bit. Sometimes it makes sense to be a bit more liberal with what you send (to make sure that consumers can handle errors) and more strict with what you accept (to make sure that consumers aren't relying too much on undocumented behavior).

For example, if you have a service with near 100% uptime, any other service which relies on it may not be able to handle errors or unavailability. Introducing errors in a controlled way can help make the dependencies more reliable.

As another example, being liberal about what you accept can sometimes result in security flaws, since different systems might be interpreting a message differently. Rejecting malformed input can be a beautiful thing.


Postel's Law is about handling standard protocols.

If you control all the clients and servers using a protocol, it does not apply to you. You're better being as strict as possible.


I know what Postel’s law is about; the argument stands. Postel said that in 1989 and our thinking about protocols has changed a bit since then. If you’re implementing a standard protocol like HTTP or TLS, and you are liberal in what you accept, this can cause security problems or other unintended behavior. For example, consider a proxy or gateway that interprets a request differently from an authoritative server. Suppose a nonstandard request is handled differently by each component. Ideally, one of the responses is, "this request is malformed, reject it". If each component handles the same request differently but without rejecting the request, you are quite possibly in a world of hurt.

More concrete example: suppose that an incoming HTTP request contains CRLFLF. To the proxy, “liberal in what you accept” might mean that this is interpreted as CRLF CRLF, which is the end of the request header. To the authoritative server, perhaps the second LF is silently discarded and ignored. Voilà: instant cache poisoning attack.
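
Schematically (not a working exploit, just the shape of the ambiguity), the request on the wire looks something like:

    GET /a HTTP/1.1
    Host: example.com
    X-Something: value<CR><LF><LF>
    ...rest of the bytes on the connection...

The liberal proxy treats that <CR><LF><LF> as the end of the header block, so whatever follows is parsed as the start of the next request. The origin silently drops the stray <LF> and keeps reading the same request's headers. The two components now disagree about where one request ends and the next begins, and that desync is what enables the cache poisoning described above.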



I would add Greenspun's tenth rule (law)[1]:

Any sufficiently complicated C or Fortran program contains an ad-hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.

And Benford's law of controversy [2], which I see around monorepo vs polyrepo, language choices, tabs vs spaces, etc:

Passion is inversely proportional to the amount of real information available.

[1] https://en.wikipedia.org/wiki/Greenspun%27s_tenth_rule [2] https://en.wikipedia.org/wiki/Gregory_Benford


Finagle's law: "Anything that can go wrong, will -- at the worst possible moment."

https://en.wikipedia.org/wiki/Finagle%27s_law


Oh please, not Knuth's "principle" again. Optimization is a skill; it's not evil. A skilled engineer can build sufficiently good systems without wasting much extra time on optimizations.


His full quote is a little less prone to abuse:

> Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.

Premature optimization isn't bad; premature micro-optimization is. You should also be thinking about optimization that results in better architecture and architectural decisions that make it easier to optimize in the future.


Yeah, I get frustrated that few people actually post the full quote, because, with the context, it means something completely different to young ears.

The full quote makes me think: "You should identify the critical paths of your system early." The shortened quote makes me think: "Deal with performance later."

Pretty big difference in meaning.


Personally, I think it's more about balancing trade-offs. You need to have some semblance of target performance needs. Bad architecture can be hard to overcome later.

Most decisions are small, but can lead to compounding effects. Personally, I think one should also avoid premature pessimization. No one in their right mind would use bubble sort over quick sort, for instance (not saying quick sort is the best algorithm, but it's better than bubble sort). One pet peeve I have in C++ is when I see people initializing a std::string with a literal empty string instead of using the default constructor. The default constructor is usually a memset or initializing 3 pointers. Initializing with an empty literal involves a call to strlen, malloc and strcpy. I've yet to see a compiler optimize this. May not seem like a big deal, but considering one of the most frequently used data types is a string, it adds up a lot. Most of the applications I've worked on show std::string methods and constructors as hotspots when profiled (back office financial systems).
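
For the curious, the two forms side by side (a minimal sketch; what each constructor actually costs varies by standard library implementation, e.g. with small-string optimization):

    #include <string>

    int main() {
        std::string a;       // default constructor: just sets up empty internal state
        std::string b = "";  // const char* constructor: treats "" as a C string, so it
                             // goes through the measure-and-copy path described above
        (void)a; (void)b;    // silence unused-variable warnings
        return 0;
    }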

I agree one should avoid premature micro-optimization, but you can also avoid premature pessimization.


The problem with the Knuth quote is that apparently people have turned the word "premature" into a synonym for "up-front." No, that's not what it means. Knuth is not saying that you should code (shit-out) the whole system naively and then try to find out where the bottlenecks are, as this dumb post suggests. Anybody that knows about the Knuth programming contest story would know he would be the last to endorse this. Premature literally means spending time on optimizing individual things before their need becomes apparent. It doesn't mean you should delay optimizing a system design from day 1.


I think the issue is that people often conflate optimization with simplification. You can, and should, try to simplify the problem space early on in the project life cycle but don't waste your time on things like comparing runtime performance of string pattern matching algorithms.


DevOps Borat is a wonderful, if hardly understandable, account of related 'rules' and aphorisms. Sadly, it is no longer updated: https://twitter.com/devops_borat?lang=en

Some choice tweets:

Cloud is not ready for enterprise if is not integrate with single server running Active Directory.

Fact of useless: 90% of data is not exist 2 year ago. Fact of use: 90% of data is useless.

In devops we have best minds of generation are deal with flaky VPN client.

For increase revenue we are introduce paywall between dev team and ops team.


Kind of unrelated to the article, but does the Moore's law joke about the cat constant make any sense? The `-C` constant should be on both sides of the equation (since the formula is future computation _relative to_ current computation), and thus cancel out. As it stands, the equation doesn't make sense when 'Number of years' is zero, and is inconsistent between calculating twice in 2-year intervals and calculating once with a 4-year interval (as an example).


Wouldn't the following be an embrace of Conway's Law rather than a defense against it?

> It is much better, and more and more implemented as such, to deploy teams around a bounded context. Architectures such as microservices structure their teams around service boundaries rather than siloed technical architecture partitions.

> So, structure teams to look like your target architecture, and it will be easier to achieve it. That's how you defend against Conway's law.


Interesting read, although at this point "Given enough eyeballs, all bugs are shallow" should be regarded as a fallacy, not a law, because:

- no one reads open source code

- those who read do not understand it

- those who understand don't file bug reports.

- those who file bug reports file them for their own issues coming from misunderstanding/misapplication of the software, not actual bugs.


I expected to see Lehman's laws[0] in there too, but maybe they are not famous enough. Maybe they don't deserve to be, but I think they are relevant observations.

[0]: http://wiki.c2.com/?LehmansLaws


* An organisation's requirements for data processing are a function of the organisation's data processing capabilities, and always greater

* All software has bugs, no software is inefficient

* A programmer's work is never done


Moore’s law is dead!

Also, I think Murphy’s law should be removed; it’s less true than the other laws here.

I read a fantastic article many years ago in the Atlantic where the author was analyzing and deconstructing an airplane crash, and in it was a paragraph about how Murphy’s law is completely backwards, and in reality if things can go right, then they will. Things will almost always go right unless there’s no possible way they can, in other words only the extremely rare alignment of multiple mistakes causes catastrophes. Can’t remember if the author had a name for the alternative Murphy’s law, but I believe it, especially in software. We get away with crappy software and bugs & mistakes all over the place.


I think people interpret Murphy's law incorrectly most of the time.

We can extrapolate from "Anything bad that can happen, will happen", and get the statement: "If something can physically happen, given enough time, it will eventually happen."

I like to think it's sort of a very tangential sister idea of the mediocrity principle.


I'm not sure I understand what you think is incorrect; your explanation seems to align with the common interpretation.

Here's the article I was thinking of. Totally worth the read, aside from discussion of Murphy's Law...

https://www.theatlantic.com/magazine/archive/1998/03/the-les...

"Keep in mind that it is also competitive, and that if one of its purposes is to make money, the other is to move the public through thin air cheaply and at high speed. Safety is never first, and it never will be, but for obvious reasons it is a necessary part of the venture. Risk is a part too, but on the everyday level of practical compromises and small decisions—the building blocks of this ambitious enterprise—the view of risk is usually obscured. The people involved do not consciously trade safety for money or convenience, but they inevitably make a lot of bad little choices. They get away with those choices because, as Perrow says, Murphy's Law is wrong—what can go wrong usually goes right. But then one day a few of the bad little choices come together, and circumstances take an airplane down. Who, then, is really to blame?"

Of course, regardless of which way you interpret Murphy's law, the law itself and this alternative are both hyperbolic exaggerations. The main question is more of which way of looking at it is more useful.

In terms of thinking about safety, it seems like both points of view have something important to say about why paying attention to unlikely events is critical.


I suppose what I generally mean is that most of the people that I've talked to only consider it within the scope of "what can go wrong", and seem to never consider the more general statement. I'm certainly not claiming to be the first person to think such a way, if that's the impression I gave off.

Murphy's law is a favorite of mine because it's the perfect diving board for conversations about infinite probabilities and aliens and simulation stuff.


I guess I still don't know exactly what the more general statement is you're referring to. Do you mean just that a non-zero probability of a single event happening equals 100% probability given a large enough sample of events (which may take a large amount of time)?

I feel like Murphy's law as stated captures that idea adequately. And it's certainly true if the event probability really is non-zero. Sometimes, though, we can calculate event probabilities that are apparently non-zero based on known information, but are zero in reality.

One example in my head is quantum tunneling. Maybe this is along the lines you're talking about? And this is the way my physics TA described it many years ago, but caveat I'm not a physicist and I suspect there are some problems with this analogy. He said you can calculate the probability of an atom spontaneously appearing on the other side of a solid wall, and you can calculate the same (less likely) probability of two atoms going together, therefore there is a non-zero probability that a human can teleport whole through the wall. The odds are too small to expect to ever see it, but on the other hand, with the amount of matter in the universe we should expect to see small scale examples somewhat often, and we don't. There may be unknown reasons that the probability of an event is zero.


It looks like we agree on all points, yes


It’s also in Understanding Human Error by Sidney Dekker when talking about normalization of deviance. Everything that can go right will go right and we’ll use that to justify deviance more and more.


As I interpret Murphy's Law, it's not so much about failure in actuality, but more about anticipating the failure, and designing your code/product/system for the worst case scenario.


> it's not so much about failure in actuality

Murphy himself was unhappy about the common interpretation of his law, which is negative rather than cautionary, implying a vindictiveness to exist in inanimate objects and the laws of chance.

> but more about anticipating the failure, and designing your code/product/system for the worst case scenario

Which was his intent. IIRC the phrase was coined while working on rocket sleds for a land speed record attempt. He was essentially trying to achieve "no matter what happens we want to maximise the chance of the pilot being alive afterwards, if some of the equipment survives too that is even better" and promoting a defensive always-fail-as-safely-as-possible engineering stance.


Exactly. You can't say "That won't happen" or "that's unlikely to happen". You have to have a way that handles it so that, even if it does happen, the guy on the sled doesn't die.


> Murphy himself was unhappy abut the common interpretation of his law, which is negative rather than cautionary

Are you sure about that? Murphy's actual statement was negative and not cautionary. He was criticizing a person, not saying something cautionary about the nature of the universe.

https://en.wikipedia.org/wiki/Murphy%27s_law#Association_wit...


Does it work for you? The site only shows a pulsing gray circle.


As is often the case with these kinds of abuses of JavaScript, Firefox's Reader View solves the problem.


It seems to be a strange interaction between CSS and JavaScript. Using uMatrix, I can read the text with both CSS and JavaScript disabled, but not with only JavaScript disabled. This is the first time I've encountered this curious behavior.


It seems to be a SPA so it requires JS to load the content.


My law: convenience topples correctness. Evidence: programmers have been proven incapable of quoting Knuth's optimization principle correctly and in full.


Could someone explain the last one, Norvig's Law:

"Any technology that surpasses 50% penetration will never double again (in any number of months)."


Double 50% is 100%, so assuming your market doesn't significantly grow, it's impossible to double your market share again.


Well, I wonder if it is true. I mean, theoretically, something could reach 50%, drop down to 30%, and then double to 60% again. Yes, it is unlikely, but I see no reason why it should not be possible:

http://www.norvig.com/norvigs-law.html

But most certainly that is not how the law was meant to be interpreted.


2x 50% penetration = 100% penetration.


"Any programmer can be replaced with a finite number of interns" - Janusz Filipiak, the biggest shareholder of Comarch, Poland.


My personal law from working in software consulting is "triple the estimate."


The Peter Principle is also referred to as Putt's Law (https://en.m.wikipedia.org/wiki/Putt%27s_Law_and_the_Success...) and phrased slightly differently.


Putt's Law seems totally different.

* Putt's Law: "Technology is dominated by two types of people, those who understand what they do not manage and those who manage what they do not understand."

* Putt's Corollary: "Every technical hierarchy, in time, develops a competence inversion." with incompetence being "flushed out of the lower levels" of a technocratic hierarchy, ensuring that technically competent people remain directly in charge of the actual technology while those without technical competence move into management.

In the Peter model, everyone gets (or tends to get) promoted until they reach a job they can't do, and they stay there. Thus everyone will (tend to) be incompetent. In Putt's model, the technically incompetent get promoted, and those at lower levels are competent.

Putt's does sound more like the way the world works...maybe. Peter's has always sounded convincing to me, yet the world evidently isn't so bad as that.


Missing: Goodhart's Law


Is there a law about the probability that an article about programming will reference XKCD?


A partner to the Peter Principle, particularly with respect to managers, is the Dunning-Kruger effect: "In the field of psychology, the Dunning–Kruger effect is a cognitive bias in which people of low ability have illusory superiority and mistakenly assess their cognitive ability as greater than it is."


[flagged]


That loaded extremely slowly on mobile admittedly, but I do agree that you're being overly harsh. Why assume he authored the site himself? Is it a requirement that someone doing web programming has to author their own blog site? It feels like a form of gatekeeping to be honest.


HN’s law 1: anything you didn’t build seems worse to you.

HN’s law 2: you always become a parody of yourself.


Oh no, the problem is they DO know how to program; a simple web page (or off the shelf CMS or blog software) is too easy.

Anyway, this was made with Ghost according to the generator tag; send https://ghost.org/ a ping that you're willing to help them improve their software.


100% agree. This website looks terrible. You can't be a self-declared "web developer", contemplate poetically about meta-programming, and publish this amateurish thing. Fix the #anchors to start with?


The loading graphic may be more to prevent a Flash of Unstyled Content (FOUC) than to mask loading times:

https://en.m.wikipedia.org/wiki/Flash_of_unstyled_content

Single page apps have quite a bit of control over rendering, however Google prefers that indexed markup be pre rendered from the server. Juggling the two competing priorities leads to byzantine technical issues.

Front end development for content sites can be complex.


No, you're not being harsh. That site loads 1.3 MB of JS, even though it doesn't have any dynamic content at all. The small header images for related articles you see along the right side are loaded as full-HD images (probably the fault of the WordPress theme).

Looks like the author has a really good internet connection and just hacked together a site over lunch. Web devs should really limit their internet connection to 100k while testing their sites.


>Web devs should really limit their internet connection to 100k while testing their sites.

Now that is a great idea.



