Normalization of Deviance (2015) (danluu.com)
447 points by akshaykarthik on Feb 14, 2023 | 219 comments



“Let's say you notice that your company has a problem that I've heard people at most companies complain about: people get promoted for heroism and putting out fires, not for preventing fires.”

My first day at work at big-laser-company. Manufacturing engineer for a laser (then) so complex, it required a PhD to solve problems to get units out the door. The product was a ring laser. What that means is that the laser beam travels around in a race track pattern inside the laser before getting out, not a back-and-forth bouncing between two mirrors. Now this laser could be tuned to any wavelength by suitable setups and machinations, and once there, would “scan” a small amount about this wavelength, enabling scientists to study tiny spectral features in atoms and molecules with great precision. I knew all this shit. I was a Berkeley-trained physicist who built precision lasers out of scrap metal for my thesis. First day of work. I walk into the final test lab. The big laser was happily scanning away. The bright yellow needle-like output beam was permitted to hit the lab wall. As the laser scanned, the beam was MOVING on the wall. Whereupon, first day of work, I exclaimed the most obscene four words in manufacturing, for all to hear, “You can’t ship that!” (“Beam pointing instability” is detrimental to almost any laser application. It turns out that during scanning, an optical element was rotating, on a shaft, inside this laser. This mechanical motion caused beam motion.) Well, I got an immediate reputation as a negative guy. (You can tell it’s deserved.) The solution was to retrofit 28 lasers in the field, mostly in Europe, with a component that cancelled the movement, on an expensive junket by a service guy. Who was hailed as a “hero.”


One of the key questions in my due diligence practice is whether people are allowed to 'be negative' and to literally stop the line to avoid shipping a defective product.

This one question tends to separate out a very large fraction of companies that take unacceptable risks and allows the ones that don't to be justifiably proud of their attitude towards risk. These are not trivial things either: medical devices, software used in medical diagnosis, machine control and so on, where an error can quite literally cost someone their life or a good chunk of their healthy life-span. Companies where people can't or won't speak up tend to have a lot of stuff that's wrong swept under the carpet.

Kudos to you for speaking up, and irrespective of who got to be called a hero (that part isn't all that relevant to me) also kudos to your employer for acting on your input.


Jidoka[1] is a key feature of Toyota's manufacturing process that emphasizes detecting defects before they make it out the door and empowering workers to stop the line and get to the root of the problem. It's weird that this isn't a no-brainer for most orgs but I guess there's enough profit incentive in shipping faster at the cost of quality.

[1] https://en.wikipedia.org/wiki/Autonomation


Yes, it's worked really well for them: https://en.wikipedia.org/wiki/2009%E2%80%932011_Toyota_vehic....

I worked for a company that swallowed the (so-called) Toyota schtick hook, line, and sinker. About 14 years ago I tolerated some Toyota UK fossil coming in and berating me, in front of my entire team, for being a crap project manager, despite the fact that I was the most reliable and accurate product manager said (very successful and healthily growing) company had at the time. Seriously, still, fuck that guy with a nail-festooned cricket bat. I fucking shipped everything within the constraints I'd described at the beginning of the project, and it did great in the market. Anyone who doesn't like it is welcome to kiss my ass. But whatever.

Toyota or, more accurately, consultants who like to hawk the Toyota Production System (TPS), talk a good game, but the reality isn't always aligned with the ideals. Jidoka is evidently not a reality at Toyota, and they aren't much more enlightened than other orgs when it comes to pointing out problems, despite their A3 reports and multicoloured boards.

The Reckoning, by David Halberstam, makes it clear that "Toyota-like" practices aren't unique to Toyota amongst Japanese auto manufacturers. It also makes clear that these practices primarily exist to keep workers engaged and morale high (because, for those of you who've never worked on a production line [I have], in case there's any doubt in your minds, yes, it's boring as fuck).

The reason Toyota was much more successful than other Japanese auto makers in the second half of the 20th century has bugger all to do with their production process, and is instead the result of them being more aggressive and decisive in the wake of WWII: they simply opened a bigger factory sooner than their competitors and were therefore able to meet demand better. This gave them a trading advantage that lasted decades. The TPS didn't hinder their advantage, but it's absolutely disingenuous to claim it as the root cause.

Do NOT drink this koolaid about the TPS. I'm not saying there's nothing of value in it (I like genchi genbutsu, for example), but take it all with a pinch of salt. The value depends on who you are, who your team is, and how as a group you best operate. Fork-lifting business practices thoughtlessly from one organisation to another often doesn't work that well and TPS is no exception. It's no better than Agile cargo-culting but, because TPS is less mainstream, perhaps hasn't come under the same critical scrutiny.

Plus TPS's penchant for fault finding and negative culture overall just pisses people off and drags them down when they are (or should be) engaged in more creative problem solving. So something didn't work out: get over it, move on, and find another solution. Don't spend ages navel gazing about it. WTF? Seriously, if you think nitpicking everything and everybody makes you a good manager, you're an idiot and you should find another vocation. Fuck the fuck off. You're a tedious oxygen thief who's boring everyone.

Maybe it makes sense when you build the same thing over and over and over again, but we don't do that and we never did, so it was always ridiculous to expect this to work well (and I say this as someone who, in good faith, gave it a go, but the problem is that perhaps all the people pushing it at the time weren't acting in good faith).


As soon as something becomes a religion it loses most of its value.

To me the 'Toyota way' was more of an illustration than an exact guideline to follow, and I've found this to be true for most of these things that tend to become a religion. Scrum, TDD, etc. all have this potential to become fodder for consultants who essentially sell a dream that they cannot deliver on. But that doesn't mean there isn't a kernel of truth in there.


Form over substance. Toyota, and other Japanese companies, are living the substance of Lean and TPS. Most other companies implement the form and hope they magically get where Toyota is without any additional effort. Same goes for Agile and any other management "philosophy".


Yes, it's the essence of cargo culting. The tech world is also full of this stuff. The number of small companies that I've seen that implement the Spotify development team structure is pretty tragic.

People are always looking for silver bullets and the industry is rife with examples of this kind of thing.


Even better, "we're like Google" except they give you a blank stare when you ask about the 20% time, free food, compensation, or where they hire from. Turns out it's all bootcamp hires at local wages, no stock, no 20% time. But hey, they have beanbag chairs!


I'd like to apologise for the overly aggressive tone of this comment: I was drunk when I posted it. I have long since learned that drunk posting is a bad idea and yet, sometimes, I still am unable to resist that temptation. Anyway, this post isn't kind, and whilst I do have some issues with the TPS and think Toyota's history is often misrepresented, nobody needs to read a frothy mouthed rant about it.


First off, name a single auto company that hasn't had a recall in its history. I think it's a bit unfair to point at a recall and imply that the systems they employ are bad because of it.

Secondly, more to your point, I'm certainly not trying to defend the whole system or even imply that it's effective at accomplishing its stated goals. I'm merely saying that the concept of encouraging employees working with/creating/designing a product to point out flaws and making a point of digging into where defects are introduced is a good idea. I definitely can't speak to how well that philosophy is applied at Toyota but I think that's moot regardless.


I don't dispute this, but at the same time, the cars from Toyota and Honda during the 1980s were vastly higher quality than their American and European counterparts. This could have contributed to the desire to emulate TPS.


Could it be an example of... well not survivorship bias, but, TPS gave them a competitive advantage in the 80's, and since then, other car companies have adopted it or something similar and upped their game, normalizing it to the point where people don't care about it anymore?

That is, once something is normalized you don't notice it anymore. Like how people who saw the 'rona epidemic as under control (ish) thought the measures were no longer needed.


I do think that people have much higher expectations today. Getting an engine rebuilt is mostly no longer a thing. The US mostly caught up, then Europe.


That depends on the car brand. Mercedes don't require engine rebuilds because the engines outlast the body they are in. Most cheaper brands don't require engine rebuilds because it isn't economical to do so; by the time the car clocks over 250K, an engine rebuild costs way more than the car's book value.

But for classic sports cars they're fairly normal, those engines were not made to last forever, tend to be fairly high power for their displacement and the book value of the cars is high enough that rebuilding an engine can make sense.


Same for their JIT supply management. It's an attitude I still have a hard time countering with manufacturing contractors, years after the global supply chains deteriorated.


Those things work well when you're the only one doing them to gain an advantage over your competitors. But as soon as everybody starts doing it, the whole chain adapts, and suddenly all that stock that allowed you to do JIT and offload the costs of keeping that stock onto your suppliers evaporates, which takes all of the slack out of the system. Now everybody has to perform, and that will work right up to the first crisis, and then the whole house of cards comes tumbling down.

It is always important to know what the underlying assumptions of your strategic advantages are. Going 'countercurrent' can work, but then if the tide turns you need to be aware that your previous advantage is now a risk.


Toyotas DO break less. But their designs are stodgy, very slowly updated, and generally draw premium prices. So customers keep flashier makes in business.


There definitely is a lot of pretend, or alleged, authority floating around in a large organization. But when it comes to brass tacks, the number of folks who really can do something and have it accepted is quite small.


In my experience this very much depends on two things: the industry and upper management.

Some industries have a really lax attitude and in quite a few cases upper management basically makes it impossible for people to speak up.


The flip side of "quality first" / stop the line is getting killed in market by a worse but faster or cheaper solution. You are locked in game theory with your regulators (if you have any) and your competitors.

It's disappointing (and career limiting) to do the "right" engineering and lose because you didn't correctly gauge the risk tolerance of the market.

I don't think there's any one answer for this.

My observation is that people will pay a premium for demonstrable physical safety features but privacy & security in software do not win markets.


This is mostly true, but for things like medical devices, aerospace, industrial control and so on, no corner cutting should be allowed.

I'm pretty sure that Philips right now has some thoughts on this as does Medtronic. Those two should have never happened and personally I'm all for liability of executives in such cases.


Heh, heh. They ignored my input and kept shipping. Then the customers began calling…

Meanwhile our major competitor cleverly placed their rotating element in such a position that the beam retraced itself through the rotating element, thereby substantially cancelling this effect.


Oh, I totally misread that! From your story it seemed to me that they fixed it. Ok, that is less nice, then.

I've played around with some (at the time) fairly high-powered lasers and have extreme respect for them. The number of near accidents with those things was large enough that I learned to triple check everything, and to check for stray reflections at reduced power and the cleanliness of all optics before going all in. That saved me more times than I care to remember and is a nice reminder of how finicky a powerful beam of light can be. It doesn't take a whole lot to get a sizeable fraction of your beam ending up in places where you really don't want it to be. But they're lots of fun, even if they are dangerous :)


I've also learned this as "bring the solution", regardless of whether or not you find the problem.


Do you actually use the phrase "be negative" when you ask the question? I could see that distorting the responses by people who don't consider pulling the cord to stop the line "being negative."


Or … can the first officer order a go-around? Reading too much Admiral Cloudberg!


I once encountered a situation with a very expensive field laser. At one point, measurements started showing an increasing amount of offset.

Over a period of days, the error became increasingly, comically bad, until finally the system refused to boot.

A technician was called, and after hearing about the behaviour, the first request was that a photo of the laser light exit port be taken.

It was obvious why it wouldn’t boot: a mirror in the light path had fallen off.

The worst part was, the mirror had been held on by glue, and had been slowly slipping out of place. The hot climate was probably a factor.

They really should have had someone to say ‘you can’t ship that’ when the topic of glue to hold mirrors came up.


This exact problem happened with an optic in my lab in graduate school. For two years the senior grad student and postdoc blamed each other over the entire apparatus becoming misaligned every couple of days. (It was a really toxic environment.) Eventually, they both left, I was the only one there, and it still became misaligned. In one day I tracked it down to a prism from Thorlabs whose glue had gone bad, positioned at the very beginning of the laser line: it was sliding in its mount.

I wish I had pushed more strongly about it. We spent probably a full person-day of work every week on that.


Oh man, this one hurts to read a little bit. It's crazy how people cooperating poorly can eat up that much working effort.


Reminds me of that giant pager outage ~ 20 years back. I remember one of the stories mentioned a woman who was going to leave her husband because he wasn't answering her page.


Well that's just abusive, really (to threaten to leave someone for not being in minute by minute contact with them/clear signs of an abuser power tripping).


Later in my career, I worked for a company whose principal technical strength was that they knew how to glue optics together in such a way that they NEVER moved, either thermally or from shock. Detachment? The optic would break somewhere other than the glue joint first. And the solution had little to do with which glue was used (though that was optimized too). These assemblies were flown in space, landed on the moon, and were in all U.S. attack helicopters.


>They really should have had someone to say ‘you can’t ship that’ when the topic of glue to hold mirrors came up.

I work in product at a hardware company and have a lot of domain experience which came from spending years in the (literal) field. There have been many times when I write a product spec and the engineers are incredulous. "Really? It gets THAT hot?" or "Do we really need to provide a bonding/grounding lug on the case?"

It's not uncommon to find engineering teams with deep domain experience in one area, but completely lacking in others. Ignoring domain experience, there should have been rigorous product testing during design that would have weeded out the glue issue.


Interesting. I had an internship at a company that did inertial navigation, mostly for defence applications. I only knew of ring lasers for use in gyroscopes. (Send a laser around a loop waveguide/fiber optic, and any translational motion cancels out going around the loop, but any rotation about the axis perpendicular to the plane of the ring shows up as a frequency difference between the counter-propagating beams, the Sagnac effect. Tune the laser to have a standing wave, and rotation shifts the nodes of the standing wave around the ring.)
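
For reference, the standard textbook relation for a ring laser gyro (my addition, not from the comment above): the two counter-propagating beams beat at a frequency proportional to the rotation rate,

    \Delta f = \frac{4A}{\lambda L}\, \Omega

where A is the area enclosed by the ring, L its perimeter, \lambda the laser wavelength, and \Omega the rotation rate about the axis perpendicular to the ring's plane.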

I had a colleague who got called up when a Trident missile MIRV bus fell off a forklift and he had to do simulations to tell the Navy if it was still good or needed to be brought back in for rework/recalibration. My understanding is that either the MIRV bus itself or its container has integral devices that record peak 3-axis acceleration for just such a scenario. I imagine they're as simple as a few precise weights on a few wires with precise failure strains, so you can bracket the peak acceleration by which wires broke and which survived.

On the one hand, it's great to have more accurate nukes, which allow lower yields, smaller stockpiles, and presumably smaller craters if everything goes sideways. On the other hand, "surgical" nukes make it more likely that one side will use them and gamble that the other side won't massively retaliate.


You could look at it a different way: a more accurate nuke means a nuke that's targeted at military facilities and not sized 10x larger and aimed at "everything around that city over there".

If it was ever used, that work saves lives.


More importantly, I think more accurate nukes along with good satellite multispectral and signals intelligence means that top generals carrying out orders for nuclear first strikes can be more certain that they're signing their own death warrants. Hopefully this results in any leader ordering a nuclear first strike getting deposed by military coup rather than starting a nuclear war.


People are willing to die for causes all the time. The idea that a bunch of people would not take an action because it might kill them, particularly in the military, is pretty naive.

The history of nuclear brinksmanship is built on almost the exact opposite problem: people who are completely willing to sacrifice themselves for the cause and their government and who believe fully that the cost would be worth it and the decision would be correct.

Giving one's own life is one of the easiest sacrifices to make, and people who believe that are dangerous because they tend to volunteer a bunch of others to do so alongside them.

Nuclear command and control isn't about keeping any one person alive, it's very much about keeping the system functional so the deterrent is preserved. There's no way, within it, to actually ensure any level of personal survival - but the various advocates for first strikes at different points in history have never been concerned with that. They want their legacy, they want the problem solved "forever".


> More importantly, I think more accurate nukes along with good satellite multispectral and signals intelligence means that top generals carrying out orders for nuclear first strikes can be more certain that they're signing their own death warrants.

How would you do that? In the event of a nuclear war, my understanding is they'll mostly be flying around on special command and control planes. I don't think nuclear intercontinental SAMs are a thing. I'm not even sure they could be possible (wouldn't they need active guidance, which would be very hard on reentry?).


Ahh, yes, the flaw in my optimism is that those doomsday planes do in fact have direct radio links to send the PAL codes and authenticated launch orders directly to the silos, submarines, and standby bombers.

The tier of generals just not senior enough to have a seat on the doomsday planes isn't in the emergency line of command to the nuclear weapons. So, regardless of how powerful a small coalition of those generals is, they cannot reliably prevent a nuclear launch. (They'd need a pre-existing conspiracy to quickly and efficiently turn their own air defence batteries against their own doomsday planes... at which point it seems very likely they'd just launch a coup long before a nuclear strike was ordered.)

So, I guess our last hope is that a small conspiracy of generals just under the doomsday plane tier would stage a coup once the nuclear sabre rattling reached a sufficient magnitude, before the nuclear first strike order is given.


> So, I guess our last hope is that a small conspiracy of generals just under the doomsday plane tier would stage a coup once the nuclear sabre rattling reached a sufficient magnitude, before the nuclear first strike order is given.

Even if that happened, it's just buying a little time. Some set of leaders/generals in the future will push the button (or build automated systems that do it for them).

Disarmament ain't gonna happen, and anything with a small chance of happening will happen, given a long enough period of time.


Please just don't do that. (Or missed irony?)


They proved you wrong, at great cost.

A more correct and polite version of your advice: "It will cost us a lot more to ship this as-is, and fix it later, than it will to delay shipment and fix it now. Is it too late to do that? Did we over-commit to shipping now?"

It wasn't your responsibility to come up with that version. It was your manager's responsibility. It was also their responsibility to find the necessary decision-makers and involve them directly. I would argue that this sort of work is the only real way that "management" can provide value in the first place.

Somehow, socially, it's incredibly common for people to value the inverse of that job. People assume it is "good work" for a manager to successfully ignore unpopular concerns, and push through to the end, no matter how inefficient that makes the journey.

That works out in the case that the shipping date was over-committed, such that a delay would cost more than fixing it later. Even so, that entire situation would be avoided by refusing to over-commit shipping dates in the first place. That's the same responsibility applied earlier in time, so a manager who behaves the way I have described could factor out the entire problem at its source.

This is what the average person should learn about management. Even if it's not their job personally, there is a lot of leverage behind the decision a worker makes about what management behaviors to socially favor, and what behaviors to socially reject. That leverage is multiplied at every level up the hierarchy, meaning that the opinion of someone in a management role is very significant, and the opinion of someone in an executive role is crucial.

It's really difficult to be explicit about opinions. You can't really put them in your resume, but at the same time, an opinion on management style may be an executive's primary value contribution!


I have seen this far too often in engineering projects. The push is to get the product out the door and turn it over from R&D to production engineering. Never mind the final quality; that will be fixed in the field by another department with a different budget. We got a product shipped; our department's reputation and budget are intact.


I'm not in as prestigious a line of work as you are but I've found the exact same thing happens in my industry (web development, mostly).

Everyone wants to think of the cool ideas to make things work, but few people want to think of all the ways those ideas can break, fail, fail to be future-proof, be expensive, etc., whereas I relish it; what's more satisfying than helping make a proposed or existing solution even better?

But the same applies to stuff outside of work, too. I find I'm quite negative about stuff in the exact same way, and whilst it's fun to think "how could we fix this, how could we make it better", all people see is negativity, and social pressure has made me start to rethink this approach in life. It's better to keep your mouth shut and let the fire start than to open your mouth and be negative, as per your analogy.

Hell it even applies to traditionalism; "we should put out that fire" "but that fire's always been there, that's the way it's always been" "but it's a fire!!!" "yeah, well it was here before you and we like it. That fire walked uphill both ways through the snow to get to school".


Hah, that is so me! Well, not for lasers, but the dynamics are the same. You point out issues and risks, are ignored and labelled negative. Then when those things cannot be ignored, people come crying to you for help. And when they hear the solutions, aka stop doing what you are doing wrong, they again label you as negative, up to the point of blaming for everything that went wrong.

You are not getting points for preventing fires; you get them for putting them out. Unfortunately, some folks seem to conclude that lighting fires, just to put them out later, is a good and easy way to earn that "hero" reputation.


These stories ring so, so true. Once worked at a company whose infrastructure issues were so deep and festering that after fighting a fire, my boss told me, "If you go to the press about this, the client will sue us and everyone who works here will lose their jobs."


That's what I never understood about this story; did you guys have any suspicion it would dump radiation into the patient all at once, or was this like a concurrency bug?


We weren't able to reliably install security daemons on a client's machine because the entire automation system didn't account for autoscaling. The issues were raised well before I joined and the project head legitimately didn't understand it as a problem that needed solving. The hosts were for a presidential candidate's webserver, and they noticed the webservers were missing security daemons days before the election.


> security daemons

AKA compliance checkbox crap?

If infrastructure is immutable (which makes it work even better for autoscaling), nothing new will get installed unless you build a new image. Export whatever data you require to ensure things you want to be running are running. Monitor entry and exit points.

What is left for the "security daemons" to do?


Maybe I'm missing a joke, but was your client HRC's campaign?(?!)


I think she should have gotten more hacker cred for running her own mail server.


Right? Of all things to self-host.

Although if it was an IRC server then that would have been truly 1337.


Did HRC's campaign website get hacked? I know her mailserver was hacked, but that was when she was secretary of state, no?


That's not important, and this ain't the place to ask; otherwise they'd have told us.


> this ain't the place to ask

Am I double-whooshing here?

How is a Hacker News comment thread not the right place to respectfully ask questions in response to interesting comments? I know I'm not entitled to an answer, nor do I intend to start a flame war. Sheesh.


There's nothing respectful about asking something that someone has very blatantly made a deliberate decision to leave out of their post, for completely understandable reasons.


On the contrary, I don’t think there’s anything respectful about assuming that the OP doesn’t have the agency to decide for themselves whether they want to respond to my question or not.

Additionally, I don’t have a lot of respect for anyone with the ego to assume they know what information was withheld “deliberate”ly or not in a discussion like this. How do you know that?! How do you not see that the OP can make this decision for themselves?!


Man, you're pushy. Yes, it was deliberate to exclude that information, and yes they were correct in their assumption.


> On the contrary, I don’t think there’s anything respectful about assuming that the OP doesn’t have the agency to decide for themselves whether they want to respond to my question or not.

If being respectful means anything it means reading their post closely and trying to understand what they were trying to convey. You can't talk about denying someone agency if you won't pay attention to what they're telling you.

> How do you know that?! How do you not see that the OP can make this decision for themselves?!

They did make that decision for themselves! It was clear from their post!


It is personal information that risks identifying them beyond what they had already revealed at the time of posting. It took about two seconds to put everything together. I don't have a dog in this fight politically one way or the other; people don't need to identify themselves IRL here.


Who are you to decide what others are comfortable sharing on here? It is quite literally as simple as the person I replied to choosing not to reply to my comment. Why is this issue a concern to you?

> I don't have a dog in this fight politically one way or the other

Neither do I.

> people don't need to identify themselves IRL here

I don't think they do either. Why are you assuming I "needed" this information?


Can you not? They're right.


Can I not what?

Why won’t either of you respond to my core argument: GGP does not need to respond to my comment if they’re not comfortable.

Me asking the question is not me demanding a response.


Lol


Jeez, that's a rough spot to be in. Did you stick around to fix it, or just get the hell out of Dodge after that?


I did what I could with a handful of Selenium scripts, then hit a roadblock because we didn't have SSH access to a chunk of the autoscaling hosts. Gave up after that, told the customer rep to tell them we couldn't do it, and gave my two weeks' notice about a month later.


Ouch, that has to be rough to endure. I'm glad you seem to be in a better place now. Good on you for doing the right thing and getting the hell out of there when your options ran out.


are "security daemons" truly necessary though?

this whole thing sounds like a troll with enough convincing language to seem plausible


We could debate security daemons until our minds bleed, but.. man, I wish that all didn't happen.


Without the security daemons you risk a flux capacitor overload and that leads to it being exploitable via pointer wraparound.


in that case I'll throw on my wraparound shades..

and deal with it


Huh? Are you assuming that the parent comment is about someone programming a medical device?


I think it's a meta joke - Therac-25 (https://en.m.wikipedia.org/wiki/Therac-25) was a radiotherapy machine from the early 1980s and is (in)famous for software failures that injured or killed several patients. It's become a well-known case study, but it's highly unlikely anyone on HN worked on it - I think that's the joke.


Related:

Normalization of Deviance (2015) - https://news.ycombinator.com/item?id=22144330 - Jan 2020 (43 comments)

Normalization of deviance in software: broken practices become standard (2015) - https://news.ycombinator.com/item?id=15835870 - Dec 2017 (27 comments)

How Completely Messed Up Practices Become Normal - https://news.ycombinator.com/item?id=10811822 - Dec 2015 (252 comments)

What We Can Learn From Aviation, Civil Engineering, Other Safety-critical Fields - https://news.ycombinator.com/item?id=10806063 - Dec 2015 (3 comments)


Extrapolating from these related threads, this article should have 2K+ comments the next year.


I like the bit about

    Let's look at how the first one of these, “pay attention to weak signals”,
    interacts with a single example, the “WTF WTF WTF” a new person gives off
    when they join the company.
and kinda wonder if a company that prioritized not getting this reaction from new hires might find it's the most impactful thing they can do in terms of culture.


Just to give a different, concrete, perspective (and push a hot button HN issue), I've spent a fair amount of time working on extremely large web applications, and by far the #1 "WTF WTF WTF" thing that new hires say is "what do you mean you aren't using $TODAYS_HOT_JS_FRAMEWORK??"

Once you get away from "should we use version control" and into actually difficult software engineering questions, it's not clear how to balance a fresh perspective vs. an experienced (normalized? tainted?) view. I wish the article went into this more.

Like, how does the new hire (or anyone else) know the difference between "learning the complexity of the new system" and "internalizing/normalizing the deviance of this culture"?


> Like, how does the new hire (or anyone else) know the difference between "learning the complexity of the new system" and "internalizing/normalizing the deviance of this culture"?

If a new hire can't check out, build, and test the software on the first day, then there is likely something wrong with either the hire or the infrastructure. A sufficiently old and arcane software system might take weeks before a new hire can make even a simple change, but that shouldn't impact those three items.


Along with this "how long to spin up the new hire" issue, one of my first (if not the first) questions when trying to help people improve their processes related to software is:

> If the user/client asks you to make a small but not trivial change, how long would it take to update and deploy the program?

I have had answers ranging from "A couple hours" to "A year" (yes, they were serious). Most were in the 1-3 month range, though, which is pretty bad for a small change. It also makes it apparent why a bunch of changes get batched together whether reasonable or not. If a single small change, single large change, or collection of variable sized changes all take a few months to happen, might as well batch them all up. It becomes the norm for the team. "Of course it takes 3 months to change the order of items in a menu. Why would it ever be faster than that?"


Not sure how the speed of a change is related to batches. The “batch” is about “due this week”; there was never a single item on that list. “Speed” is about “due which week”, and that depends mostly on the priority of the change, not on how easy it is.

Upd. And “change menu item order, fast” is a sign of a problem. We found a Mac Cube in a ski vacation rental home once. It ran Mac OS X 10.2 or something. All the menu items were in the places we expected them to be! You think carefully first, then you implement the menu item order. Top-left Apple menu -> About This Mac. We managed to break their network config in like 5 minutes!


By running the exercise with a small change, the constraints and behavior of the rest of the process get emphasized. As an example, in one team the test process was entirely manual and took a month. They ran that entire test suite for every release, whether the release should have affected the requirements being tested or not. Why? Mostly because they didn't know which changes in the release would affect which requirements, but that was another problem. This did encourage larger batch sizes, though, because if you have that large cost in your process and the cost is fixed regardless of batch size (100 changes or 1, you spend a month testing), you might as well batch more changes into the release. Having more releases means you incur this large fixed cost more often, which overall reduces your throughput.
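
To make the incentive concrete, here's a toy calculation (illustrative numbers of my own, not the team's actual figures):

    # Toy model: each release incurs a fixed 20 person-day manual test
    # pass, no matter how many changes it carries.
    FIXED_TEST_COST = 20  # person-days of manual testing per release

    for changes in (1, 10, 100):
        per_change = FIXED_TEST_COST / changes
        print(f"{changes:>3} changes/release -> {per_change:.1f} test days per change")

    # 1 change per release costs 20.0 test days per change; 10 changes
    # cost 2.0 each; 100 cost 0.2. The fixed cost amortizes over the
    # batch, so bigger batches look cheaper and the team drifts toward them.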

And either I don't understand the update to your comment, or you don't understand the point of that example from mine. It was illustrating the submission topic: normalization of deviance. Sure, you should think about where things should be, but if a customer comes in and says, "Swap these two items", and you can't provide a working version with that single change for months, then things have gone off the rails somewhere. I put it in quotes to reflect a statement like what I have heard from the teams I worked with. To them a long effort for a trivial change is normal, when it should be considered deviance.

EDIT: effect->affect. Always trips me up.


> As an example, in one team their test process was entirely manual and took a month.

Ouch!


This is true, but there are cases where "months" makes sense. E.g. avionics, industrial, medical.


That can extend the time, yes. I had a caveat about that but apparently edited it out before submitting. But that only explains or justifies the delays if there is value added. I work in aerospace, and these were avionics and related systems. The bad ones did not have quality as a reason, though the good ones did. The bad processes were swamped with manual testing (which was non-comprehensive and error-prone) or really tedious CM-related activities (which were manual and error-prone).

You can do a lot with good test automation, even in avionics. That cuts down a ton of the time and usually improves quality.

I'll also note, don't take my "deployed" too literally. I used that term because so many people here are working on server-based applications where that makes sense. Think "out the door". The exercise can only go as far as the team/org's reach. Deployed for avionics would mean more like, "At the flight test team". After that, it's up to someone else to schedule it and get it returned with issues or fielded.

Going beyond the team's reach without including those people (and thus making them part of the team, after a fashion) is guesswork and opens up the blame-game. "It's all flight test's fault it takes a year to get out to the customer." Well, it takes you 9 months to get it to flight test and them 3 months to get done. So why does it take you 9 months? If you have a good reason (complex system, lots to test) then that's valid. If it's a simpler system, 9 months to get it to flight test is probably not justifiable.


I totally agree that "process" on its own doesn't justify long turnarounds. If you had one part of your code that was taking 2/3 of the run-time, it should get plenty of scrutiny, and the same is true of processes.


For us the setup is:

- Install docker.
- Set up GitHub SSH credentials.
- Pull the main repo.
- Run a script that will pull down related repos, install dependencies, start up a bunch of docker containers, and then run health checks on the app.
- Setting up interactive debugging takes a bit longer, but not too much more.
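
For what it's worth, a minimal sketch of what that bootstrap script might look like (the org/repo names, port, and health-check path are made up for illustration):

    # bootstrap.py -- hypothetical dev-environment setup script.
    # Assumes git, docker compose, and working GitHub SSH credentials.
    import subprocess
    import urllib.request

    REPOS = [
        "git@github.com:example-org/main-app.git",     # hypothetical repo
        "git@github.com:example-org/shared-libs.git",  # hypothetical repo
    ]

    # Clone the main repo and its related repos.
    for repo in REPOS:
        subprocess.run(["git", "clone", repo], check=True)

    # Start the app and its dependencies in containers.
    subprocess.run(["docker", "compose", "up", "-d"], cwd="main-app", check=True)

    # Health check: fail loudly if the app isn't answering.
    with urllib.request.urlopen("http://localhost:8080/health", timeout=60) as resp:
        assert resp.status == 200, "app failed its health check"
    print("dev environment is up")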

Unfortunately I’ve routinely dealt with our IT department being slow to give credentials to new employees, or shipping them under-provisioned or just incompatible systems. No, you can’t give our new senior developer the same cheap crap laptop, running an ancient version of Windows, that you send to the junior marketing person doing cold calls all day.


> by far the #1 "WTF WTF WTF" thing that new hires say is "what do you mean you aren't using $TODAYS_HOT_JS_FRAMEWORK??"

That speaks to the caliber of programmers hired. If all they have seen is $TODAYS_HOT_JS_FRAMEWORK and they have written nothing but a web app using $TODAYS_HOT_JS_FRAMEWORK, they might not grasp the fundamentals that would make them realize that frameworks are just abstractions (and not that different from one another).

I don't think any software engineer would even ask that question, since the answer will almost always be "$TODAYS_HOT_JS_FRAMEWORK didn't exist when the project started, and it's not worth a re-write to port it over".

Now, that brings out a second important truth: a company can't attract and retain a wide range of different caliber employees. For instance, if a place still questions the usefulness of source control (perhaps because they consider git to be too complicated) there's no way they'll attract and retain top performers. So the culture will select people that agree that source control is a waste of time.


I do not understand how teams work without source control. I don't mean that (only) in the "WTF are you doing, I don't understand why you would do that" colloquial sense. I literally cannot conceive how they work.

Do people just... change files and then email the whole file to the other developers and hope nobody else was working on that file? Do they at least have patches?


First professional software development I did, we didn't even have hard drives or networking. Luckily I was the only developer and it wasn't a huge problem.

First place I worked with other people, we at least had hard drives. I don't think we had networking on the machines or version control. For sure there were only one or two machines in the office that could reach the Internet. Mostly only one person could work on a file at a time.

When we did get more employees, a LAN, and version control a few years on, the mid-1990s Microsoft version control software was such a piece of junk it mostly amounted to a formal digital system specifying who the one person who could work on a given file was...


I've seen people exchanging thumb drives.

No concept of a patch. They spent most of the afternoon and evening "performing the manual merge and stabilizing the release", meaning rebuilding and deleting lines until it compiled.

I wish I was kidding.


Email would actually be better than what I have seen, because it gives you a pseudo-version-control system, patches or no:

Shared drives and folders with concurrent edits. Sometimes they'd separate them into "mine/2023.02.14" and "yours/2022.12.10", but that wasn't much better. Actually, because people don't seem to grok lexicographical order, or how to write dates at all, the dates are normally 10-12-2022 and 2-14-2023; guess whether the first one was from October or December.
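
(This, incidentally, is the case for ISO 8601: YYYY-MM-DD dates sort chronologically as plain strings, while ad-hoc M-D-YYYY names don't. A quick illustration in Python:)

    adhoc = ["10-12-2022", "2-14-2023", "9-1-2022"]
    iso = ["2022-10-12", "2023-02-14", "2022-09-01"]

    print(sorted(adhoc))  # ['10-12-2022', '2-14-2023', '9-1-2022'] -- Sep 2022 sorts last
    print(sorted(iso))    # ['2022-09-01', '2022-10-12', '2023-02-14'] -- correct order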


There are less complicated VCSes than git, so this sounds like a cop-out.


> it's not clear how to balance a fresh perspective vs. an experienced (normalized? tainted?) view.

Having a healthy, balanced view comes from enough experiences. Ideally from working in a bunch of places and seeing enough things go sideways, and correctly understanding and identifying the causal chain that led to failures.

Funnily enough it's sort of like training an AI - you essentially need a lot of correctly labelled data to learn. Junior engineers don't have enough data points, and unfortunately some "senior engineers" I've worked with took (in my opinion) the wrong lessons from their experiences. (E.g. the CTO who thinks version control is too complex.)

The interesting cases are when smart, experienced people disagree on what the best solution is. Should you keep your team small and smart or have a varied team with more mentorship and process? Is code review worth it in every case? What is the right amount of tests for your software? How often do we want to push to production?

When I was teaching programming my students would sometimes ask juicy questions. My favorites were the questions I could answer with "I'll tell you my opinion, but I've worked with people I look up to who think I'm wrong about this..."


If "what do you mean you aren't using $TODAYS_HOT_JS_FRAMEWORK" is the first question they ask, then you can end their employment right then and there. Hire people whose "wtf" you take seriously.


Both parties would feel like they're dodging a bullet.

People still griping about $TODAYS_HOT_JS_FRAMEWORK are clearly out of touch in 2023. It was funny commentary in 2013. It still rang a little true until around 2016. Now it's just an indicator you are the one not to be taken seriously.

It's React, Vue, or Angular and it's been that way for many years now.


React underlies a lot of the “new hotness” like Next.js and Remix.run (I’m not sure if Vue, Angular, or Svelte have equivalents but I wouldn’t be surprised).


I still take those newer frameworks with a big grain of salt, but tbh, that's because of ignorance on the one hand, and fear of sunk cost on the other (e.g. spending time to invest in acquiring knowledge only for it to go obsolete).

Angular is still used in a surprising number of large companies my employer works for, so that, and React, and Vue are all solid investments IMO.

The other ones I think will only be used if one developer dares to take and sell the risk.


There's a whole section about how to balance a fresh perspective, in "solutions". The best way is for an effective VP to hear the WTFs of new hires, apply engineering judgement, and make changes based on that signal. If management is not the ones creating the deviance, they ought to be able to tell what reactions are just unfamiliarity with the system and what are signs of something actually being broken. The article is arguing that most people ignore those weak signals by default, and ought to pay more attention to them, not that they're always reliable.


Yes. From the article:

> “The thing that's really insidious here is that [once a person buys] into the WTF idea… they can spread it elsewhere for the duration of their career… Once people get convinced that some deviation is normal, they often get really invested in the idea.”

> [H]ow does the new hire (or anyone else) know the difference between "learning the complexity of the new system" and "internalizing/normalizing the deviance of this culture"?

The article implies that the new hire should pay close attention to the things that are incentivized, and those that are not.


That's not a "WTF". All front end developers trash their predecessors work and rewrite in the last framework. That just what they do.


In my experience, you cannot change an organization's culture with rules, mission statements, listing values, or giving speeches. The only way is to take down the old culture bearers. Some of them may be managers, but more often they are employees who have gained some organizational power. You'll often find them at the center of sticky organizational spider webs, controlling approval processes such as purchasing, or acting as service administrators.

To change the culture, these people have to go. Firing them may not be feasible, but there are other options. Dethroning them in the form of a promotion or even just physically moving them can be effective. When people don't have to jump through their hoops anymore, they lose their organizational power.


> The only way is to take down the old culture bearers. ... To change the culture, these people have to go.

I was briefly head of engineering at a company that had several "old culture bearers" that made change impossible. I was something like the 3rd or 4th engineering leader over the space of a year. Apparently the person after me was actually allowed to fire a few of these people and was able to turn things around.


More than once I had to leave in order for an employer to take action against a problem employee, including my boss once. In these cases the employer suddenly realized that it made no sense to attempt to replace me without removing the reason that drove me out.


That is, unfortunately, very common. Though sometimes the toxic people drive the company into the ground first.


Been told that before. When I spoke up, I was told that I was new and shouldn't talk about things I know nothing about.


There's often a wise tradeoff between criticizing systems you've just seen after being at the company 5 minutes and actually spending some time at the company to learn the historical context of why the thing you think is insane/shit is insane/shit before telling everyone who built it how insane/shit it is.

People generally don't wake up in the morning and go into work motivated to make insane/shit things - context, tech debt and business realities all mount up and even the best of us can end up making choices that in isolation look crazy.

There are of course companies who are really bad and you may well be right, but so many times I have seen in my career a young new hire storm in and think everything is shit without paying heed to the context and historical pressures. The best thing you can do in many cases is spend the first ~six months at a new tech company trying to understand that context, and indeed I think more mature engineers generally do.


> There's often a wise tradeoff between criticizing systems you've just seen after being at the company 5 minutes and waiting 6 months to understand the context.

I know this is reasonable advice, but it makes me deeply cynical. After 6 months I will have learned to live in the shit (to use your term), and so it still seems like I have nothing to gain by speaking up or trying to fix things. A culture that accepts shitty code probably isn't super demanding for an experienced developer who is accustomed to the mess, so I'll just coast through my time and hop jobs after a few years.

If nobody wants to respectfully talk about my criticisms on day one, then they won't really want to at 6 months either. In the end I'm led to believe I should have zero concern for code quality and only worry about my personal reputation.


This is exactly where I am at right now. I have a million dollar mortgage and interest rates are going up. I'm keeping my head down and perpetuating technical debt until I can hop jobs for a higher salary.

Criticising the status quo is not a winning move for me, especially when it's led to the company's engineering team tripling in size. If I'm asked, I'll pick some low-hanging fruit: remove reliance on legacy/redundant JavaScript libraries such as jQuery, and spend time writing better unit tests. But so far I haven't been asked.


You (in the fashion of the article) missed the point I made: I was explicitly asked to speak up as the new employee, and when I did I was told to stop speaking up. When I brought it up in the meeting where I gave my two weeks' notice, he admitted to saying that and apologized.


That's some bullshit. You did the right thing. When a new employee joins and notices that something sucks, the answer should be something like one of these:

We know, but haven't had time to fix it, maybe we'll assign that to you when you're caught up.

We didn't think of it that way, good catch, lets go into detail later.

Yeah, but doing it this way makes this other thing easier, we'll show you that when you're ready.

or even: I don't know, my brain is fried with this project, can you ask again in a few months?


I get what you're saying, but this is exactly how deviances are normalized. When you've been with the company for a long time and are familiar with the history, it's easy to rationalize why things are the way they are and that they can't be improved. You can explain something that's crazy with context and historical pressure.

Dan's point is that sometimes the new person's judgement is correct, and there actually is a real problem that's invisible to people who have been with the project a long time. But the new person's judgment is basically always ignored, and that's a mistake - it ought to be weighted heavily because they legitimately have a perspective that insiders no longer have.

If instead you spend six months trying to understand the context:

"new person joins

new person: WTF WTF WTF WTF WTF

old hands: yeah we know we're concerned about it

new person: WTF WTF wTF wtf wtf w...

new person gets used to it

new person #2 joins

new person #2: WTF WTF WTF WTF

new person: yeah we know. we're concerned about it."

I'm sympathetic because my first company was a mid-stage startup with huge structural problems in the engineering org structure and processes. When I joined I had frequent "WTF" moments and had a similar experience where experienced people would explain to me why things are the way they are. So I trusted them, and put my head down, but eventually got frustrated and left. A few months later the company went bankrupt because they couldn't build product fast enough, investors lost patience, and they couldn't raise another round.


> the new person's judgment is basically always ignored, and that's a mistake

Remember, the new person has something that nobody else on the team can ever learn, no matter how much they study or how long they work. The new person has a fresh perspective.


"Making good decisions, therefore, requires understanding past decisions. Without knowing how things came to be, it’s easy to make things worse."

https://thoughtbot.com/blog/chestertons-fence


I was once told this after raising the issue that "hey, maybe an API that responds with root ssh passwords is a bad idea, and our clients are going to be pissed once they find out." And… I was right.

So often, citing Chesterton's fence is significantly more naive than what it attempts to criticize.


How is being right about wanting to make a change a refutation of the idea that knowing why people made a decision in the past is an excellent idea? Nothing about Chesterton's Fence asserts that people in the past always made good decisions, nor does it deny that what was a good decision then may be a bad decision today.

It simply asserts that understanding why a thing is the way it is is valuable when making a decision to change it.

That understanding could be as simple as--to take a real world example that most readers here will remember--"They chose to install a hidden web server on the user's system, because they felt it was the best way to deliver user convenience given the resources and time the team had available."

We can still say it was a bone-headed choice to do that because it opened a massive back door to every user's system. And? What is the problem with looking into why they made that choice before arguing that the choice should be reversed with maximum prejudice?

Chesterton's Fence isn't a suggestion that no changes should be proposed, or that if you look into the original motivations you will change your proposal. Think of it as insurance against the possibility that every once in a while, you will discover a requirement that needs to be addressed with your suggested change.

I don't see where you're coming from that quoting Chesterton's Fence is even "criticism." It's a suggestion to take out a little insurance by doing a little homework.


> Nothing about Chesterton's Fence asserts that people in the past always made good decisions, nor does it assert that perhaps what was a good decision then is a bad decision today.

> It simply asserts that understanding why a thing is the way it is is valuable when making a decision to change it.

The second assertion is implicitly an assertion that decisions made in the past are, if not always good, at least good enough often enough to be worth understanding. In my experience that's not true; most of the time it's just something someone did without really thinking about it.


Let's distinguish two different assertions:

1. The decision in the past was sensible at the time given what the people making that decision knew/believed/were incentivized to optimize for, versus;

2. It's worth knowing what was on their mind when they made the decision.

I think the two are independent. It could be that there is no good reason for a choice people made, but it's still helpful to look into whether they had a reason, and not just assume there was no good reason without looking into it. I personally think assuming there's no good reason for a decision without looking into it is "picking up nickels off of railroad tracks."

You save a little time if you don't try to find out whether there was a reason, and most of the time your hunch that there was no good reason will be correct. And some of the time, if there was a good reason, it no longer applies, so you are saving time not looking into that reason.

But once in a while, there was a good reason and it reflects some constraint or requirement that is still relevant. It doesn't mean you can't change the thing, but it does mean that you should address the constraint or requirement as part of your proposed change.

If you never look into the reason, once in a while you will miss something. Another commenter suggested "move fast and break things," i.e. make the change and if something breaks, fix it then. That's a strategy too, but some things don't work that way. For example, some code might fix a bug that applies to one valuable customer, and if you change the code without knowing about the bug fix, you will find out about it via an irate customer.

In some cases, the cost of an irate customer once in a while is much bigger than all the time saved not looking into things. Or maybe it's a security thing, in which case one vulnerability might be extremely expensive to deal with.

I agree with you that not all decisions made in the past are worth taking into account when making changes, but in my n=1 experience looking into things is cheap insurance against the times when there is a hidden requirement or constraint that has material impact on your business. And when I frame it in my mind as insurance, I don't mind looking into 99 things that turn out to be immaterial: The 1 time it is material makes all 100 investigations worthwhile for me.


It seems more strange to conclude that the person with the new idea to tear down the fence is so frequently a better thinker than not only the original builders of the fence but also the other people who came along and didn’t tear down that useless fence.

Yes, that is sometimes the case, but not often enough in my estimation to skip the step of trying to understand why that fence is there.


  We can still say it was a bone-headed choice to do that because it opened a massive back door to every user's system. And? What is the problem with looking into why they made that choice before arguing that the choice should be reversed with maximum prejudice?
Because I was suggesting a different method for the specific task at hand.

  I don't see where you're coming from that quoting Chesterton's Fence is even "criticism." It's a suggestion to take out a little insurance by doing a little homework. 
Every time I've heard someone quote Chesterton's Fence, it's always been as a means to halt the conversation. Essentially, "shut up" -- an indirect critique of critique itself, in dismissive form. There's possibly some meta point here about you not knowing the full circumstances of the situation well enough to warrant bringing up Chesterton's Fence.


> Every time I've heard someone quote Chesterton's Fence, it's always been as a means to halt the conversation.

From here forward, you can say that almost every time you've heard someone quote Chesterton's Fence, it's been as a means to halt the conversation.

Today, you've encountered a counter-example, and a very firm counter-example, at that. To my mind, Chesterton's Fence is explicitly NOT about shutting down a conversation. It's an invitation to continue the conversation with more information to validate your suggested course of action.

No different than if an engineer suggests, "We should rewrite this code to be faster." What team lead or product manager wouldn't ask, "Is this a bottleneck? Have you profiled it? Do we know there are users impacted by this code's performance?"

Or if someone suggests building a bespoke feature flag service. "Have you done a build vs. buy analysis? What alternatives have you considered before choosing this design? Are there any OSS solutions that are close enough to our requirements?"

These kinds of responses shouldn't be uttered as a way of shutting down a conversation. If that's someone's intent, they are abusing their privilege.

The right way to use any of these patterns is to say them in good faith, and then socialize amongst the team the standard of preparation the team expects of someone proposing a non-trivial change.

Over time, the need to say such things decreases because the team internalizes what preparation/rigor/justification is needed for proposing changes, and does the work ahead of suggesting changes.

Whereas, if the tone and intent are to block change, the team goes down a toxic path where people are discouraged from suggesting improvements of any kind. If that's what you've encountered, you have my sympathy, and I can completely understand why you might be wary of people quoting Chesterton's Fence.


  These kinds of responses shouldn't be uttered as a way of shutting down a conversation. If that's someone's intent, they are abusing their privilege. 
To be perfectly honest, your quote came across in that spirit. Anywho, peace!


This is very good feedback, thank you.


I agree. Chesterton's fence doesn't mean that if you don't know why the fence is there, don't ever move it, under any conditions. It means try to find out why it's there before moving it.

In many cases in my career, I've seen code that doesn't make sense or seems like a bad idea. The person who could explain why it's there has long left the company. Am I afraid to touch it, leaving the screwy stuff in place while citing Chesterton's fence? Hell no. I'll change it to do the right thing. This results in either exposing the reason why it's there, or showing that it really was unnecessary/bad. If something breaks from the change, then it's good that I can finally document what wasn't documented before. So either way it's a win.


I readily accept that this strategy has worked for you. In my particular case, I work with customers and software where making a change, pushing it to production, and finding out that I missed a key requirement when it breaks is sometimes unacceptably consequential.

But that may not be true for everyone. If making changes and seeing what does or doesn't break is a successful strategy for you, go for it.


I can think of a dozen reasons why such an API might exist. Chesterton's fence doesn't say "don't make changes" it says "If you can't think of why something might exist, you are not qualified to say that it shouldn't exist."


I never said I didn't understand why the API existed.


Then Chesterton's fence doesn't apply. People wrongly using arguments to enforce blind conservatism doesn't make the underlying arguments wrong...


Then we agree!!


The problem is that the historical context of a decision often becomes a defence of the status quo, even when most people understand it is bad.


The endpoint isn't (1) "reacting to the reaction" but (2) not having that reaction in the first place. If (1) is important, it is because that is the path to (2).

I'd say a company that has accomplished (2) has cut the workload in hiring employees by 30-50%, in the sense that every employee who has reaction (1), either internally or externally, is at risk of becoming disengaged or leaving soon. Not only that, but you are probably wasting your devs' time and could get dramatically more productivity out of them if you aren't WTFing them to death.


To be fair, you really shouldn't. You know nothing of the constraints that people are operating under, or the political or cultural landscape you're dealing with, so you just come off like a preachy academic.


See the other comment I made: I was explicitly asked to speak up.


If that's the case, then yeah fair enough. If someone doesn't want your opinion they shouldn't ask for it.


That doesn't seem like a healthy standard, because it grounds decision-making in appearances rather than principles and prudential judgements. Certainly, such feedback or opinions can be worth considering as a way of getting at what principles are being violated and deciding whether these violations are tolerable or what ought to be done about them. A fresh pair of eyes could help. But an untrained pair of eyes might also not be qualified to discern the right course of action.


Yes and no. But 2/3 of the time the problem is that the company has no documented build process, or something obvious like that. They have time to spend 2 years failing to deliver a product because they don't know how to build it, but when somebody asks "How do we build it?" the answer is "Don't waste our time asking stupid questions." Of course they have been wasting time not knowing how to build the system; the guy who started the project might be able to hit F5 in 15 different windows and get it to sorta kinda work, but the new hires brought on to work on it are quitting right away, and somehow they can never get it into production.

There should be no controversy at all that complete instructions for installing everything required for a dev to build the project and work on it should exist, and that it should be possible to complete this task in hours, not the weeks it frequently takes. And, no, "Docker" is not an answer to this any more than "The F5 Key is a Build Process"

https://blog.codinghorror.com/the-f5-key-is-not-a-build-proc...

It is not "Docker" that solves the problem, it is the discipline of scripting the image build process into a dockerfile. If you know how to write a dockerfile you can write a bash script that runs in 20 seconds as opposed to having Docker spend 20 minutes downloading images and then crash because of a typo.

You are right that a company might have good reasons for doing things in an unobvious way, but most of the time when nobody at a company claims to understand what the company is doing except for the CEO and people aren't too sure about the CEO, it is the fault of the company lacking alignment, not a natural property of freshers.


> It is not "Docker" that solves the problem, it is the discipline of scripting the image build process into a Dockerfile. If you know how to write a Dockerfile, you can write a bash script that runs in 20 seconds, as opposed to having Docker spend 20 minutes downloading images and then crash because of a typo.

The problem is that the bash script may end up depending on poorly understood aspects of the local setup (global config files, installed packages, etc.). It might work fine now, but then nobody runs it for 12 months, there's some churn in personnel, and suddenly people are trying to work out why it crashes. Dockerfiles can avoid some of that, although not always (e.g. the common problem that if you don't pin the versions of packages to be installed, an updated package is released which then breaks the Dockerfile).
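
For illustration, a minimal sketch of that pinning discipline (the base image, file names, and versions here are made up, not from the thread):

    FROM python:3.11-slim
    WORKDIR /app

    # Pin exact versions: an unpinned install can break months later when a new
    # upstream release ships; "==" keeps the image build reproducible.
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt   # e.g. requests==2.31.0

    COPY . .
    CMD ["python", "app.py"]

The same pinning discipline applies to the bash-script version; otherwise the 20-second script inherits the same 12-month rot.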


This is something we actively try to take advantage of at my company. We know that we've grown comfortable with architecture that may not make a lot of intuitive sense, so when we have new people join, we try to make a list of confusing concepts so we can try to clean them up. In the ideal case the new person is able to do the cleanup themselves, so we get a more intuitive design and they learn the surrounding architecture along the way.


The problem is when you join a high-performing team and organization. Is a WTF something they should fix, or do you need to recalibrate what you think is normal?


There's been a lot of work on the reliability of complex systems and how they operate. What has been found is that it is almost always necessary to have failure (degraded operation) modes that prevent total system failure, and the more complex the system and the more hazardous its failures, the more such modes develop.

In these systems, it is found that they are almost always operating in (or transitioning between) failure modes. Often multiple operational failure modes are present simultaneously. It becomes very important to test the system in each of its failure modes and their combinations to maintain high uptime.

https://how.complexsystems.fail/ is an example, but there are many.

Human work, development, and maintenance is itself a system that interacts with these critical systems. Frankly, failure to fail causes failure (thus chaos monkey). The mythical man-month is almost a subcategory of these failures, as are HR hiring processes and other BS. Being too successful and not having competition (or having similarly sclerotic competition) can be as much of a hazard as "move fast, break things".


"When a fail-safe system fails, it fails by failing to fail-safe."


This is a big theme in aviation, because in most cases, for something catastrophic to happen, a lot of things have to have failed. And one way to ensure that enough things fail is to start deviating from your maintenance, inspections, or general responsibilities. Related: the Swiss cheese model.


It's also why things in aviation are so fixed and difficult to change. Not having any new civil aviation planes for 30 years worked... how about 40, 50? When will it break? Well, when someone develops an easy to build, easy to fly, inexpensive experimental craft and zillions of people do it all at once (hasn't happened yet).

One of the more interesting things I've found is that a huge number (easily a majority) of instructors are recently trained flyers, because there is a pipeline to train them and they're cheaper than using experienced pilots (esp for multi-engine and more complex airplanes). They also know all the ins-outs of the training and rule books (with recent changes) so they know how to pass all the tests and how to teach that. Sooooo you have a bunch of inexperienced pilots teaching all the new pilots... there's likely a failure there, but it hasn't reared its head. We still have a lot of ex-military folks around who didn't learn that way.

Who do you want flying when things go bad? People who have spent many hours with things about to go bad (military, emergency/fire, sail plane pilots) who have experience dealing with it. Those people can also be fun/terrifying to fly with, because they will take risks.


> Not having any new civil aviation planes for 30 years worked...

The A220, A350, and A380 are all newer than 30 years old. (A321 is barely younger than that and A330 barely older.) Boeing has released the 777 and 787 in the last 30 years. The Cirrus SR20 and SR22 are newer than that, as is the SF50 jet. The Diamond DA40, DA42, and DA62 are newer. The Honda Jet is newer. Cessna has a handful of business jets newer than that. The Embraer Phenom 100 and 300 are newer than that. There are variants of the CRJ newer than that (-700, -900, -1000).

That's a lot of new civil aviation aircraft designs in the last 30 years.


> They also know all the ins-outs of the training and rule books (with recent changes) so they know how to pass all the tests and how to teach that. Sooooo you have a bunch of inexperienced pilots teaching all the new pilots...

Sounds like there are some similarities with everyone focusing on Leetcode interviews, and then one generation of that filtering and then mentoring the next, and repeat.

The companies don't know what that's costing them, until there's a problem that can't be ignored.

In the case of software engineering (poorly studied, relative to aviation) the company will generally never learn whether a non-Leetcode&promo-focused team could've avoided the problems in the first place, nor whether non-Leetcode experience could've handled a problem that happened anyway.


> Who do you want flying when things go bad? People who have spent many hours with things about to go bad (military, emergency/fire, sail plane pilots) who have experience dealing with it. Those people can also be fun/terrifying to fly with, because they will take risks.

Maybe. Or maybe you're better off with freshly trained people who still remember exactly what to do in all the failure scenarios. Certainly I've generally felt safer with drivers who'd just passed their test than with people who've been driving for years, for example.


Until a quite advanced age, I’d much rather be a passenger in a car driven by a driver with years of experience rather than a fresh license.

The stats seem to bear that out as well.

Commercial study: https://www.fleetowner.com/perspectives/ideaxchange/article/...

Teen study: https://pubmed.ncbi.nlm.nih.gov/12643948/

In aviation, there’s a “killing zone” from 50-350 flight hours (with 40 being the typical legal minimum hours for licensing and 60+ being more typical).


> Those people can also be fun/terrifying to fly with, because they will take risks.

Risk homeostasis in action!

https://soaringeconomist.com/2019/10/30/experience-can-kill-...


A thought experiment.

When is it "Normalization of Deviance"? and when is it a "Efficiency Optimization"?

I mean, the difference is pretty clear after something has failed, But very murky before.


It is Efficiency Optimization when you know why the rule is there and, having estimated the risks, perform a cost-benefit analysis.

aka "Chesterton's Fence"

Otherwise, it's "Normalization of Deviance":

* The build is broken again? Force the submit.

* Test failing? That's a flaky test, push to prod.

* That alert always indicates that vendor X is having trouble, silence it.

Those are deviant behaviours: the system is warning you that something is broken. By accepting that the signal/alert is present but uninformative, we train people to ignore them.

vs...

* The build is always broken - Detect breakage cause and auto rollback, or loosely couple the build so breakages don't propagate.

* Low-value test always failing? Delete it/rewrite it.

* Alert always firing for vendor X? Slice vendor X out of that alert and give them their own threshold.


Unfortunately, I don't find that most software engineers understand the difference between actually determining costs and benefits and choosing to make certain tradeoffs, versus rationalizing whatever choice they already made.


I think that's OK. For me, it's more about "change the system, instead of ignoring it".

Once you change the system (document/rules/alerts/etc), then if it breaks, you change it again and learn the lesson. Both are conscious decisions by the org.


I'm not sure if this link has already been posted but have a look at How I Almost Destroyed a £50 million War Plane and The Normalisation of Deviance.

https://www.fastjetperformance.com/blog/how-i-almost-destroy...


In The Field Guide to Human Error Investigations by Sidney Dekker, he quotes someone else saying something like:

> Everything that can go wrong will go right.

Murphy's Law then manifests eventually: repeated rounds of risk-taking keep escaping disaster because most things play out well anyway, until one day they don't.

I have to laugh at the "append z to the end" strat at Google, though. That's a good one.


Here are a few examples from real-world history that reflect problems discussed in the article:

- The Space Shuttle Challenger disaster in 1986 was caused by the normalization of deviance, where engineers became accustomed to problems with the O-ring seals and began to accept them as normal. This led to the eventual catastrophic failure of the shuttle's launch, killing all seven crew members.

- The 2008 financial crisis was caused in part by a normalization of deviance in the banking industry, where risky and complex financial instruments were routinely used without proper oversight or understanding of the potential risks. This led to a widespread collapse of the financial system and a global economic recession.

- The Volkswagen emissions scandal in 2015 was caused by a normalization of deviance in the automotive industry, where engineers and executives became accustomed to cheating emissions tests and misleading customers about the true environmental impact of their vehicles. This led to significant financial and reputational damage to the company.

- The Theranos scandal in 2018 was caused by a normalization of deviance in the healthcare industry, where the company's leaders became accustomed to misrepresenting the capabilities of their blood testing technology and misleading investors and customers about its accuracy. This led to significant legal and financial repercussions for the company and its executives. (ChatGPT)


Reality: It is true that EVERY organization is broken in some way or another.

You have to find the one that is broken in the way that is tolerable to you.

Arguably the closest thing we know to a panacea in terms of engineering culture and best practices is Google. And what are they now known for? An inability to ship anything meaningful anymore. Spinning around in circles launching and re-launching new chat apps.

These are not unrelated. High engineering standards are always in tension with product delivery. As a security engineer once told me, "the most secure system is the one that never gets launched into production."

So while Dan is right, and all the examples are right, and things like non-broken builds and a fast CI/CD pipeline are totally achievable, don't learn the WRONG lesson from this, which is that when you arrive at a company and notice a bunch of WTFs, the first thing you must do is start fixing them in spite of any old-timers who say "Actually, that's not as bad as it seems". Sometimes they're wrong. USUALLY, they're right.


Great stuff - I think this goes in the "required reading" list.

The tech industry tends to revolve around "I'm a super-rational robotic genius" thinking that can't accept the existence of its own irrational tendencies, to the point that it becomes ridiculous.


The standard "Required reading" text on the subject is "The Field Guide to Understanding 'Human Error'" By Sidney Dekker


Great book! (Although, it could have used a more aggressive editor)

Reading it felt like a personal attack in many places. However, reading it forever changed how I think about things. It's a much more useful framing for everyone involved if you start with the question of "why did they think this was the right thing to do?" as opposed to "this person made a bad choice / mistake". My (extraordinary) impatience naturally predisposes me towards the latter, but the core argument of the book is that that's lazy -- you can hand wave away anything and everything with "operator error".


I welcome others to share stories of the normalization of deviance in their companies.

One company I worked at had no unit tests, no infrastructure as code, and no build server. This held for a while, until enough developers implemented some unit tests, infrastructure as code (e.g. Terraform), and a build server as skunkworks projects. Eventually management tolerated them, but never endorsed them. Some teams at the company still never embraced good practices, because it wasn't forced on them.

I guess I've never worked at a company that valued unit tests across the whole of the engineering team. I introduced them and implemented them on my own team, but others ignored it.


> and no build server

Personal experience is that a build server normalizes deviance. "But it works on the build server," we used to say, as, with time, it became harder and harder to build locally. "Just fix your environment!" we used to say, when it was the build system that was actually at fault. "It's all so fragile, just copy what we've done before!" we then said, repeating the mistakes that made the build system so fragile.

Eventually, the build system moved into a Docker image, where the smells were contained. But I'm still trying to refactor the build system into a portable, modern alternative. If we hadn't had a build server, we'd have fixed these core issues earlier and wouldn't have built on such a bad foundation. Devs should be building systems that work locally: the heterogeneity forces better error handling, the limited resources force designing for better scalability, and most importantly, it prevents "but it works on the build server!".


"But it works on my macbook" is even worse than "but it works on the build server". If you have a build server you're at least forced to make sure it builds in two places (your own machine and the build server) before you merge rather than only one.


> “But it works on the build server,” we used to say, as, with time, it became harder and harder to build locally

This got me puzzled for a couple of minutes. Yeah, that “WTF, WTF” moment. Then I realized that our build “server” comprised 12 different platforms (luckily reduced to just 6 in the later years), so passing a build in production was a bit harder than building locally.


Has Dan Luu ever explained why he doesn't put dates in his blog posts?


Most of his posts seem to be date-independent. To the extent that it matters, you can check the homepage: https://danluu.com/. There you will find month and year of the posts.


If you enjoyed this - I highly recommend watching Adam Curtis' Can't Get You Out of My Head: https://thoughtmaybe.com/cant-get-you-out-of-my-head/


As a new hire there is a line to walk between on one hand, using your outside/fresh perspective to provide valuable insight to the org, and on the other, complaining (or appearing to complain) about decisions where you don’t have full context.

Many of the examples in the OP are probably closer to the former, but my general advice here is to keep lots of notes about what seems broken, and revisit in a month or two. Sometimes you'll have gained context that explains why something is actually sensible. If it still seems crazy with context, you can now bubble up the feedback with confidence, having hopefully also built some respect and trust with the team to make the message land better.


Write total shit for code, then look like a 'genius' for 'fixing' bugs, only to have them come back again in the future (further looking like a clown to the rest of the team)


I'm still reading, but I just got to the part about flaky, and I got annoyed because there are clear use cases for flaky or pytest-retry.

Say you have an integration test that relies on an unreliable system you do not control. Sure, you can mock it out for a unit test, but if you want to make sure you catch breaking API changes, you need to hit the actual system. And if it works after retrying it a few times, then so be it. No need to throw shade.


I'm going to push back and say that test is not a valuable automated test. The phrase "relies on an unreliable system" captures that lack of value.


When the code you're testing is a client for some remote API, and the sandbox/development/testing version of that API doesn't have the same resources and uptime guarantees as production, then what are your options? As far as I can tell, they are:

Don't test it.

Only do unit tests with the connection mocked out.

Test against production.

Try it a few times with a delay, and if it works then you know your code is good and you can move on with your deployment. Which is what flaky and pytest-retry do.

Maybe I'm missing something, but out of those 4 options retrying the test seems like the best one, with the big caveat that it is only viable if the test does indeed work after trying a few times. I really don't see any downside.
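
For what it's worth, a minimal sketch of that fourth option using the flaky package's documented decorator (the endpoint and test are made up for illustration; pytest-retry has an equivalent marker):

    # pip install flaky
    import urllib.request

    from flaky import flaky

    @flaky(max_runs=3, min_passes=1)  # report a pass if any of 3 attempts succeeds
    def test_sandbox_api_health():
        # Hypothetical sandbox endpoint for the remote API under test.
        with urllib.request.urlopen("https://sandbox.example.com/health") as resp:
            assert resp.status == 200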

edit:

Maybe another option is to put the retry functionality directly in the client code, which would make your code more robust overall. But that is definitely more complex than using one of these libraries just for testing.
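
That could look something like this minimal sketch (the helper and its names are hypothetical, not from any particular library):

    import time

    def with_retries(call, attempts=3, delay=1.0):
        """Run call(), retrying on any exception with a fixed delay between tries."""
        for attempt in range(attempts):
            try:
                return call()
            except Exception:
                if attempt == attempts - 1:
                    raise  # out of attempts; surface the real error
                time.sleep(delay)

The upside is that production code gets the same tolerance for transient failures that the test-only retries were papering over; the downside, as noted, is extra complexity in the client.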


You're on the right track. It's a perennial favorite of devs to abhor flakiness, whereas after spending enough time as a tester, you come to terms with the fact that you have to treat your tests as a statistical probe, because most places' test systems are simply not that reliable; sometimes, this is even a design feature.

This experience as a tester is in fact a normalization of deviance from the ideal computation model of a developer. Everything should work the first time, every time, from their point of view. The tester sees reality as it is. The Emperor won't fund my test systems sufficiently to service all my customers, so we make do as best we can. Bonus points in that we get to exercise the edge cases.


I think the issue is people using it when they’re too lazy to fix the test case.


I find it quite interesting that people will prefer a highly malleable language like Python, and then orgs have to adopt testing to get around all the inconsistencies caused by the absent type system. And then people will write libraries to get around the pesky tests to get their flexibility back.

It's fascinating really... Complex systems are always in partial failure mode and that applies to collective optimization challenges. Organizations will always be stuck in local optima in most domains.


Type systems do not replace testing, and if a test works after retrying it then it is probably not something that a type system would be able to catch.


> Type systems do not replace testing

They're a good substitute for many of the use cases of testing.

> if a test works after retrying it then it is probably not something that a type system would be able to catch.

Type systems are pretty good at catching incorrect concurrency logic these days, and getting better all the time.
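
As a minimal sketch of the substitution claim, using Python type hints and an external checker like mypy (find_user is a made-up function):

    from typing import Optional

    def find_user(user_id: int) -> Optional[str]:
        """Return the user's name, or None if no such user exists."""
        users = {42: "alice"}
        return users.get(user_id)

    name = find_user(7)
    # mypy rejects a bare `name.upper()` here at check time, because name may be
    # None; without the annotation, only a test (or production) finds that bug.
    if name is not None:
        print(name.upper())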


Yes, they are not equivalent, but creating languages without a strict type system and then proposing test-driven development is going back and forth in strictness. It is beneficial to have a stricter programming language, but it can be uncomfortable and burdensome. So I find it interesting that we keep going back and forth along this dimension.


formatted for wide monitors https://ddanluu.com/wat


i routinely remind everyone, "we're all mercenaries."

i have marginal control over who i manage. The Product isn't saving the world, but it is allowing us to live reasonably and with a clear soul at the end of the sprint. The reason i say the "mercenary" bit is simple: weigh your dreams against blood and gold, and compromise.


AOPA: Normalization of Deviance in Aviation

https://www.aopa.org/news-and-media/all-news/2015/december/0...


This rang so true for me. I’m constantly rediscovering things that I already understood well at my previous job. Once you get acclimatised to the current system, so many previously obvious learnings fall by the wayside.


"rictus of horror" - What a set of words to describe a response.


I couldn’t finish reading this. Horror film. A screenplay to dystopian dread.


Our society places an enormous value on "competitive obfuscation".

As an obvious result, our society does an incredible amount of work maintaining that obfuscation.

---

I've heard estimates that 20% (1/5th) of all healthcare-related spending in the US is overhead from insurance determinations, paperwork, etc. Relative to the other 80% of healthcare expenditure, that's a 25% surcharge (25/125 = 1/5th), extra spending that does not exist in single-payer healthcare systems, like those used in Canada, Germany, and every other developed nation in the world.

What do we get from that extra spending? What substantive difference does that obfuscation provide?

The main difference I see is "explicit opportunity cost". Instead of deciding ahead of time that we will pay for any arbitrary healthcare need (as a single-payer program), the opportunity for each individual healthcare act is given a price, and groups of priced opportunities are provided by subscription-based insurance plans.

Every person has to find, apply for, and pay for an insurance plan that will meet their current and future healthcare needs.

Because that is explicit, there is leverage available to manipulate each opportunity cost, and even whether each person is offered that opportunity at all.

So what does that leverage even look like, and who is using it, and for what purpose?

Politics. Instead of care being determined by your doctor, access to each type of care is explicitly made available (or unavailable) by your insurance plan. That's a huge attack surface for political motivation.

There is currently a dextroamphetamine (Adderall) shortage in the US. The other day, I went to my pharmacy to pick up my prescription for 30 generic Concerta (methylphenidate extended release, another stimulant medication used for ADHD), and learned that all they had left were 16 brand-name Concerta. I was lucky enough to have that covered by my insurance. Many different insurance plans would not have provided me that opportunity.

Why is there a shortage? Despite a significant increase in ADHD diagnosis last year, the DEA refused to raise the limit of Adderall that can be legally manufactured. Why? Because there is a long-standing political conflict between stimulant addiction prevention and ADHD treatment, and the DEA is positioned at one side of it.

That same political conflict is why some insurance companies have outright refused to include coverage for stimulant medications. Even without a nationwide shortage, some people have found themselves stuck in a position where the opportunity for medication is held just out of reach by the political decision of their insurance company, or the lack of access to insurance at all.

The same pattern can be found with practically every type of medical care that is politically controversial: contraceptives, abortions, hormones, etc. Even if you can't get a legislative ban, there is still leverage available to obfuscate opportunity itself.

When conservative politicians argue that a single-payer program would be "too socialist for America", the substantive difference they intend to preserve is the political leverage that is baked into the system we have; the political leverage that allows politics to restrict our medical care without a single vote.

---

That's just one example. This pattern is everywhere. The only answer is social objectivity. It's a hard problem.


This guy needs to organize & format his writing better, since he does have really interesting things to say.


"Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting."

https://news.ycombinator.com/newsguidelines.html


If you don't have a reader view in your browser, just paste this into your global CSS:

    p {
      line-height: 1.7;
      max-width: 60em;
      font-size: 1.2em;
      margin-left: 5em;
    }

It pretty much fixes the default readability, which is essentially zero on this site otherwise.


I don't like fucking with the default CSS; my alternative is this bookmarklet that injects JavaScript into the page and temporarily fixes the formatting at one click of a button

    javascript:(function(){ var bod = document.getElementsByTagName("body")[0]; bod.style.margin = "40px auto"; bod.style.maxWidth = "40vw"; bod.style.lineHeight = "1.6"; bod.style.fontSize = "18px"; bod.style.color = "#444"; bod.style.padding = "0 10px"; })();
And for a dark mode:

    javascript:(function(){ var body = document.getElementsByTagName("body"); var html = document.getElementsByTagName("html"); var img = document.getElementsByTagName("img"); body[0].style.background = "#131313"; body[0].style.opacity = "1.0"; html[0].style.filter = "brightness(115%) contrast(95%) invert(1) hue-rotate(180deg)"; for (var i = 0; i < img.length; i++) { img[i].style.filter = "contrast(95%) invert(1)"; } })();


Dan Luu talks a bit about this here: https://twitter.com/danluu/status/1115707741102727168


One of the more hilarious takes I've seen. "There are no papers for this, and I choose to disregard the countless number of people who say it is much easier for them to read if the line lengths are constrained as they are in a book or scroll or every other form of human writing ever put on this earth, so I will not make my site easier to read. F you."


I really don't think that's what he's saying. You are assuming a great deal of malice, rather than positive intent. What he's saying is that there isn't hard evidence that shorter lines are more readable, so he made the style choice of longer lines. You're claiming that most people prefer shorter line widths, but again present no evidence that most people actually have that preference, other than vague references to "countless people". I actually think you're probably right, and if you had data Dan might update his stylesheet. But in the absence of evidence, you're just presenting your opinion as fact, and assuming malice.


This is why it's nice to get people's preferences vs thoughts about preferences. Preferences aggregate, thoughts about preferences do not. If one designer says "Readers like narrow columns more, everyone knows that." and another says "I like reading narrow columns.", I'd give the person speaking about readers in general more weight (even with them going the same direction). But, if 100 designers spoke for all readers and 100 designers spoke for themselves, I'm giving more weight to the second group. Hearing 100 preferences is more valuable than hearing one idea, 100 times.


Yeah, that's probably true. He also at least allows people to set their own reading width by adjusting the browser window.

My frustration stems from the fact that I find the argument "there are no papers with sufficient evidence" to be pedantic bullshit. Like yeah, sure, you aren't even wrong, but absence of evidence is not evidence of absence. I've never seen anyone claim to like 180 char lines, whereas I've seen hordes of people who say it is very difficult for them to read that line length, and prefer something book-sized (lengthed?).


Hmm, yeah I actually agree with that way of putting it. The evidence that does exist are the anecdotes, and he seems to ignore that evidence. Many of his readers, myself included, seem to prefer fixed line lengths. So it is a weird choice.

I was mostly reacting to the assumed malice in the parent comment. Based on his blogging style, I think it's more likely that Dan's just a pedantic guy implementing his personal preferences on his personal blog :)


"Not even wrong" FTW. There's even a blog with that title:

https://www.math.columbia.edu/~woit/wordpress/


I prefer longer lines, and hate sites that force a narrow line length.


Buried in the replies is this one: https://twitter.com/jmq_en/status/1255994757773324289

Which is basically my understanding of "too long" lines: The problem doesn't have to do with the length of the line itself / reading one line (which is what most people seem to focus on), it has to do with reliably returning to the beginning of the next line instead of accidentally drifting up or down.

So it wouldn't be much of a problem if there were other visual indicators (code lines have unique shapes instead of being a big block of text, and paragraphs with a blank line between them let you see more easily "I'm going from line 2 of 4 to line 3 of 4", so you don't actually have to track the line sideways). It's tracking the line back to its beginning in a big block of non-code text that's difficult.


Great thread!

Designer says: my opinion is right because A. Dan shows: A is not factual. Designer says: my opinion is right because of B.

Dan notices behaviours, and then he writes compellingly about those behaviours.


yes, if you ignore a millennium's worth of publishing wisdom and think the world of text began anew in 1995, then maybe you can claim that "the science is not settled."

He has every right to make his choice. As do we, in deciding whether to read it.


Yuck! I hate it when sites monkey around with max-width. I've got a nice 27 inch monitor. I want to use all of it. It's refreshing to see a site that doesn't insist on second-guessing the width that I set my browser window.


So you look like you're watching a tennis game when you read text? Funny.

On a more serious note: a maximum length for text makes sense ergonomically, which is why especially big print formats like newspapers or magazines work with columns. Columns, however, haven't really caught on on the web, because they don't combine very well with the whole scrolling thing.


Too hard to read. See newspapers. Multiple windows are a thing.


So, leave the choice to the user. If I want to read text in a small column, I can easily resize my window. Don't try to force one way.


No one reads well with 200 characters per line. It's been demonstrated that people read a lot slower, even if line height has been increased to compensate (which it probably hasn't been). I'd recommend multiple columns before super-wide text blocks.


Firefox has the "reader view" option toggleable with F9 for when you stumble upon unreadable designs, if you want


It's F9 on Windows, command-option-r on macOS, and ctrl-alt-r elsewhere.

https://support.mozilla.org/en-US/kb/firefox-reader-view-clu...


This is one of the few sites I find immediately and highly readable (on mobile). Does it not wrap lines on larger displays?


that site is better than most, imho.


[flagged]


Probably being downvoted because the HN guidelines explicitly say to not comment about such things:

"Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting."

https://news.ycombinator.com/newsguidelines.html


As a matter of fact, it's in positive territory right now.

No, it's not a tangential annoyance. There are times when form IS function.


Maybe because it's an absolute statement ("this person needs to organise & format his writing better") about something mostly subjective.

In my opinion the website is readable, fast, lightweight, not distracting and pleasant to read. It's also accessible for people with disabilities, responsive, and works everywhere. I understand that not everyone is as into minimalism as me, just pointing out the problem in "this person needs (...)".


[flagged]


Talk about killing a fly with a sledgehammer. Simply needs one or two lines of css, max-width and possibly font-size.


I had two words there: "organize" and "format." This addresses only the latter.


The piece has several headings. I don't believe a blog site is going to magically add more to organize it.


[flagged]


Can you please not post unsubstantive comments? It looks like you've been doing it repeatedly, and we're trying for something else here.

https://news.ycombinator.com/newsguidelines.html


Dang, I try to behave, I really do. That little quip so clearly suggested itself it would have been physically uncomfortable not to make it. I will re-read the guidelines today and redouble my efforts.


It is not.


> Have you ever mentioned something that seems totally normal to you only to be greeted by surprise?

Once a (foreign) person was surprised at my dipping toast with Nutella into my latte. I was equally surprised by his surprise.


> It's technically possible to use @flaky for that, but in practice it's used to re-run the test multiple times and reports a pass if any of the runs pass

This is useful and fine. Someone wrote a test, and it now hits a race condition or something and occasionally fails. Let's assume we are very confident it is a problem with the test, not the product.

Choices:

Spend a sprint trying to fix it right now regardless of priority.

Turn it off and lose that coverage.

Buy some time.

In this context it makes sense, as long as there is a procedure to address these in some sane timeframe.

Maybe that is an example of normalization of deviance. But I think if it is discussed and the trade-offs are thought through, it is an OK thing to do at times. Remember, most development is not greenfield. You inherit a system when you start a job.



