Don't Feed the Thought Leaders (earthly.dev)
795 points by crummy on June 11, 2021 | 280 comments



Being a hedgehog is useful on the journey to domain mastery because sticking to frameworks saves you time and headache compared to not having any frameworks at all.

The 3 stages of domain mastery:

Stage 1 - No knowledge or structure to approach a domain (everything is hard, pitfalls are everywhere)

Stage 2 - Frameworks that are useful to approach the domain (Map to avoid pitfall areas)

Stage 3 - Detailed understanding of the domain (in which you can move through pitfall areas freely and see where frameworks fall short)

Hedgehogs are at stage 2. You move from stage 1 to stage 2 by adopting frameworks; hence, hedgehogs are seen as "thought leaders" because they teach the frameworks that lead MOST people to more mastery. Except when you're at stage 3, in which case frameworks lead you to more inefficiencies compared to your own understanding.

All good decisions must be made by stage 3 persons, but ironically, training is most efficiently done by stage 2 persons. Hedgehogs get more limelight because 90% of the population is at stage 1 and values the knowledge of stage 2 (and can't grasp the complexities and nuances of stage 3).

Many hedgehogs struggle to touch stage 3, and instead see stage 2 as mastery. This is compounded by the positive feedback loops of success - the frameworks save time, give them reputation, let them save stage 1 persons from their ignorance, and form the foundation of their current level and achievements. Frameworks are also convenient and broadly applicable to many problems; detailed domain mastery, in contrast, is difficult, time consuming, and highly contextualized.

All of this makes it hard to move beyond stage 2 into stage 3.


To be a good X you must follow the rules. To be a great X you must have followed the rules so well as to learn why they’re there and when they should be broken.

Works for almost any X - writer, programmer, driver, etc.


> To be a good X you must follow the rules. To be a great X you must have followed the rules so well as to learn why they’re there and when they should be broken.

In my experience, slavishly following the "rules" or "best practices" can actually be worse than never following them. Not understanding when it's good to deviate usually means a lack of understanding of why the "rules" or "best practices" exist in the first place. So much attention is spent following the letter of the rule rather than the problems it was meant to solve.

Look no further than modern-day "Agile" vs. the actual Agile Manifesto.


At the extremes, this is known as cargo-culting. People who have only the most shallow understanding of the rules in my experience also push back the hardest against anyone who "breaks the rules", and get locked into the 'one right way' even when it's actively harming the delivery of a project.


"Rules" often contradict each other. For example, in software development, "keep it simple" often directly conflicts with "dont repeat yourself" as it takes increased complexity to achieve reuse of code. I often see keeping it simple take a back seat in newer developers as they slavishly follow the don't repeat yourself trail. The end result is routinely a brittle, rigid, and overly complex solution.


DRY and KISS talk about different things.

DRY is about knowledge management. The intention is to avoid duplication of code that represents the same knowledge. It does not mean every coincidentally similar-looking piece of code must be made into a function, as that would result in premature abstraction, etcetera etcetera. It's why the "rule of three" for duplication is effective: it filters which duplication represents the same knowledge and which does not.
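
A toy sketch of that distinction (all names invented): two functions can look identical and still encode different knowledge, while one fact can appear at many call sites and still be the same knowledge.

    # Coincidental similarity: merging these would be a premature abstraction.
    def login_attempts_exceeded(attempts):
        return attempts > 3    # security policy

    def retries_exceeded(retries):
        return retries > 3     # network resilience policy

    # Real knowledge duplication: one fact, one place, however many call sites.
    VAT_RATE = 0.20

    def price_with_tax(net):
        return net * (1 + VAT_RATE)

If the security policy changes to 5 attempts, only one of the first two functions should change - which is exactly why they shouldn't have been merged.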

KISS is (ironically) a more complex topic. C2 wiki talks about this in an interesting way. https://wiki.c2.com/?KeepItSimple


DRY I would argue doesn't conflict nearly as much with a nuanced approach. "Apply Aggressive Compression to Your Code" certainly flies in the face of many other nice features of code as a working body, though

My more nuanced take, I guess, is "don't duplicate logic, except if it may in fact be mutable data". It does take some forward thinking to recognize that, e.g., this list of 10 rules should be unrolled and not compressed by meaningless helpers, etc.
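
A minimal sketch of that nuance (the rules are invented): the table below looks compressible, but keeping it unrolled as plain data leaves every row free to change independently.

    # Each row is mutable data, not logic to factor out. A "clever" helper
    # like express = standard * 3 would hold right up until one row breaks it.
    SHIPPING_RATES = [
        ("US", "standard",  5.00),
        ("US", "express",  15.00),
        ("CA", "standard",  8.00),
        ("CA", "express",  24.00),
        ("EU", "standard", 10.00),
        ("EU", "express",  22.00),  # the "pattern" already fails here
    ]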


> For example, in software development, "keep it simple" often directly conflicts with "don't repeat yourself" as it takes increased complexity to achieve reuse of code.

KISS and DRY are different paradigms, i.e. they come from different schools of programming.


KISS -

Most people would not recognize simple code if it hit them in the mouth.

KIES (keep it easy, stupid) is what they mean, and that means following idioms and frameworks that are familiar.



“Don’t repeat yourself” or making code “DRY” sounds/sells so much better than “tightly coupled”.


Most people agree with this. The actual challenge is to discern real rules from superficial bullshit. That is a level of criticality many people do not possess and some find hostile or disgusting.


Raymond Hettinger talks about this in "Beyond PEP8", which is one of my favorite conference talks

https://www.youtube.com/watch?v=wf-BqAjZb8M


That's a great talk. Thanks for sharing it.

He does well to explain how PEP 8 can be good but some people focus on it because they may not have experience to contribute otherwise.


That talk is fantastic, and I don’t even write Python. Thanks for sharing!


Another challenge is that it's very easy to think you understand the rationale behind the rules well before you actually grok them.


At work I learned a theoretical set of best practices for the software we integrate with. I was very enthusiastic about applying them. This caused me to learn the real set of best practices, mostly leaning on the stuff that works most robustly (even to recreate other features, usually ones that prove inflexible when pushed).


Especially when there are too many such rule lists, often contradicting each other. Finding the right list becomes the key for someone (a beginner to the domain) who has to follow the rules. That's just luck.


It’s even more implicit than that. For example there are social criteria and considerations of vanity.

Another example is that I recently wrote a comment that received greater than 10 votes and was flagged (I presume by users) almost immediately. The comment was thus both commonly agreeable (perhaps people could empathize with the subject matter) and yet a violation of social norms. An unspoken social rule violation.

That comment also received contradictory responses: I’m not woke, but.... Contradictory responses are generally an explicit mention of internal confusion where the person cannot even agree with themselves on social conformance: I’m not lying, but.... If it’s not complete bullshit then don’t posture with a qualifier.


Reminds me of a quote from the Mustard Seed Garden Manual of Painting:

"Some consider it noble to have a method; others consider it noble not to have a method. Not to have a method is bad; to stop entirely at method is worse still. One should at first observe rules severely, then change them in an intelligent way. The aim of possessing method is to seem finally as if one had no method."


Applies really nicely to music. First you learn the rules so you know when you can break them. But if you start breaking the rules from the start, people will think you have never learned them in the first place.


To throw another quote into the mix:

“Learn the rules like a pro, so you can break them like an artist.” - Pablo Picasso


A lot of good ways of looking at this, and your phrasing, @bombcar, is really nicely put.

In my opinion, once you play a game for a while, it's easy to know exactly when and how to break ANY rule: you just have to understand what the rules are there for.

Think of any agile framework: a ton of them are practiced like cults by most. However, if you realize that they are there for 2 simple purposes:

  1. you want to ensure nobody in the team is ever idle
  2. you want to ensure everyone is working on the most relevant available task that hasn't been picked up from others
Then it's easy to see which rules are useful to the goals and which are hurting you, thus you just gotta break them!

This is just an easy example, this is really about anything in life though!


You have taken 2 completely different key lessons from agile than I would have picked.

I think mine are closer to the agile manifesto, but perhaps yours are closer to how agile is actually practiced in most places.

Problem 1: you want to ensure that you build the right thing.

Solution 1: you should get feedback from end users as frequently as possible.

Problem 2: you want to be using work processes that are appropriate for your team and task.

Solution 2: teams should be able to modify their own work processes.


The Matrix.


To be great at X you have to write the rules.


Not really. Messi is great at soccer; he didn't write the rules of the game, but he was able to do things with the tools available that no one thought possible. To be great you usually have to be creative.


Messi can teach the ones who teach football. Not the best example. He was doing unbelievable things when he was a literal teenager. A surprisingly better example would be Cristiano Ronaldo. Ronaldo was absolutely frustrating to watch in his early career. It was someone trying hard to do flashy moves without any end product. He needed guidance, rules and structures. He also slowly transitioned from winger to forward/striker. He is immensely talented, but no bigger a talent than the top talents of every era. Messi is on a different level. He is an all-time great. Cristiano achieved what Messi achieved because Cristiano is disciplined in a way that no one before him was. The guy is 36 now and just topped the goal charts in Italy.


I literally have no idea what you mean by 'Messi can teach the ones who teach football', but my comment was not an invitation to discuss who is better. The rules I am talking about are the ones like 'you can't kick the ball out of bounds', 'you have to score between the goalposts', 'this is a corner kick', 'this is offside'. Neither Messi nor Ronaldo made those rules, but they are all-time greats in their competency.


No wait, he has something here. Not about who is better; that’s irrelevant. But, on “rules”. What’s being discussed here are “best practice” rules - strategies of social construct that are sometimes smarter to bend or break, or maybe can be changed entirely if you can prove a more successful method. The “rules” you’re mentioning are perhaps more akin to physics - fundamental laws that govern the game. You really aren’t supposed to break those, regardless of talent or success.


I meant he could mentor the mentors in his teens - an age at which he should have been getting mentored. He was that good, and surprisingly mature. He was a finished product in his teens. Cristiano wasn't. He was just one of the many talents among Robinho, Robben, Rooney, Van Persie, Fabregas, Quaresma (to name a few I remember) - all of them rated better than Cristiano.

And by rules, I don't mean offside, handball, or free-kick rules. I mean how to hold the line, when to dribble, when to shoot, when to hold formation, how to play as a team, end product, etc.


To be legendary at X you must go beyond X and invent Y, by creating a paradigm shift that echoes throughout your field.


Also, the social benefit of staying in stage 2 is strong. Stage 3 folks don’t mingle together easily, not just because there are fewer of them, but also because they have in some ways become themselves. It’s much easier for people to stick together when they believe in something together, whether it’s imperfect or not.

And then some stage 3 folks make a new stage 2 thing and the cycle continues. But I think people don’t want stage 3 attainment at the cost of giving up the social buffer (which is very reasonable, as being a social animal is in most cases a better life path than that of a lonely scholar/innovator).

EDIT: I think what I am talking about applies better to spiritual/philosophical/psychological attainment, rather than technological. The effect is probably still there for less spiritual things like tech or writing, but probably less so.


I can subscribe to that.


> Hedgehogs are at stage 2. You move from stage 1 to stage 2 by adopting frameworks; hence, hedgehogs are seen as "thought leaders" because they teach the frameworks that lead MOST people to more mastery. Except when you're at stage 3, in which case frameworks lead you to more inefficiencies compared to your own understanding.

While interviewing recently, I've found a similar anti-correlation between general competency and people who focus on teaching frameworks and libraries.

The more competent candidates basically don't do any teaching based on frameworks/libraries (but they might have experience mentoring individuals); whereas the candidates who focused on teaching frameworks specifically (often to groups) were the least competent - the more they focused on teaching, the less competent they seemed to be! I found this kinda surprising and worrying, though my sample size is fairly small. To clarify, I know only so much can be evaluated in an interview; I'm talking about basic competency.


There’s an old pithy quote about this that I won’t repeat here. I’ve seen the same results, but I also know that I may be totally underestimating their competency in an interview: I am not always competent to judge someone’s competency. Keep an open mind - teaching can become a skill trap like any other, and it may take time for people to readjust if that’s no longer their primary responsibility.


Frameworks are not domain knowledge. They look good on a resume, but you're being hired to deliver software, not play with React all day.


Your response to this article about how contingent advice is usually better than universal advice is to propose a universal theory?


I did notice the irony myself :)

The article talks about contingent advice being better than universal advice only at stage 3. If you're not there yet, universal advice is helpful. I think that holds true for most people and most subjects, including myself.

Originally, I thought the article did a great job describing a common scenario that usually occurs in decision making, and I wanted to describe my intuition on why I think it comes about. It's not really a universal theory, more my own digestion of / explanation for the interrelation between hedgehogs and foxes, and my own interpretation of the issues the article describes.


That makes sense!

I just realised my reply may have come across as mean-spirited rather than the light joke I intended. Sorry about that.


no worries! It is rather ironic.


A universal theory is not the same thing as universal advice. What would one universally do just because one knows the universal theory?

...Now that you bring it up, the OP is offering a piece of universal advice. The irony seems stronger there. Not sure if that invalidates his advice or not. Probably just invalidates taking it as a hard and fast rule.


No, they just taught you, a person with a level 1 understanding of domain expertise, about how domain expertise works ;)


I'm sure there are contingencies that can make contingent advice worse than universal advice.


Sure - in situations where time is critical, for example, fast, suboptimal decisions can be more effective than slow, optimized decisions. So having something general drilled into your head can, in those cases, be preferable.


> detailed domain mastery in contrast is difficult, time consuming, and highly contextualized

In its defense, it's also extremely intellectually gratifying


Also: often good knowledge allows you to see similarities in other domains - it feels like mastery/intuition/domain intelligence :) - it lets you instantly grasp new concepts in new domains. Works like isomorphism in math.


A mathematician is a person who can find analogies between theorems; a better mathematician is one who can see analogies between proofs and the best mathematician can notice analogies between theories. One can imagine that the ultimate mathematician is one who can see analogies between analogies.

- Stefan Banach


Sounds like category theory.


This is a useful way to frame it. I would add here that stage 2 is likely the most comfortable, and that the migration from stage 2 to 3 has an embedded disincentive structure.


While this progression may exist, it is not what I took from the article, nor the original idea behind the hedgehog classification.

It is more about some thought-leader being keen on blockchains, machine learning, supply-side economics, or what have you, and looking at every problem/situation through the lens of wanting to apply this technology/policy to solve it, possibly ignoring the downsides/details/side-effects.

The article gives the fictional example of a project “just needing a relational database” but the “domain expert” trying to push them to use SpringySearch because that can also work as a relational database (and because this hedgehog is sold on SpringySearch).


But why should I trust your 3-stage framework?


What makes you question it?


This also accounts for why you see such a noisy contingent of people clamoring for difficult things (ex: software architecture) to be reduced to a discrete process: it suits the way they are currently learning, avoids sunk costs in said learning, sidesteps the hard question of, "do I really know this?" and maintains the current social status quo.

At stage 3, things aren't necessarily easy, but you have the skills to navigate much larger amounts of uncertainty than stage 1 or 2.


This is pretty brilliant.


So frequently do I see the hottest new ECMAScript feature presented by one of these folks on Twitter, and I'm thinking how, out of the box, older browsers won't support it, and most junior developers don't have a great grasp of polyfills or transpilers. So the first thing I see is the limitations on using it in a production environment (or the small but non-zero work of updating my tooling to properly support ES5). Most of these folks don't seem to develop in environments requiring older browser support, but in either case, the nuance always seems missed. It's a caricature of the depth that we sometimes need to dig into.


For the stuff I work on, if the user's browser can't handle it then it's 99% sure they are not my target market in the first place. Of course YMMV.


I think this progression could make sense as a personal learning progression. The real world is probably more complex than this though.

See for example https://news.ycombinator.com/item?id=27468360 where the person mentions pressure to simplify advice, and Tetlock's own work shows that the hedgehogs were the more famous and successful people. So some people may migrate backwards by simplifying a message for maximum impact.


This is an excellent breakdown. Reminds me a lot of the Mithril comparison page: https://mithril.js.org/framework-comparison.html

Definitely at stage 3, which could also be the reason not as many people use it.


Domain mastery seems to be not as sequential as you stated. Stage 2 and 3 can happen at the same time.

If one stays open to new ideas while at the same time asking why each idea can be good or bad, one can escape the stage 2 trap.


What you've described suspiciously resembles the 'midwit' meme.


This is a great framework to think about domain mastery. Thank you!


Great post. Adding that the vast majority of hiring is based around stage 2, which creates a feedback loop of resume-driven development.


This is absolutely spot on, great interpretation.


I love this comment. IMHO it would be a perfect addition to the original article.


Thanks


> All of this makes it hard to move beyond stage 2 into stage 3.

Often, moving to stage 3 is a waste of time and resources. There are a few cases where stage 3 is your business's main secret sauce, but for most other things - commoditize and focus.

I've seen so many teams and engineers trying to master stage 3 with no real business need or ROI. Engineers love mastering things, but good leaders guide them toward the right things to master and keep them from getting addicted to useless stage 3 expertise.


> commoditize and focus

You say that as if there were any agreement about what that means. But the point of the article is that there isn't; you'll always find someone who insists on using technology (or technique) X for your project, only technology X will be different for each person. Moreover, X might actually make the project harder to understand, take longer to develop, or have other significant downsides.

The point of reaching stage 3 is not necessarily to be able to develop your own custom solution (although that can sometimes be valuable), but to be able to pick the right technologies for a given project from the myriad options available.


Here's how I interpret "commoditize and focus": standardize on a well-established and popular solution that's known to be widely applicable, so you can focus on adding business value in your product. Don't go for the new shiny thing being promoted by thought leaders, but an old workhorse. Do it even though you know the old workhorse is suboptimal on one or more technical axes, because in the bigger picture, the third-party ecosystem, talent pool, and other correlates of popularity and maturity are more important.

I've repeatedly failed to follow this approach myself and have come to regret it. As I mentioned the other day, I chose Pulumi over Terraform for infrastructure as code, because Pulumi is undeniably technically superior in some ways. I now believe that was a mistake.

I'm trying to figure out how to implement the "commoditize and focus" approach when it comes to standardizing on a web application framework for my company. I believe the classic server-rendered, multi-page approach is the right one most of the time, so I'm talking about that kind of server-side web framework. I implemented our current product in Elixir using Phoenix, and I now believe that was a mistake as well.

If we suppose that the top three server-side workhorse languages are Java, Python, and JavaScript via Node.js (as implied by the availability of a certain Amazon SDK for those three), then I should choose a popular framework in one of those languages. If I still try to hang onto technical merit within these constraints, then I suppose I should go with Java and Spring, since Java is statically typed and the JVM is high-performance. But Spring has such an enterprisey smell about it.

If I go with what I already know, that would be Python and Django. But CPython has such awful performance, dynamic typing has known drawbacks at scale, and choosing something just because it's what I already know feels far too much like becoming old and set in my ways (I'm 40). And a strike against Spring and Django, as well as Rails and others, is that they require knowledge of a different language on the back-end than the one we already have to use on the front-end.

So I guess I should go with Node.js. Maybe Express with a view engine like Pug? But to really take advantage of using the same language on both the front-end and the back-end (I don't shy away from complex front-end JS where it's needed), I should use an isomorphic framework, something like Next.js. But Next.js throws away web fundamentals like being able to submit a form the old way without JavaScript. And what about bundle size and performance on low-end devices? Yes, I'm supposed to hold my nose and tolerate mediocrity for the greater good, but should I even do that at the end-user's expense? And the better isomorphic JS frameworks (Remix, Marko, SvelteKit...) are too bleeding-edge. I don't know...

So, you're right, there's no one right way to "commoditize and focus". Sorry for the wall of text; this thread hit too close to home.


Domain expertise looks like understanding all the pitfalls holistically and knowing the right ways around them. In the case you describe (deciding on technologies), this type of decision often falls to CTOs because they need complete understanding of all pitfalls of an eng department, many of which are business, people, or financial issues, not tech.

Technical advantages need to be weighed against scaling up technical velocity (hiring developers). This is why you might want to forgo more niche technologies for mainstream ones. However, if you're never going to scale past 3-10 people or the technology really suits your business case well (i.e. WhatsApp built on Erlang), you can break that rule. Thought leaders on both sides will advise for either case, but stage 3 will know their requirements very, very well, and how to choose the right thing that avoids all the pitfalls. Another way to put it: if you hit an unanticipated pitfall down the road later, you didn't really understand the domain.

Some decisions are literally millions of dollars or more for the business, and can only be made once - in this case, you definitely want a stage 3 person who sees all the pitfalls. If you decide on lisp for your language and you can't hire enough developers to scale up, you might be bleeding customers due to lack of engineering velocity, or large price tag acquisitions may fall through because they can't do anything with your code.

Other decisions are worth 0 dollars to the business, and many engineers spend too much time on them trying to do stage 3 decision making. They usually do this with an incomplete understanding of the pitfalls (most 0 dollar decisions don't have pitfalls by definition, just a bunch of "better or worse" arguments from thought leaders - React vs Vue, anyone?).


I think your description of the thought process and the eventual realisation that it is really hard to choose something that is pragmatic is spot-on.

FWIW, I would agree that most of the time you don't need an overly fancy solution. Some technologies are good for many (but not all) situations - such as relational databases. Others are useful in much more specific scenarios. That still leaves a lot of room for debate about which specific tools to use. It also leaves a lot of room for debates along the lines of "yes, I know that the whole 2000 people company uses bongoDB at scale, but in this particular case, it's actually not a good idea", something which unfortunately many people can probably relate to if they've worked at a bigger company.


There is some value in it: you can become the guy people call in when their needs go beyond the standard framework. In the right situation this sort of work can pay handsomely.


It's cathartic to read other people who have to go through this.

I'm fighting red tape for my team as we build out a dashboard.

Outlook is packed with 1–2 hour meetings for the next 3 months where so far I'm:

* being asked to load test our system to make sure it can handle the load (of 3 people?)

* being asked to integrate with various analytics platforms so we can alert some poor schmuck at 3 AM in case the API goes down then (it's not a vital part of any platform)

* told to have this run in k8s since everything runs in k8s

* other pedantic tasks by Sys Ops who think everything is a nail and love to argue their points ad exhaustium (or worse, argue why their fav stack is the golden child)

I understand the need for standards and making sure they're followed, but there really needs to be a human element of "is this truly needed for what I'm trying to do?". So many engineering departments are all about automation, but don't truly think through how much automation is needed, defaulting instead to a one-size-fits-all approach.

I appreciate that this article comes to the conclusion that the more correct an answer will be, the more complicated it tends to be. I wish more people in decision making positions would understand this.


The minor conclusion of this article was the more interesting (and perhaps more practical) of the two:

Hide concessions to various leaders in the project roadmap.

This isn’t just a “bureaucratic trick” as the OP suggested, it’s actually a way to convert unconditional advice into contingent advice, by encoding a priority.


> to convert unconditional advice into contingent advice, by encoding a priority

This is one of the most important things I've learned as a developer, and one that I thought I invented myself, before I knew about agile, by keeping a whiteboard near my desk with yellow sticky notes ordered by priority:

"Yes, I get that it's a must-have feature, but where do you place it in relation to these other features?"

The concept of prioritization of features, and of saying "if I stopped dead at some arbitrary point in this list, would you have been happy with your order?" seemed so eye-opening to people at the time.


Sometimes, the features really are must-haves though. Let’s say it’s March 2020 and your boss wants you to design a mass-market covid vaccine. You have three requirements: it needs to be safe for human use, it needs to be effective at preventing covid, and it needs to be possible to manufacture. If any one of these is missing, your design is useless. I think a similar dynamic is visible in many software projects.


That's totally fine - but people need to also be aware that if something is really a must then they have to be willing to spend adequate time and resources on getting it done, instead of assuming whatever resources they have on hand will be sufficient.

It's amazing the things that stop being a "must have" as soon as they have to spend more money.


It's a never ending struggle to get people to create this ordered priority. I always tell my developers to say

"if you do not give this an ordered priority, I will resolve items as I see fit. Should we need to stop for one reason or another, there is no guarantee of which have been resolved".

Often times that is okay. I also tell them to always take the ones they're most uncertain about first. Better to front load hard problems and uncertainties.


A manager once brought up "there are three levers - scope, time (deadline), money (people)", and while it's probably not revolutionary, it did stick with me.

Add more "must have" scope, and something else has to give.


I would argue that the scope lever should be set to 60%, time 60%, and money 35% for software projects.

Software projects are kind of like ovens - if something cooks perfectly at 300 (temperature units), using 25 minutes and 5 (money units), that does not mean it will cook perfectly at 600 temperature units using 12.5 minutes and 10 money units. Most likely it will burn.


Even there, drug companies often go through the features in a particular order. You start with a range of formulations which you suspect will be safe for human use, you test them to see which are effective, and then you hand them off to a different set of chemists and chemical engineers whose job it is to figure out how to manufacture the doses at scale.

Every part is necessary, but that doesn't mean that there isn't an ordering. Finding something that's easy to manufacture is pretty useless if it turns out later that it kills the patient. On the other hand, a drug that's safe and effective, but is difficult to manufacture is still a viable drug; worst case, you do what drug companies do all the time and charge obscene prices per dose until you figure out how to scale the process.


Then you do what the drug companies did: you hire consultants to do it for you. I did architecture work for one of the major covid vaccines for most of 2020 and that’s exactly what they did.

The overall tone of the program was “we basically have infinite money, just get it done and the government will pay us back”. So they had a fucking army of consultants to accelerate a process that normally takes 5+ years down to 6 months, and they were working down multiple roadmaps just in case they hit a block on one of them.


Yeah, that's not so much a "nifty bureaucracy hack" as a core skill to completing any project. It doesn't even have to be 20 unrelated people's feedback... it's my own priorities quite often that I mercilessly stuff on the backlog. YAGNI isn't just at the micro code level, it's a core project design skill. In fact I probably YAGNI my roadmap much harder than my code since I often have a good idea that I will in fact Need It at the microlevel after decades of experience and can save some time at that level, but at the project roadmap level anything you can trim is getting the product out generating value sooner.

(Obviously one can go too far, blah blah blah. But just as with code, we have a much larger problem in practice grabbing too much from the project feature buffet than too little.)


And then some top-level priorities shift and you look at the backlog thinking "if only we had done x before that".

Usually when your unfinished prototype ends up in production. That's the danger of reporting progress to people who think you can go to space on a paper glider.

Probably half or more of start-ups end up failing like this, as their quickly delivered prototype fails to capture the market due to not actually being better enough, or crumbles under the initial success.

Good for investors and managers who bail out early enough, very bad for users.


> Yagni originally is an acronym that stands for "You Aren't Gonna Need It"


This is it. I do a lot of consulting work around this problem, and the roadmap is where the business and technology meet. It’s where you convert sprints into calendar boxes. It’s also the part most companies do poorly because nobody likes to spend money on good project/program managers (hint: hire product managers instead even though they’re ~25% more expensive because everything in 2021 is a product in some way).

When you do it this way, you can decide well ahead of time if you need to bring in a contractor to build a must-have feature your team won’t have bandwidth for. It flips the narrative and puts the responsibility on the business side (which usually controls the budget anyway).


This works especially well when you set and own those priorities, or if your management supports those priorities. Everyone who wants their feature will need to justify to you that their feature deserves a better placement on your roadmap.

It does not work if you cannot defend your priorities.


> Create an extended product roadmap and put those items at least a year off into the future “and as long as they don’t seem relevant, you can just keep pushing them into the future.”

That actually seems to me like the root cause of all the calamity in the article, a culture of lying.


I don’t see it as lying in any meaningful way. Specifically, in the article the problem was that there was technical feedback from many parties that have very little, if any, stake in the matter. I’d be willing to bet that none of them even bothered to look at the product roadmap to check on the progress or status of their suggestions.

Rather, the “cause of all the calamity” in the article seems to be the fact that the business has a culture of requiring feedback from random individuals who have very little stake in the project or product delivery.


Upon further reflection, the root cause is poor communication, and the 'bureaucratic judo trick' is just a continuance, or perhaps even an escalation, of an organization's poor culture.


This is absolutely not lying and I'm disappointed that anyone thinks it is. This isn't "not doing the thing and saying you did", it's just setting its delivery date into the future, an entirely routine operation for every software project that actually ships.


From my perspective, the tactic misleads the stakeholders about the real priorities. It's a deception and corrodes trust in the organization. The article even describes it as a 'bureaucratic judo trick.' It really seems to me analogous to the micro-services guy or the architect guy insisting their way prevails.


I think we’re talking past each other. In my reading of the original article, the author needed to get consensus from various individuals in the organization who were by definition not stakeholders. They had little to no stake in the project or its goals and could therefore block the project with no personal risk.

I could be misreading the article, though.

I agree that it is detrimental to trust to lie about the project roadmap to stakeholders.


I’ll add to the list:

- Ceremonial unit tests for every little thing. The whole system is buggy as hell and we don’t have any confidence that the unit tests are truly covering critical parts of the app. But alas, test coverage, the god damn Pope that can never be bemoaned.

- I’m not making this one up: A/B testing for an internal enterprise app.


Test coverage is almost the perfect illustration of Goodhart’s law. Good programming practices do result in high test coverage, and coverage is very easy to measure, but it is also very easy to fake with useless “tests”. So when coverage becomes the target, it goes up but stops being meaningful.
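
A minimal illustration of that failure mode (the function and tests are invented): both tests below give 100% line coverage of discount, but only the second one can ever fail.

    def discount(total):
        return total * 0.9 if total >= 100 else total

    def test_discount_useless():
        discount(50)       # executes both branches...
        discount(150)      # ...coverage goes up, meaning doesn't

    def test_discount_real():
        assert discount(50) == 50
        assert discount(150) == 135.0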


> but very easy to fake with useless “tests”

While it doesn't alleviate the problems entirely, you can also run things like mutation tests that check that your unit tests actually test conditions, rather than just execute all the code.
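
A hand-rolled sketch of the idea (tools like mutmut automate this for Python): flip an operator in the code, re-run the tests, and expect a failure. A test without assertions lets the mutant survive.

    def in_stock(count):           # original
        return count > 0

    def in_stock_mutant(count):    # mutated: > became >=
        return count >= 0

    def vacuous_test(fn):
        fn(0)                      # runs the code, checks nothing
        return True

    def real_test(fn):
        return fn(0) is False      # pins down the boundary case

    assert vacuous_test(in_stock) and vacuous_test(in_stock_mutant)  # mutant survives
    assert real_test(in_stock) and not real_test(in_stock_mutant)    # mutant killed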


High coverage isn't enough but, in my experience, it's a great place to start.

I've written a depressingly high quantity of code in my career that blows up literally the first time it runs. I'd much rather that happen in a unit test than in production.

Any test that exercises a given branch is better than nothing.


Coverage can tell you what you didn't test, but it can't tell you what you did test.

> Any test that exercises a given branch is better than nothing.

I disagree with this. If you have a test that doesn't actually test anything, you can't tell that you're not really testing that branch. No test is better than a bad test because it's easier to fix.


I have seen bad unit tests being introduced when engineering management starts enforcing a threshold (80% coverage). Often developers will scramble to test trivial methods, such as getters and setters, but will not write any suitable tests that actually cover the business logic. It is even worse when management only enforces 80% coverage for new changes. In those scenarios developers go out of their way to encapsulate changes in a separate class to avoid having to test the original codebase in a meaningful way.


I think most people, including the managers are aware of the problem you've highlighted. What's the solution?


The solution is don't measure test coverage. Measure something you actually care about, like minutes of downtime.


Every time a bug is found I ask my team to write a unit test for it to prevent regression for that bug.

During peer review I encourage Engineers to verify that the actual business logic has been tested, for example calculations.

If done correctly, a codebase with low unit test coverage can actually have higher-quality tests than one with an enforced 80% threshold.
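
For what it's worth, a sketch of that habit (the function and bug number are made up): each fixed bug gets a test named after it, so that exact failure can't silently come back.

    def parse_price(text):
        # Bug #1423: inputs like "1,099.99" used to raise ValueError
        return float(text.replace(",", ""))

    def test_bug_1423_comma_separated_price():
        assert parse_price("1,099.99") == 1099.99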


Back when I was struggling to develop features in overengineered hell, I commented to my friends what a breath of fresh air updating a personal site with scp was.

They all gave sighs and shudders of disgust, but then again, they had normal programming jobs, so I suppose it seemed quite backwards to them.


Oh, but scp won’t update it atomically, so you should switch to a scheme that will. Then all you need to do is set cache policies correctly, coordinate with your CDN, and maybe do a staged rollout, just in case.

/s

Seriously though, rsync is your friend. :-)


> Seriously though, rsync is your friend. :-)

Even rsync might not be atomic enough for some situations[1] since it'll update files as it goes rather than in one huge transaction at the end.

[1] I worked on the World Cup 2006 site for Yahoo! and we had this issue - solved with 'rsync --link-dest' and swapping symlinks.
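
For anyone curious, here's roughly what that scheme looks like, sketched in Python with invented paths: each deploy builds a fresh release directory hardlinked against the previous one via --link-dest, and a single rename flips the "current" symlink atomically.

    import os, subprocess, time

    SITE = "/var/www/mysite"
    release = f"{SITE}/releases/{int(time.time())}"

    # Hardlink unchanged files against the previous release, copy the rest.
    subprocess.run(
        ["rsync", "-a", "--link-dest", f"{SITE}/current/", "build/", release],
        check=True,
    )

    # Point a temp symlink at the new release, then atomically rename it
    # over "current" so readers never observe a half-updated tree.
    os.symlink(release, f"{SITE}/current.tmp")
    os.replace(f"{SITE}/current.tmp", f"{SITE}/current")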


Write a script.

1. stop service
2. copy files
3. start service


Now you have two problems, because for high availability you need a failover or, better yet, a shadow secondary service.

Hot patching wins, but needs good design to work in the first place.


You just need to decide how appropriate that is for your situation.

As an industry I suspect we tend to over-engineer rather than under. There is a huge spectrum between my single person business with a brochure site and what Google or Apple needs. I'm willing to bet most programmers are working closer to the first than the second.


> 1. stop service 2. copy files 3. start service

That assumes you can stop the service which, for many things (like the World Cup website), isn't really possible.


You know, I think I might have switched to rsync at one point - I haven't had the site in a few years now, so my memory is a bit hazy.

It was small enough (no heavy media files) that I didn't mind if I left some unused files up there. Pretty much the only thing I had to do was make a copy of the sqlite database each time, just in case.


--delete-after will delete files on the destination after everything else has synced so you can be sure you aren't linking to a non-existent asset.


The other side of the coin, which you are not telling, is: let's ship this small project to production without all those useless bells and whistles, and then fast-forward 12 months: suddenly everybody is using it and it starts failing spectacularly, and now all those teams that complained in the beginning have a fire to extinguish. I've been on this other side of the coin too many times.


> being asked to load test our system to make sure it can handle the load (of 3 people?)

The problematic load in a dashboard isn't users; it's querying the data sources to get up to date information. For example, if you're running a query to aggregate a bunch of things with lots of joins and that query takes 1.5s to run but your dashboard tries to run it every second so it can be 'real time' then you're in for a bad time even with just 1 user. You absolutely need to load test a dashboard application that's running against production data.
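
One common mitigation, sketched below with invented names: refresh the expensive aggregate on a fixed cadence in the background, so every dashboard viewer reads a cached copy and N users cost one query per interval instead of N.

    import threading, time

    _cache = {"data": None}

    def refresh_loop(run_query, interval=30):
        # One expensive aggregate query per interval, however many viewers.
        while True:
            _cache["data"] = run_query()
            time.sleep(interval)

    def get_dashboard_data():
        return _cache["data"]    # cheap read for every dashboard user

    # threading.Thread(target=refresh_loop, args=(my_query,), daemon=True).start()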

> being asked to integrate with various analytics platforms so we can alert some poor schmuck at 3 AM in case the API goes down then (it's not a vital part of any platform)

It might not be vital right now, but if you make a dashboard for it then it'll quickly become vital. Putting metrics in front of people focuses them on those metrics...


I've downvoted your answer here as I think it is an ungenerous interpretation of the post you replied to.

It's just as likely that the OP already knew that what you are insisting on is not relevant to their use case. That might be why they stated it.


Cathartic is certainly the word. The title in particular really hits the mark for me.

There are a lot of people talking about computer programs, and telling us we should do things this way or that way. Even telling us that their way is certainly the best or only correct way.

A great many of these people - perhaps the majority - are plain wrong. Some of them talk such nonsense that I suspect they don't have any actual ability to program at all!

How can they be so sure of themselves?


> * told to have this run in k8s since everything runs in k8s

I've seen a production system handling one request (which takes a handful of ms) every 2 seconds (work hours only, mind) in k8s running 8 pods. It is quite breathtaking.


How do they handle the load balancing with that much traffic?


I feel your pain. Been there, done that, probably still have the t-shirt.


I enjoyed that read. I suspect that it probably pissed off a few folks.

I'm a grizzled, scarred old codger that spent most of his career saying "Are you sure that's a good idea?", only to be ignored, and then put in charge of mopping up the blood.

I have learned that "I told you so." is absolutely, 1000% worthless. It doesn't even feel good, saying it.

What I have learned, is that, when I see someone dancing along a cliff edge, I quietly start figuring out where the mops are kept. If that person has any authority at all, I'll never be able to stop them from their calisthenics.

One of my favorite quotes is one that pretty much describes "hedgehogs":

    "There's always an easy solution to every human problem; Neat, plausible and wrong."
There's another one, by the same chap (H. L. Mencken):

    "The fact that I have no remedy for all the sorrows of the world is no reason for my accepting yours. It simply supports the strong probability that yours is a fake."
Of course, the issue is that for every 10,000 appalling, messy, featured-on-rotten-dot-com failures, there's one spectacular success. Since humans are biased to think of successful outcomes as more likely than they actually are, the ingredients for that success become a "recipe," and are slavishly reproduced, without any critical thought, or flexibility.

It's like a witch doctor's formula for headache cure is bat urine, dandruff from the shrunken head of a fallen warrior chief, eye of newt, boiled alligator snot, and ground willow bark. The willow bark is what did it, but the dandruff thing is the most eye-catching ingredient, so it gets the credit, and every time the chief gets a hangover, they start a war.

Somewhere down the road, a copycat substitutes hemlock for the willow bark, and headaches become a death sentence.


I’ve found that management and decision making is much more of a social thing than anything else. Which is probably why your “I told you so”s feel so worthless. I don’t think quietly making people crash into a wall is the best way to handle it either, but having worked in the same political organisation for a decade I can certainly see why it’s easier to end up in that category.

I prefer to drive into the wall with people instead, working at it together, when that’s what is going to happen despite any concerns I have. Usually when you end up being right, people will listen to you more the next time if you’ve stood there with them.

It also helps a lot when your prediction turns out to be wrong. When RPA became a big thing in the Danish public sector a few years back, I was one of the stronger voices against it in most of our national ERFA networks. When we got the clear message from the top that we were going to do this, however, I jumped right in and helped us choose and build what is now the leading RPA setup in any Danish municipality aside from Copenhagen. I still think RPA is really terrible from a technical perspective, but I can also see the merit in how it’s currently saved us around 90 years’ worth of manual case-work at the price of a few months of developer- and support-time in total. Because I was quick to jump aboard what I still thought was going to be a sinking ship when it was going to sail no matter what I did or thought, people don’t hold how wrong I was against me but instead lovingly tease me or sometimes cheer me up with other times where I’ve been right.

You have to want to do this, of course. If your workplace doesn’t have the sort of people you’ll want to drive into a wall with, then your way is probably better than mine.


> When we got the clear message from the top that we were going to do this, however, I jumped right in and helped us chose and build what is now the leading RPA setup in any Danish municipality aside from Copenhagen.

That sounds almost exactly like traditional Japanese consensus.

Everyone argues for their opinion during the planning meeting, but once The Big Boss does the "chopping motion" with his (it's always a "he") hand, then everyone is expected to fall in line, and commit to the team effort.

They actually despise "I told you so." It's not smart to do that, in a Japanese corporation.


Things may be a lot better if there's an official mechanism in an organization for delivering the "I told you so" message, because, as hurtful as it might be, there is a lot to learn from those kinds of people.

In the Japanese corporation, Japanese culture seems to drive experienced older people's pride high. It's not a wrong or a right thing. It's just the social mechanism, and once we know that, we might be able to leverage it.

In the end, the game in every organization is not just about being right or doing the right thing, it's also about power, authority, influence.

Back to the great-grand-parent comment: I assume, being a grizzled, scarred old codger, along the way you may have found a method to identify people who think like you. If you have, I would much appreciate it if you shared it here!


All dictators despise being held accountable.


I see myself in the same role in my organization, except that I think in terms of a different bodily (semi) fluid.

I even have had crises of confidence, thinking that "I told you so" syndrome is a psychological issue with me. I do tend to be overcautious, and tend to underachieve because of it.

But I get a grim satisfaction in knowing that when the next time the bodily fluid hits the air circulator, my pail and mop will make it liveable again.

There was some dedication which I thought came from a John Le Carre novel, "For those who served and stayed silent". I can't find the source now, but that's my spirit.

(I am not in the IT area, I am in academia.)


"I told you so syndrome" may be a normal thing. I once read a psychological paper talking about "one's willingness to make personal sacrifice for making other people feel remorse and learn when they make mistake" or something along that line (Shame I can't find the link nor remember the exact title)

"I told you" is a social sacrifice I suppose.


I'd rather be the guy that sells the mops. These people won't respect you until you take their money.


I believe those are called "Security Consultants."


Sometimes people will insist that sticking beans in their nose is absolutely necessary.

https://archive.uie.com/brainsparks/2011/07/08/beans-and-nos...


I love that! Thanks!


> I have learned that "I told you so." is absolutely, 1000% worthless.

This is proof of bad company culture.

Doing post-mortems is important to learn what went wrong in the decision process and how to prevent it.


Different thing. "I told you so" is a smug, nasty statement. It wins no friends, and closes the ears of those most in need of it.

A postmortem is a clinical, reasoned, and scientific review. Everyone is on board, and agrees to abide by the results.

I worked for a Japanese company, for a long time. I made some colossal mistakes, during my tenure, and was told "That was, indeed, a mistake. We expect you to mitigate it, and not repeat it." Often, I would actually get more trust and responsibility, afterwards.


> Different thing. "I told you so" is a smug, nasty statement

I agree. The fact that the parent feels like his warnings were ignored and his voice was unheard is a cause for frustration.

Having a postmortem process after each incident would prevent such frustration.

> A postmortem is a clinical, reasoned, and scientific review.

I'm well aware of this.


Yup, I tell my teammates: fix the error, not the blame. I refuse to find who did wrong. After an initial hesitation, suddenly everybody jumps to it and solves the issue.


Sometimes it's necessary to find who did something to find out what happened (maybe), but if the culture is good that's not a problem.

I've had no issues admitting mistakes at my places of work, and vice versa: "This is what happened/I did, which led to this. We fix it this way. What can we do to prevent it/similar things happening in the future?".

Edit:

People do most things with good intentions and for hopefully good reasons. When things go wrong it's usually down to unforeseen interactions or second order effects (or brain farts), and/or a lacking review process.

The person making "the mistake" is usually just the last person crossing a faulty bridge, and if it wasn't them it would've been the next person. It's not a problem identifying the person if everyone realizes we all want a sturdy bridge.


What you say makes sense in a good atmosphere. I work in a very toxic culture. Once people start blaming someone, their own blood boils with their rhetoric, and then it becomes a bitter fight.

I see it as etiology vs. teleology - rather than thinking "how did we get here?" we think "how do we get out of here?" The two are interrelated, but the second somehow gets things moving, and reduces resentment in toxic workplaces.


How do you, or how should anyone in a similar situation learn that actions have consequences?

I work for a large (not FAANG but spatially close) ecommerce company, and I’ve yet to see substantial changes or learnings after outages or mistakes.

I often find that wishy-washy post-mortems smear responsibility and deflect accountability. This doesn’t incentivize a change in behavior, and when people blindly get more trust, they often seem to simply repeat mistakes.

I think I’ve yet to master post-mortems and transforming “I told you so”s into improvements - I’d appreciate tips and ideas, thanks!


If you ingrain a culture of "actions have consequences" you disincentivize actions. This is why everyone is fine with letting nowhere projects run ahead, and adding to technical debt, or just keep on incurring running costs instead of cutting them.

"If I don't make a choice, it's not my fault".

It's much better to establish a culture of "try doing good things, and we're in this together". If you trust your colleagues and are good at recruiting, you'll get a lot more done.

I don't know if I have any specific tips and ideas for post-mortems as it usually... "depends".


If you work for an eCommerce company, then invest in mops. Hackers love eCommerce companies that fail to learn from mistakes.

Someone mentioned "bad corporate culture." In fact, the OP was really a sort of indictment of dysfunctional culture.

The Japanese are heavily process-oriented (not always a good thing). They have a consensus-based approach, where all stakeholders agree to a common set of rules and remedies before the meeting begins. If a problem is found, then the meeting doesn't end until there is a plan (and responsible person) for a remedy. Assigning tasks (not blame) is a goal. People are assumed to accept personal responsibility for their own mistakes. It's not the job of the meeting to do that (there's a common cultural mythology of responsible managers committing suicide, if they screw up badly enough. I never saw it, so I can't speak to its accuracy).


> I work for a large (not FAANG but spatially close) ecommerce company, and I’ve yet to see substantial changes or learnings after outages or mistakes.

I worked for Amazon and postmortems were taken seriously and done regularly - but things can be different in other teams.

If people warned about a risk in the past this would be noticed.

If nobody flagged the risk people would start asking why.


My leadership is constantly pushing for “hedgehog” style advice to be depersonalized, encoded in policies, and handed over to bureaucrats or automation to enforce.

Trying to empathize with their position, I think they think failure happens because the right hedgehogs didn’t show up to the design review that day, or forgot to harp on whatever point that time. They are never satisfied with the “it depends, there’s no hard and fast rule, you have to let the experts think about it in context” responses I give them when pressed for policy. This is limiting my advancement. But worse, someday someone will join the team and will write those hedgehog policies, and then I’ll have to live under them too.

Software engineering is a thinking person’s game. I get that management wishes it weren’t, but it is.


Leaders rarely want the nuance. 99% of the time the answer is "it depends", or you have to ask follow-up scoping questions. They just want to be handed a decision without the color or limitations. Sometimes that leads to future uncomfortable conversations where they assumed they would get Capability X, but you only gave a "yes" or "no" because that's the preferred level of detail, leaving the rest to assumption.


You are so right and I'm so sorry. It is truly miserable at so many companies for exactly what you wrote here:

"Software engineering is a thinking person’s game. I get that management wishes it weren’t, but it is."


Author here, thanks for sharing this. Let me know what you think. I'm trying to connect the dots between research on expert advice and our field's 'thought leaders'.

The connection is a bit tenuous but I think contingent advice can be shown to be better than non-contingent advice. I also think people are too confident in their opinions.

Also another submission here: https://news.ycombinator.com/item?id=27462255


Great read.

One thing I've found (as a person who advises engineering managers and startups) is that recipients of advice seem to value non-contingent advice more. They just want simple answers that don't make them think.

When someone asks me a question like "how should I interview candidates?", my default answer is "it depends". Tell me about the role. The company. The culture. The product. Remote or in-person? What's the team like? Then I can give a framework that gives you the answer. But people want answers like "use take-homes" or "do 2 behavioral interviews and 1 coding interview".

Same for technical decisions. They don't want to hear "it depends". They want to hear "use Rails and MySQL hosted on Heroku".

So I naturally find myself being pushed to give non-contingent advice.


I personally find that I want opinionated advice in two scenarios: either all the options look indistinguishably similar, or they present very different sets of trade-offs that leave no clear winner. At that point, the advice is more a tool for breaking decision paralysis than a choice between options with noticeably different outcomes.


Good point. I find outlining the two (or more) possible scenarios as explicit options and asking "I'm on the fence, which would you do?" is the best way to get that type of opinionated advice from "the foxes". Otherwise you quickly get the "it depends" answer.


Yeah, I think that could be true in general. Tetlock found the hedgehogs were more famous and more wrong, and I got the idea from him that that was because simple advice is more sticky and works better in sound bites. But it could also work the opposite way -- i.e., the more people ask you for advice, the more you learn to tell them what they want to hear, which might be oversimplified. So you get trained to be more of a hedgehog over time. Interesting idea.


So consider the position the people asking the questions are in. They're facing a problem; they need it solved.

All of us, when we're in that position, desire a solution. I'm not sure what differentiates those who want to fully understand the whole solution space, and all the context that dictates -why- a particular solution may be the 'best' (given a specific set of tradeoffs), but certainly, whether we are like that or not, we all desire the right solution ASAP.

I'd be super interested in how you respond to those who ask such questions; do they seem interested in explaining their problem in detail? If you, rather than say "it depends", instead immediately launch into questions, are they engaged in answering them? Can you then finish with a "given what you describe, because X, Y, and Z, I think (solution) would be the best fit for you. It has the downsides of A, B, and C, but those don't apply to you", or whatever. I.e., basically change the tone to always be focusing on solving their problem, while also allowing you to inform them, rather than "it depends" which could imply "there isn't a clear-cut solution to your problem".


Do managers who accept contingent advice from you tend to produce better outcomes?


This is a good question. As much as I find them hard to stand, there are cases where having an ideologue in the back of your mind (or one on each shoulder) makes it easier to play out a situation in one’s mind.

In other words: I don’t need a rule per se, just an argument. I’ll work out the contingencies on my own.


And in software engineering, the answer is almost always "it depends".


This is pure gold for fresher developers, and something more experienced devs could use a reminder of.

Every fad and every champion of every technique or framework has something to teach you, and they are often very happy to teach it to you at the wrong time. Trying to please everyone at the start of the project is tantamount to design by committee, and is a sure way to kill a project.

To a hammer, everything looks like nails.

Well written.


Thanks for reading it. It was one of those ideas bouncing around in the back of my head for a while but hard to put into words; then I read something about Tetlock and the dots sort of connected for me.

Software advice isn't totally a prediction, but it sort of is.


I think this is generally good advice for software engineering, except when it's not. The problem is that some bad ideas become better ideas by virtue of being popular ideas. Write a shitty framework/language/technology and you have nothing; convince a million people to use it and it becomes compelling, because it has a lot of users working with it and solving problems.

It's the classic stone soup story[1]. You see this especially with software and tools that front-load the new-user experience, making it really easy to do trivial things but failing catastrophically when you need more.

You also see the reverse of this: great ideas that don't get buy-in, failing by virtue of being too niche.

1. https://en.wikipedia.org/wiki/Stone_Soup


There is a very similar problem in advice-giving for technical questions, the problem of "Why do you want to do that? You should do this instead." I've seen others recommend trying to ask binary yes/no questions ("I think it's like this. Yes/no?") or to turn an open-ended question, when asked, into a set of binaries rather than guess at the intent.

The property that seems common to both is benchmark-setting. "Kicking the can down the road" on less productive advice is premised on knowing that it doesn't fit your success benchmarks, but not wanting the confrontation (since a hedgehog benchmark is going to boil down to a single-issue attachment). Likewise, a battery of narrow binary questions with a definite pass/fail characteristic constructs a form of fox knowledge - it's pragmatic in how it describes the "potential shape" of the outcome, so it makes for a better holistic benchmark than asking "what's the best way to do this?"


> the problem of "Why do you want to do that? You should do this instead

IIRC, there's a word or idiom that describes this kind of solution; I can't think of it and now it's going to bother me until I do. It's a Stack Overflow issue: someone asks "How do I do X?" Someone will counter, "Why do you want to do X?" and, upon receiving additional information, answer, "You don't want to do X, or this other thing you're doing before doing X. You want to start this way and go down this path, and that way you don't have to do X." Maddening!


I remember now! It is called the XY problem.

https://meta.stackexchange.com/questions/66377/what-is-the-x...


A side point: the hedgehog/fox analogy originated with philosopher Isaiah Berlin.


Technically, I suppose, it was Archilochus: "The fox knows many things; the hedgehog one great thing."

Tetlock seems to have a slightly different interpretation to Berlin - (paraphrased from [1]) "hedgehogs have one grand theory; foxes are skeptical about grand theories".

[1] https://longnow.org/seminars/02007/jan/26/why-foxes-are-bett...


I was immediately reminded of the fable of the Fox and the Cat (in which the Cat wins because following its one heuristic takes no time while the Fox has to deliberate to find the best of its many smart solutions – and the dogs are coming fast). Apparently the tales are related https://en.wikipedia.org/wiki/The_Fox_and_the_Cat_(fable)#Th...

(while Aesop's https://aesopsfables.org/F89_The-Fox-and-the-Hedgehog.html is completely unrelated, though also interesting!)


One thing to note is that Berlin does not consider the fox as all around better than the hedgehog. Usually you'll see that people have a preference for the fox but Berlin considers some great people as hedgehogs such as Plato, Nietzsche, and Dostoevsky. He also stresses the fact that it's merely a metaphor and shouldn't be applied strictly.

The book is absolutely worth a read if you're into the subject.


You captured this well. I genuinely thought this was just something that happened in my company, which has been around for a bit. ;-) I accidentally stumbled upon your technique independently!

A) I’d love to have a coffee with you. Virtual or otherwise!

B) What do you think about alignment of priorities -within- a team? I’ve seen some interesting behaviors and misbehaviors in a team, where initiatives that are both trivial and non trivial die a death of a thousand cuts because of various and sundry plausible reasons. If I peel back the onion on it, it seems like those situations are ones that arise because of a fundamental lack of trust. Would you challenge or support that premise? If supported would you consider external stakeholders’ objections to stem from the same root lack of trust? It seems like we get more “hedgehog” like behavior when we don’t trust each other, and more “fox-like” behavior when there’s better trust and communication.


Great article Adam, fyi the LinkedIn link on your website is broken. Was trying to look up your history.

Also Ctrl F: "beleive" -> "believe"


Thanks - I'll fix that. There aren't too many Adam Gordon Bells out there though.


This is a good framework for identifying a particular type of bad, or useless, feedback. I've added "non-contingent" to my vocabulary.

For example, another type of useless feedback is so general as to be insulting; "this needs to scale" or "it needs to be high quality".

"too general" vs "non-contingent" are nice distinct buckets.


Great read. Also, it inspired someone to post this super interesting comment:

https://news.ycombinator.com/item?id=27468654

I'd be interested to hear your thoughts on that take since I thought it was very insightful.


Great article. I really appreciate how you were able to incorporate Tetlock's findings.

I've been surprised by how resistant sales and marketing people are to Brier scores for their own forecasting, given their interest in delivery estimates from engineering.


Nice article, very relatable to my experiences.

Having been on both sides of these types of discussions, I have a few thoughts:

Advice isn't always as uncontingent as it sounds. An infra person saying that something should probably be done in some overly specific, preachy "best practice" way is sometimes thinking of things that a product person may not. For example, maybe the data guy told you to use WebScaleDB because scaaale, and you chose to use a simple YourSQL thing instead. But it turns out that in the next semester, a metal team you had never heard of is working on chaos testing, and they're making sure WebScaleDB handles datacenter failovers properly (but they don't know about your snowflake YourSQL instance silently chugging along in a forgotten corner of one DC). This sort of stuff can be very tricky to anticipate, especially in large companies with siloed teams. I've found it useful to fully embrace the idea of leveraging technical debt: yes, maybe YourSQL won't scaaale, and maybe it'll die horribly and without explanation when failovers start happening, but if it can carry us to the next point in the evolution cycle, then we can reevaluate our options then, instead of being trapped in analysis paralysis and getting nothing done in the meantime.

As a person giving advice, I feel that I fall in the contingent camp (looking at specifics before giving suggestions), but over the years, I've started to be mindful of cognitive overload: saying "it depends because X, Y, Z" often goes over people's heads, especially when they're already trying to soak up advice from a million different directions. Sometimes, it's better to just take a stance and spit out the TL;DR. If the stance happens to align with "best practices", you can just point at them and people are usually satisfied; if it doesn't align, you can often sway people to understand that there is nuance with a clever enough soundbite: "no, actually you don't want to enforce 100% coverage; full coverage tells you nothing about test quality, uncovered code is what tells you what you're lacking" (or "you don't need WebScaleDB; a billion db rows can be binary-searched in about 30 comparisons"). Even if your dumbed-down advice now lacks nuance, there's always the opportunity to course-correct as the team builds more experience on top of that advice.
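
(If you want to sanity-check the arithmetic in that last soundbite, here's a minimal Python sketch - the billion-row figure is just the example from above, not a real table:)

    import math

    # Binary search halves the candidate range on every comparison,
    # so the worst case over n sorted rows is about ceil(log2(n)) probes.
    n = 1_000_000_000
    print(math.ceil(math.log2(n)))  # -> 30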

Sometimes, you have to be the thought leader and drive the change you want. At my company, for the longest time, every team was suffering the pains of Jenkins. You can't do X because otherwise Jenkins will not be able to handle it, they'd say. We've invested a lot in Jenkins, they'd say. A scaling solution is coming soon, they'd say. My team couldn't wait anymore and we took the initiative to bring in an off-the-shelf third-party solution that had all of the pain points figured out (and then some). This turned out to be a really good call, because just a week after we deployed the new solution, our Jenkins cluster - shadowing at this point - completely gave out due to scale limits. This third-party solution is now what other teams in the company are adopting - including teams that were investing in Jenkins integrations before.


You have to tell us what you replaced Jenkins with!


Buildkite


If you are in a meeting where "thought leaders" are debating idea vs idea – STOP. Don't engage. You will be pulled in; some words or false statements will be very tempting to prove or disprove.

Sometimes you may even be asked to make a decision or commitment on the spot based on just an "idea". STOP right there and don't fall into their trap. They just want their "idea" to win, and then they'll disappear during the execution, leaving you holding the bag. Worse still, if the idea was flawed, they'll refuse to admit it. They'll come back and reinforce the idea, not allowing you to pivot or learn from mistakes. That's the nature of thought leadership – the "thought" matters more than everything else.

All ideas are open and welcome, but you don't take commitments based on just ideas. Ask them to show a spec or concrete doc, and start discussing spec vs spec, detail vs detail, plan vs plan, data vs data or anything concrete. You'll find many of these thought leaders silently disappear into the background then.

They will come back and try to abstract-ify the discussion again before decisions are taken. That's why you set ground rules before the meeting begins, and not when it's happening.

Thought leaders are all nice and fancy, until the rubber hits the road. 100% agree with just this title alone: Don't feed them.


Found this to be a straightforward and interesting article. Not sure I've ever seen someone connect Tetlock's research to engineering planning before; I certainly hadn't made the connection myself. I also appreciated the tip that instead of saying no to something, you can just add it to a "future work" section of lower-priority but definitely-still-very-important-i-promise-really-i-mean-it tasks and everyone will end up happy.


From How to Talk So Kids Will Listen & Listen So Kids Will Talk (1980):

> My husband and I took Jason and his older sister, Leslie, to the Museum of Natural History. We really enjoyed it, and the kids were just great. Only on the way out we had to pass a gift shop. Jason, our four-year-old, went wild over the souvenirs. Most of the stuff was overpriced, but we finally bought him a little set of rocks. Then he started whining for a model dinosaur. I tried to explain that we had already spent more than we should have. His father told him to quit his complaining and that he should be happy for what we did buy him. Jason began to cry. My husband told him to cut it out, and that he was acting like a baby. Jason threw himself on the floor and cried louder.

> Everyone was looking at us. I was so embarrassed that I wanted the floor to open up. Then—I don’t know how the idea came to me—I pulled a pencil and paper out of my bag and started writing. Jason asked what I was doing. I said, “I’m writing that Jason wishes he had a dinosaur.” He stared at me and said, “And a prism, too.” I wrote, “A prism, too.”

> Then he did something that bowled me over. He ran over to his sister, who was watching the whole scene, and said, “Leslie, tell Mommy what you want. She’ll write it down for you, too.” And would you believe it, that ended it. He went home very peacefully.

> I’ve used the idea many times since. Whenever I’m in a toy store with Jason and he runs around pointing to everything he wants, I take out a pencil and a scrap of paper and write it all down on his “wish list.” That seems to satisfy him. And it doesn’t mean I have to buy any of the things for him—unless maybe it’s a special occasion. I guess what Jason likes about his “wish list” is that it shows that I not only know what he wants but that I care enough to put it in writing.


I do this to myself. Every time I want to buy something I stick it on my Amazon wish list. Then I forget about it. I almost never actually buy things on the wish list.


I have now added this book to my Amazon wish list.


That's a great idea. I've unwittingly used similar techniques in the past. Now that I know it's a "real thing," I may have to start using it more. Thanks for sharing this!


That book is full of such incredibly pragmatic tools for dealing with children in a respectful way without giving up on your own values.


Same, except now it's taking photos of the things kids want, instead of writing it down.

(If it's customers, I just write it down)


This is just brilliant!


The book is a must-read. I keep multiple copies to give away to anyone receptive. It has aged very well.


+1 on this. Useful for improving communication with anyone, not just children


I've done this trick fairly often. Like many "tricks" in people management, it works until people figure it out, so you have to be a little careful with it. Nobody likes to feel manipulated.

However, I've found that being really open and collaborative with people helps mitigate the manipulation factor by a significant margin. In other words, you get them to agree that the project is not the highest priority or the highest ROI thing to be working on. You ask: "Given the list of W, X, Y, and Z, and keeping in mind that we only have enough resources to tackle two of these at a time, do you think X is the most important?" and they say "Well, X would be cool but yeah, W and Z would give us the most ROI, so let's hold off on X and Y until we have more time and resources."

The key is to be (or appear) really genuine with this. If it's obvious that you're kicking the can down the road because you don't want to do it, you won't win any friends or influence people. But if you can approach it with "I'd love to do X but the realities of our situation mean that we can't" in an authentic way, then you stand a much greater chance of having both sides walk away with a sense of accomplishment. They feel heard and valued, and you don't have to waste resources on something you don't think is a good idea.

If you can't be authentic about that, then I would just go the truthful route of "This isn't going to happen" and try and just be honest about the realities of the situation. They might feel hurt and rejected, but it's better than them feeling manipulated, IMO.


I take it one further. "Okay, let's sit down together and write out a story that fully defines what you're asking me to do".

Half the time they won't bother. -Your- effort is free, but -their- effort has a cost.

The other half of the time they will, because they care about it, and so it goes into the backlog, and they get to see what stuff takes precedence (and it's a legitimately good faith effort on my part to see it ranked appropriately, and that they feel informed as to what is coming ahead of it and why).


Aka "The Wally Reflector". https://dilbert.com/strip/2005-07-10

Even a small task is usually enough to filter out requests where the requestor is basically trying to move a task from their list to your list.


Careful about saying yes early on, though. Even if they disappear, that won't stop them from telling everyone else that it's now your job.


Nothing stops them from telling others that regardless of what you say. You just make sure that all communication is properly caveated so you have something to point to should it all hit the fan.


Absolutely, I love this technique when the ask is clearly more complicated than they realize. I say let's schedule an hour to write out a spec, my calendar is open. Similar technique is to say yes I can do that but let's do it together on a screen share call and you can guide me through it. It's always a positive outcome -- either they withdraw the request or we collaborate positively together to make it happen and appropriately prioritize.


In a sane situation, you usually have a "product manager" or someone who owns the feature set and the priorities. (If you don't, if the engineers have to answer to multiple people with conflicting desires and demands, that's the first problem.)

But if you have a product manager (and they're doing their job), then all you have to do is tell them the truth. Let them figure out which features are priority, or will lead to the most revenue, or whatever. That's their job.

I was in an Extreme Programming estimating session one time. A particular story came up for our consideration, and several people groaned. Nobody wanted us to do the story, because it was going to be a bear to implement. I said "Just tell them the truth. They'll figure out why this is a bad idea." We estimated six months, and they decided that they didn't want the feature at that price.


> Like many "tricks" in people management, it works until people figure it out [...]

"Good strategy works even when you know it's coming" - something like this from "Sanctuary for all" :) One example of that was mentioned here few times: features need money. And resources and time.

But sometimes features can be crammed into a project without a bigger investment - just talk to the devs and often they will find a way. Sometimes it works perfectly, when the overall architecture is good or extendable. And often it makes a total mess of the codebase. But it costs nothing! ;)


Thanks for reading!

Yeah, the "future work" trick is kind of interesting because it solves the immediate problem and it allows people to feel like you heard their concerns and valued their advice. If that is what they are looking for, then it's a great solution. If they have spotted legit problems, then you need to actually reassess things.

I guess like everything it is very contingent on the environment. It worked in this specific context.


Then you take a vacation, and find that one or more of the shadow banned feature requests have actually been started...


I simply call hedgehogs ideologues. These people have a lens they see the whole world through and that tints everything a single color. It is a mental short-cut.

Some people like to call these mental models or lenses, and say that you should add as many as possible—switch out the green lens for a red lens and see if that makes things look better. And I agree, but I think if you have to consciously make "mental models" you are probably going to struggle to think critically about what the problems are anyway.

The truth is we are probably all hedgehogs at various times without realizing it. The only solution is to be as widely read as possible so that you do not short-cut to a few ideas that may or may not fit the challenge you are trying to solve.


This is an awesome article. Related phenomenon: platformization of a solution before even the first instance of the problem is properly solved. There are a lot of incentives for this in large tech companies for both engineers and managers to show broader impact, but in practice it often results in ill-fitting solutions that impose a tax on everyone else. A good rule of thumb I'd propose would be to require solving a problem three ways before anyone can propose a platform, then extract the best of each approach to build your platform out of and have the three owning teams all sign off that it is the right solution for their specific problem. Of course it's easy to argue against this in the name of redundant effort, but I think it's a small price to pay to avoid over-ambitious but under-qualified architecture astronauts imposing half-baked mandatory indirection whenever they can corner a gullible director.


Makes sense, and a good extension of the Rule of Three: https://en.wikipedia.org/wiki/Rule_of_three_(computer_progra...
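
To make the rule concrete, here's a minimal Python sketch (all the names are hypothetical, not from the article): tolerate the first two near-duplicates, and only extract a shared helper on the third occurrence.

    from collections import namedtuple

    Person = namedtuple("Person", ["name", "email"])

    # First and second occurrence: tolerate the duplication, since two
    # similar-looking copies may be coincidence rather than shared knowledge.
    def report_user(user):
        return f"user: {user.name} ({user.email})"

    def report_admin(admin):
        return f"admin: {admin.name} ({admin.email})"

    # Third occurrence: the duplication is now a pattern worth naming,
    # so extract the helper and migrate the call sites to it.
    def describe(role, person):
        return f"{role}: {person.name} ({person.email})"

    print(describe("auditor", Person("Ada", "ada@example.com")))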


The struggle here is that in software, sometimes hedgehog thinking is correct. The discernment challenge isn't in spotting hedgehog thinking and choosing fox thinking instead. The challenge is in figuring out in what domains hedgehogs are right and in which domains foxes are right. Unlike in the arena of politics, in software hedgehogs reap the benefits of deep expertise.

When are hedgehogs right? In the "obvious" stuff:

- You should always use correct indentation in your code

- Document your architecture decisions. Have conversations in your team to get feedback and buy in.

- When practical, try to keep down the number of languages and databases you use. It'll make onboarding easier and allow the team to build deeper expertise

And so on.

The "relational database advocate" usually isn't making an argument that relational databases are better all the time. They're making an argument that relational databases are the right default, and this particular use case isn't weird enough to justify the cost of learning and deploying something else. They might be right - its extremely difficult to know without taking into account the task at hand and the skills of the team.

My personal take is to think about the values of a piece of software before writing it. How much should your project care about maintainability, or scalability, or accessibility (i.e., onboarding)? Then make your technology choices with that in mind. If short-term team velocity is actually the most important thing, skip automated UI testing. If your team's ability to hire developers in 5 years is really important, use React and not the JavaScript framework of the week.

Bryan Cantrill gave a talk on this a few years ago, talking through the values that different systems encapsulate. It's an excellent talk:

https://www.youtube.com/watch?v=2wZ1pCpJUIM


1996: OOP all the code! 2000: XML all the data! 2004: AJAX all the websites! 2008: JQuery all the browser code! 2012: Nodejs all the website backends! 2016: TypeScript all the JavaScript! 2018: Rust all the system code!


OMG it's in four year cycles. We have six months to sell our startups and get out.


You can see it coming, so you can pivot your startup to be buzzword compliant with the new thing instead of the old thing, or you can make your startup into a consultancy to help people do the old thing that they now feel very behind on.


I remember visiting a potential client who ranted about their existing solution before we started talking about replacing it.

I didn't tell them I'd implemented the existing system whilst working for their previous supplier.


US presidential elections were about technology fads all along!


The first half of that list were just the best options available at the time (in any practical sense). After that you start listing things that aren't nearly universal.


Functional programmers would like to have a word with you...


They're a loud bunch, but there ain't that many of them :p


2016 --> 2018?

Shouldn't that have been 2020, or did you switch to mid-terms there?

If we put "Rust all the system code!" at 2020, we have four years to think about what the next one is going to be.


What's up for 2022? k8s all backends?


I am going to disagree with the author here: while the end result might have been successful for the author, the thought process to get there was flawed. Ironically, the concept he talks about is right on the money, but with incorrect conclusions. In the example he brings up, people with subject matter expertise (Foxes) predict events with more accuracy than those who make confident forecasts (Hedgehogs).

The thing is that the questions/predictions are all around specific/measurable outcomes (I believe that $X will happen if $Y, I believe that $X will happen due to $Z). Asking someone "What do you think of Iraq?" will yield significantly different answers than "What predictions do you have with regards to Iraq over the next 5 years?".

I have noticed one common thing, which will cause scope creep in projects with almost 100% certainty: The shitty question.

In my mind a shitty question when building software can be a number of things:

- Something outside someone's subject matter expertise

- Open ended

- Without timeframes

- No context

I would argue that the problem is not that people are getting feedback from Hedgehogs/confident forecasters and that they should discount/ignore their advice. The problem is people keep asking shitty questions, or questions outside the Fox's scope of expertise. I think that product people/engineers actually need to be asking more questions of people with experience, not fewer, but they need to be good questions. This is a skill that requires more effort than most people think.

Sure, there are Hedgehogs/people who blab on about the newest tech, but not having a feedback loop is how you get disconnects between your users and the product. I have seen this play out in so many different ways, and it's amazing how quickly a product team can become disconnected from reality, even in a small company.


> one common thing, which will cause scope creep in projects with almost 100% certainty

Isn't this a very confident broad stroke forecast of the very type TFA rails against?


> Preventing the raising of objections was called ‘reaching alignment.’

Sounds like they had the wrong idea of what alignment means. It should mean making sure everyone knows what problem is being solved, and focuses all feedback and concerns solely towards whether or not the project is on track to solve it. See the quote later in the article:

> The problem with all the bad advice was that it was unrelated to the problem we were trying to solve.

Yup, there's your problem. There was no true alignment on a goal.

Their solution to punt all objections downhill works, and in many places it might be the easiest answer. But the better answer is to wrestle with objections as soon as they arise, focusing on whether or not they are relevant to the problem at hand. It is harder, but has a better result on all counts.


Hmm, maybe. If someone is making a "universal" suggestion, though, I think it would be hard to "wrestle objections". These individuals probably know all the reasons why you should do X (since that's their go-to advice). Trying to get them to drop their suggestion could possibly backfire.

So when you get a suggestion from such an individual, it seems like an attractive option just to humor them and go build what you wanted to build.


When I started programming professionally I relatively quickly became something worse than a hedgehog. I was so excited about opinionated thought leaders, there was so much to learn and there were all these experts leading the way. But I was basically an overconfident parrot who didn't know what he was talking about.

Then I got experience and deeper insight so I became a hedgehog, because that seemed the thing to aspire to. However not for long, because I started to suffer from the lack of nuance that hedgehogs sometimes have.

Now I'm a confused wannabe fox. I want to be a fox, or rather I cannot be a hedgehog or parrot, but I'm in a continuous state of confusion and doubt and have an impatient urge to know more about everything. When it is required of me I can be pragmatic and clear, but those are snapshots given the circumstance.

One could also say I'm a curious and critical thinker. But that would be a euphemism. I wish I could be a hedgehog and act on it, while having peace of mind.


This is a bang on article and I would say that more people should be less confident in general.

I also love his technology substitutions:

springy search = Elastic

Stoplang = Go

IronOre = Rust

BeetleDB = CockroachDB


YourSQL = MySQL. It's so simple but it made me chuckle for a solid five seconds.


Amusingly, the "My" in MySQL is a person's name.


Probably the sister of Maria or something; presumably named for Little My: https://en.wikipedia.org/wiki/Little_My . (Most people named that in Finland are, directly or indirectly.)


Great technologies, but not very well known. I highly recommend Casey Muratori's videos if you want to learn more. (Big Zinc Oxide advocate.) https://www.youtube.com/watch?v=1HAXgM3mjSo


Yes, I got turned on to IronOre and StopLang from Casey in exactly this video. Great tech!


You forgot bongoDB


Good decoding

> That was our number one secret to scaling when I was at warble

warble = google ?


I would think "Twitter" is a closer synonym.


I would guess twitter


I think I agree with the main thrust of the article but I view it from a different angle.

>they are always talking about a thing, that is the most critical thing in every case.

Yes, the quality person will always advocate for more tests. The safety person will always prioritize safety. The person managing the schedule will always prioritize deadlines. The cost manager will always prioritize the budget. It's inherent to human biases.

However, unlike the author, I think "reaching alignment" is actually pretty important. But I don't think alignment is about "the thing" central to the domain expert's focus, but rather about reaching alignment on acceptable risk.

“What’s really at risk if we don’t meet the standard for test coverage?”

“What’s really at a safety risk if we don’t implement that testing strategy?”

“What’s the risk to the schedule if we miss this deadline because we implemented that extra safety?”

“What’s the risk to the budget if we miss schedule?”

In each of these, if you can reach alignment on acceptable risks it does a lot for the effort. It doesn’t mean any one “thing” has to be a priority in every instance but rather put in the context of overall risk profile. Conversely, if you avoid reaching alignment I’ve worked on teams that will actively subvert the effort because they don’t believe you agree with their “thing” as a priority in any case.

Standards aren’t written in stone but are there partly to guard you against biases. Are you not meeting them because the risk changed to be more acceptable? Or are you just rationalizing some unconscious bias? Explicitly defining risk helps here.

I will say for that to work you need to have people who are willing to openly accept risk. I’ve also worked on teams where that wasn’t the case and alignment could never be reached because nobody wanted to be on the record accepting risk because if they didn’t make a decision they still had some plausible deniability.


The "we saw this problem at X place" always brothers me.

I'm old; I could come up with endless such stories that would require epic solutions, but I'm not sure there would be much gain. Tell me why we will hit that issue and why we can't fix it any other way... not just the vague notion of a problem someone else had for who knows what reason...

And the reason I want that level of detail is because I've failed to accurately predict such issues time and again too ;)


At my last job, my boss was very guilty of "At ____, we found x to always cause us problems", but he never really connected that "x" was a problem because of the context surrounding it, not inherently. We went through this really awful process of:

Manager either demands or rejects an idea based on a totally different company -> I spend days/weeks proof of concepting/documenting/testing/validating for or against his mandate -> He says something like "oh, great!", and then continues to spread the mantra that x would never work for us because of previous experience.

It was really tiring. Having to constantly document why a piece of technology for a startup in a totally different industry would yield totally different results than when he'd used it at a top 5 company with insane scale and a 500 person engineering team. Especially because most of his understanding was just overhearing people say certain things, so he just said them too, but didn't really know why.


Yeah it absolutely can be an endless process.

Human imagination is endless. You can't 'try catch' for everything that has happened sometime, even more so when addressing it at a high level without detail / nuance.

You easily can get nothing done.


> You easily can get nothing done.

That's exactly what we did. A year spent conducting experiments to validate or invalidate positions based in dogma rather than building, iterating and adding value.


There’s no one as dangerous as a new hire from “X,” where “X” is a well-known/esteemed company (FAANG, etc), in their first 3 months at a new firm.


It doesn’t even have to be somewhere notable, beware of the engineer trying to recreate their previous environment (and in yourself too).


In my career I've found this to be a huge huge huge red flag as well. If you can only build software one way, you don't really understand how to build it. If moving to a new employer means replicating a previous employer's practices exactly, I don't think you really lack the knowledge or reflectiveness to understand what was good or bad about it.


* I don't think you really have the knowledge or reflectiveness to understand what was good or not

Really wish I could just edit my comment.


These aren’t thought leaders. They’re insecure employees trying to justify their existence. You see it in most tech companies, particularly after employee 100 joins.

As a leader of an early stage high growth biz, it’s critical to prune the team as these folks emerge. It’s not a happy event, but not everyone is the perfect fit for their current role and sometimes tough changes need to be made.

Not making these changes leads to A players - the innovators who ship - having exactly the experience described in the post. And they tend to leave as a result.


> it’s critical to prune the team as these folks emerge

How do you identify them?


The observation of "foxes" and "hedgehogs" is interesting to me. If it's to be believed, trepidatious granular investigation beats blanket ideological perception in predictive power... I think we're tempted to use our broad lenses to predict things, because we find our worldviews can help us decide how to live in the chaos of our world. The sheer relief of believing "Oh, what I have to do is X," or "I sincerely think the world would be better if Y" gets overextended to fuel a source of pathological comfort. We can lose the present context in it, even if our ideology is actually broadly quite correct.


> because we find our worldviews can help us decide how to live in the chaos of our world

Simple answers conserve brain power.

I think we can keep using simple answers, we just have to apply different simple answers to different situations. Maybe one way we can do this is to collect simple principles in big lists and, when we feel like we might need to change our perspective on something, look through our list and choose a hypothetical principle to apply. This is basically the I Ching / Book of Proverbs, but you have to compile it yourself.
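
A toy Python sketch of that ritual, if you will (the principles here are obviously placeholders for whatever you'd compile yourself):

    import random

    # A personal, self-compiled list of simple principles (placeholders).
    principles = [
        "Keep it simple.",
        "Don't repeat yourself - but only on the third occurrence.",
        "It depends: ask what the context is before answering.",
    ]

    # When stuck, draw one at random and try it on as a lens,
    # I Ching style; discard it if it clearly doesn't fit.
    print(random.choice(principles))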


I agree that there are usually no general solutions but I think there are generally bad solutions out there.

Regardless of what you think of OOP (I think it's overused), ORMs are genuinely a terrible idea for a multitude of reasons, even if a lot of people seem completely oblivious to other options. (And as a result they think an ORM is a great option: if you can only imagine one option, it's automatically great - at least that's how a lot of people seem to think.)


Great article. The only nuance it misses is that not everybody is aligned with the "desired project outcomes". From a business (and often management) point of view, the "desired project outcome" is a product that works according to the company's established standard, delivered in a timeframe that is reasonable according to the company's standard. There are other properties of the outcome hidden by that, such as: How much better could the product work? How much faster could the product be delivered? How satisfying is it for people to create the product?

All those people with their wishes and thoughts which bloat one single project usually have precious insights on how to unbloat the overall processes of creation and maintenance. It's of course impossible to fix it all in the span of a single project, but managing every project with the "it works and was delivered on time" mindset is the best way to tank the overall productivity of the company and lose developers, because you missed the insights they had about how to improve what they do, and do it well, and do it efficiently.


I have a harder time empathizing, because I can see how the QA guy is just pushing what he's been given as a mandate. And maybe the DB guy has the expertise to know better? Especially if there are company standards, there is value in that. Does the DevOps team really need to support 'yet another db' when they've been building tools and support for a standard?

I guess it depends on context.


Tetlock also found that foxes were less likely to be famous because contingent advice is harder to explain in a sound bite.

Yep. Occasionally there will be a complex problem with a simple solution. More often, complex problems have difficult solutions. And unfortunately, if you're in a room discussing the problem and one person gives a simple solution and you try to start a conversation about the complexities involved and resolving sub-problems... well, you could be the one who is wrong, but in my experience that's often not the case; people just favor the easy answer. That said, I have also occasionally seen the simple solution proven right. There's just no one-size-fits-all approach.

Personally, I've been wrong both ways and right both ways in my career. When I get it wrong and I'm fortunate, I also come up with the better solution. Regardless, I've learned to be skeptical of my own simple solutions too.


>Occasionally there will be a complex problem with a simple solution.

Whenever this has happened in my experience, it's come with the monkey's-paw irony that while the solution is simple, the reason why it's a solution, or at least a complete solution, is not. That almost makes it worse - people who aren't intimately familiar with the problem will see the simple solution, think the problem must have been similarly easy to understand, and then think that there must be something wrong with the solution.


> one person gives a simple solution and you try to start a conversation about the complexities involved and resolving sub-problems

A simple “that solution doesn’t seem to address $subProblemA or sufficiently handle $edgeCaseB” would do the trick, eh?


You would think so, and sometimes it does. Sometimes you get looked at as being too negative, and people decide "well, let's give the easy option a shot. If it's wrong we'll adjust." That actually isn't a bad way to go either, as long as there's nothing time/resource critical at stake and an incorrect "easy" solution doesn't make things worse.

I've managed to head off some of those. Had a CEO who favored simple solutions, but would listen to options. And I had a good manager who brought me along to important meetings. The CEO would hear me out. Sometimes they agreed, and things worked out. Sometimes the opposite. Once, though, I did see them cut a Gordian knot in a very simple way -- though for some it was rather painful. I still think it was a drastically oversimplified solution to a problem where there were better options, but others had had a chance at those better options and dropped the ball (basically ignored it), and the problem did have to get solved...


I despise the title "Thought leader" because it is often artificial and as the author wrote, it doesn't necessarily mean very much. It is also used by conference organisers to make their speakers sound more impressive!

Anyway, I think this scenario sounds very much like the "pigs and chickens" metaphor in agile development. An external thought leader is a chicken. They have no skin in the game and their ideas are easy to communicate without really understanding the context.

In many of these scenarios, multiple product teams are supposed to exist specifically to allow you to use whatever the hell DB engine makes sense to your team. The question isn't "is squeaky better than bongoDB" but which is suitable in this scenario? Will it perform well for this project, can it be supported by this team, etc.

Perhaps engineering leadership likes the idea of homogeneity across the entire company ("we use MySQL exclusively"), but is that really necessary? Let's ask the Thought Leaders!


Definitely true that confidence is inversely correlated with accuracy. I've derived this truth from my own interactions, and it's demonstrated by plenty of scenarios and theories.

Confidence (and its cousin, charisma) being dead ends -- that's a profound shift in thinking from the dominant perspective that confidence is a modern leadership quality.


Surely the problem here is the author taking every piece of advice as a requirement?! The person in charge of a project should not owe anyone an explanation for not committing to their advice nor be forced to implement all advice given to them.

Other than this, good article - I'll forever be more conscious of how I give advice going forward!


The article defines "thought leader":

> Uncontingent advice is what I think of when I hear the term thought-leader - someone has a single solution that seems to fit every problem.

Apparently it's a reference to folks who always confidently advocate the same go-to strategies, regardless of context.


One thing that can happen is that you have more control and say over your own work than you realize. Not in every job, but people might be expecting you to be more authoritative than you're behaving. You don't have to go along with your peers' ideas all the time.

It takes courage to exercise this.


This is an organizational failure and one that has been known for decades. If your entire org isn't oriented around the same goal of shipping value to customers, then they will just end up defending their own limited horizontal viewpoint. The testing team wants test coverage? Why do you have a testing team? QA should be embedded in your product team. QA should want to ship customer value in the most responsible way. Security should want to ship customer value. Even your content and marketing teams. They can express opinions, but the product manager can take it or leave it in terms of how it affects the actual value of the product. And quality, security and scalability are definitely components of value. CYA is not.


My favourite personal example of this was a "big project" I was working on in the early 2000's. There were two QA reviewers for my project plan:

One raised "serious objections" to the QA section, saying it wasn't detailed enough, we needed a full test plan, and demanding TDD be used for all development. They cc'ed the CEO with their review.

The second QA reviewer said the test plan was a "good high level approach" and that the only thing they wanted to see was an explicit "design for testability" requirement for one of the critical, and hairy, modules.


Something I like to do when given “non-contingent advice” about a project, is to ask the person making the suggestion to review the published current requirements (functional and non-functional) and constraints, and submit a recommended change.

In decades of projects, I have yet to receive one.

When receiving offhand hedgehog like suggestions from someone important, e.g. the CEO, I tend to ask if they insist we pause and consider it, given that to do so would likely affect the cost and progress of the project.

In decades of projects, I have never been asked to do so.


Oh boy do I hate the term “alignment”. I get the concept of people operating in alignment but giving people veto power (“I’m not aligned”) versus decision power is a disaster.


Glossary:

Springy Search - ElasticSearch

beetleDB - CockroachDB

bongoDB - MongoDB

StopLang - GoLang

IronOxide - Rust

YourSQL - MySQL


Warble - Twitter


Iron Ore - also Rust


Thanks, for some reason I was thinking RubyMine.


The article was great, but this was an annoying set of substitutions. There's a perfectly cromulent way of creating technology-agnostic content: metasyntactic variables like "Foo" and "Bar". Instead, the simple substitutions made half the article a meta-meta-guessing game, first working out which technology the word translates to, and then trying to work out whether the author meant to rag on those specific user groups (some communities have a reputation for being more fanatic than others), whether they were meant as metasyntactic variables, or whether there was some other point in there. I couldn't work out which it was.


In my eyes, the substitutions he used really embiggens the audience of the article.

I come from a EEE background, and "Foo" and "Bar" just don't make sense to me in the way that i, n or f do. Sometimes Foo is a function, sometimes it's a variable, sometimes it's a whole framework.

BeetleDB, on the other hand, is clearly a DB client. It's on par with "Alice" and "Bob" in terms of clarity and minimal mindfucks.


Start a new project with technologies you are familiar with. Try new technologies on projects you are familiar with. Don't start a new project with new technologies.


I would say try new technologies on problems you are familiar with, and focus on the problem space itself otherwise. That is independent of whether this is a new project or not. If my goal is to learn something new, I try to find a project in a problem space I am very familiar with, where I know the pitfalls and rough edges that make things harder than they should be. This gives me a much better understanding of whatever that new technology is about and whether I really want to buy into it...

E.g., I learned some programming languages by implementing a simple binary Usenet reader/client. I know pretty well by now what it encompasses to build that thing, why it is harder to do with some paradigms than others, and I immediately see value whenever something turns out to be more elegant than previously. And if the task gets unpleasant, I can usually state whether that is because of the task or the technology - or, in this case, the language I am using...


Learning is fun though, don't forget to learn new stuff...


This fox/hedgehog distinction seems to be a label that you apply to a person post-hoc, based on whether you think they did a good job of predicting. I watched the talk linked in the article and it didn't seem to help. If somebody says something about, say, TDD and you don't agree, do you mentally just say "oh, this person is a hedgehog about TDD" and then write them off? Doesn't seem like a useful mental framework.


Reading the other comments, hedgehogs look at everything through a single lens. Foxes look at it in multiple ways.

I've definitely met people who look at everything from their own position, and can't imagine how someone else would see the same situation from a different position. Programmer vs user vs manager, for instance.

"This is so much easier to code like this!" vs "This is so much easier for the user to use like that!" vs "This way meets our goals for the quarter!"

They are simply unable to see the other points of view.


Great advice, if every project followed these recommendations I predict we'd see a 200% increase in successful completion of software projects in our industry


One way to filter out people whose input you can safely ignore is to see who stands to lose the most if you do what they ask you to do. People who've been through what you're trying to do are more likely to have their skin in the same game, and are more likely to give you useful, actionable input.

Not everyone who has opinions on your problem is a stakeholder.


> "The more specific and contingent the advice - the more someone says ‘it depends’ or ‘YourSQL works well in a read-heavy context with the following constraints’ the more likely they are to be leading you in the right direction. At least that’s what I have found"

This pretty much summarizes it.


This is a really good piece. I'm sorry that my comment doesn't add much to the discussion, but I just wanted to say that I really enjoyed reading this; it seemed to confirm, or put into words, things I was on the periphery of thinking.


To me this article is less about not feeding them and more about smart expectation management.

Hear everyone out and then prioritize their recommendations in a way that no one feels neglected or ignored. That roadmap trick is really a good way to do it.


Interesting topic but a terribly written article.

The article is impossible to understand without knowing some TV series and cartoons(!), and some pop culture trivia from the US :/

It could be done right with some kind of explanation of the trivia...


I've never watched either of the two shows mentioned and I understood the article just fine. I'm not American either. The cartoon trivia is explained in the same single paragraph it's mentioned in, and The Office trivia is explained in the single sentence that mentions it.


Along a lot of other good things, achieving the appropriate level of nuance is well covered in Adam Grant’s recent book Think Again. Highly recommended, and the author-narrated audiobook is excellent.


About once a year I come across a really great, well written article (IMO). This is one of them. Highly recommend reading this if you are a sr dev or above technologist.


hominem unius libri timeo - I fear the man of a single book


Good article, but needs more blockchain.


The art of saying no without saying no.


> He found that experts could be split into two broad categories, the first of which he called Hedgehogs. A Hedgehog had one big idea like free-market capitalism (or nordic model capitalism or demand-side economics), which they used as a lens to look at many issues. They applied this big idea to every situation, which resulted in noncontingent and straightforward advice. You always need more freedom, nutmeg, and unit tests. Hedgehogs are “Confident forecasters”.

> When all the predictions were added up and scored, hedgehogs lost out to his second category: Foxes. Foxes were the opposite of hedgehogs. They had complicated advice and were skeptical of even their own predictions. Tetlock also found that foxes were less likely to be famous because contingent advice is harder to explain in a sound bite.

This explains so much - unfortunately, because there are no fingers to point and no quick solution. It's just all of us and our stupid brains that always fall for the hedgehogs.


Best advice I've read all year.


What is beetleDB a reference to?


Cockroach, someone suggested. Feels plausible to me.


Sounds like the company where I work. Extreme lack of leadership.


beetleDB is mongoDB clearly.


Don't you think it's "bongoDB" that represents Mongo, and "beetleDB" references Cockroach? If not, why not; why should "beetle" be Mongo?


Collaborator: "Evidence across our organization shows when test coverage is above 80% software quality is typically improved."

This Guy: Shut up, THOUGHT LEADER!!



