This is a great insight. I'd like to offer a slight alternative to "You just kind of have to minimize the wrongness".
I think a CTO should have several mental models for making decisions. Minimizing wrongness is just one.
Some examples:
- Is this decision reversible?
- Does this choice allow my team to grow technically?
- Will this matter a year from now?
- Is this something I can buy instead of build?
- Is this a core competency that we should invest heavily in?
- Does this decision go against the company's values, or your personal values?
- Is quick and dirty good enough for this?
There are several ways to frame the problem to help make a decision. But the OP is totally correct: these decisions will usually be made without all the data, and with a lot of people looking at your decision.
Reminds me of this excerpt from an Obama interview:
> When problems reached him in the White House, he said, it was because they were unsolvable. He generally was being asked to choose between two bad options. “By definition, if it was an easily solvable problem, or even a modestly difficult but solvable problem, it would not reach me, because, by definition, somebody else would have solved it,” he said. “So the only decisions that came were the ones that were horrible and that didn’t have a good solution.”
This is only true where the system at least somewhat works. In other places you get police press conferences where they say "The president/governor has already given us orders to investigate the crime".
How can it be "by definition"? Is the president defined as someone who picks unsolvable problems? Are the subordinates defined as people who solve every single solvable problem?
If you are in an environment like this, you need to empower your engineers to make those choices: it will be more efficient, they'll feel better about the compromises they make themselves, they'll learn more from what happens, and you'll reduce the load on other people, like those seniors, architects and the CTO.
Reminds me of what Jeff Bezos said about Type 1 and Type 2 decisions. Type 1 decisions are one-way-door decisions that need to be deliberated on deeply and consulted on with higher-ups. Type 2 decisions are two-way-door decisions that small teams can be empowered to make.
There is a balance between empowerment and the higher-ups taking responsibility. I've been in an organization wherein the higher-ups abdicated all responsibility and avoided making any technical decision (maybe for fear of making a wrong one), and so the small teams had to make every decision, which led to chaos and a lack of direction.
Sometimes, it is the job of the architect or CTO to make those big decisions; their job is not to code, their job is to weigh the possible options and make a decision to give direction to the team.
> Reminds me of what Jeff Bezos said about Type 1 and Type 2 decisions
There's a cautionary tale buried here. Many seemingly Type 2 decisions are actually Type 1 decisions in disguise. Case in point: Amazon's decision not to allow warehouse workers to have their phones while working in the warehouse resulted in the 6+ deaths and many more injuries that we saw in the tornado last Friday. Now there's no going back, and Amazon may (and should) be held accountable. Pretty grim for a seemingly Type 2 decision.
I don't know anything about the details of Amazon's policy in this instance or the specifics of what happened, but strictly going on what you wrote, I'm not sure that this tragic outcome necessarily means the decision was bad.
There are multiple ways of looking at any decision: perhaps, for example, employees with cellphones were more distracted and thus more prone to accidents.
My eleven-year-old daughter has no cellphone because we don't think being connected at her age is good for her (she also seems indifferent to having one, unlike my son, who is all about being connected). We also let her walk home from school on her own, and have for a couple of years now, because we believe she should be independent. I can imagine (although I try not to) situations in which those two decisions interplay and lead to a bad outcome, but I still think they are the right decisions for her.
This has nothing to do with one-way/two-way door decisions; you are conflating that concept with "unintended consequences" in order to take a cheap shot at Amazon. Ordinarily I'd enjoy that as much as the next person, but this case is too ham-fisted to leave unchallenged.
You're missing the point, which is that even the most mundane decisions can lead to irrevocable negative situations, so one should think about worst-case scenarios with every decision, as engineers are regularly trained to do. Classifying things as type 1 or type 2 decisions can create a blind spot, as it did in this situation, in my humble opinion. In reality, anything can become a one-way-door decision.
The "dunk" is a side effect. Amazon's decision making on this issue does a really good job of illustrating situations where you make a seemingly harmless decision that you feel you could revoke at any time, but things take a turn for the worse and regardless of your ability to revoke the decision, you can't revoke the damage it caused, the preventing of which is the whole purpose of having this system of type 1 vs type 2 in the first place. AKA the type 1 type 2 system is imperfect and can lead to miscalculations, like this. A more useful framework might be "can I prove, convincingly, that there is a 0% chance this decision will lead to irrevocable significant negative consequences, if so then it is type 2, otherwise type 1"
A proper one-way door for your example would be to build warehouses in a way where cell phones don't work. Once people have died, you cannot quickly and cheaply make a change to allow cell phones: you're stuck with the warehouses, and you either have to stop operations and build new ones, or accept the risk that more people will die.
Two-way doors aren't about making bad choices; they're about the response time and cost once you identify that the choice was bad. You can make bad choices through one-way and two-way doors alike.
The best example I can think of for your argument is the Flint water system. What looked like a two-way door was actually a one-way door: the choice to switch water sources completely destroyed the pipes beyond usability, so any water flowing through them would be contaminated, regardless of the source.
In the two-way-door equivalent, the water would have stopped being contaminated once they switched the source back, even though people had already drunk contaminated water.
Not every quote about Bezos/Amazon is an opportunity for an off-topic, virtue-signaling “dunk”.
If they are breaking the law, let's prosecute them. If there is a law you want changed, vote for a legislator who'll back it. If you don't like them, don't buy from them. This isn't complicated.
It sounds like you’re saying that ethical/moral responsibility does not exist outside of what’s required by law in any given jurisdiction and the only venue for change is lobbying legislators? That it’s OK to do anything imaginable as long as it’s not explicitly prohibited by law?
I am saying hijacking a comment on decision making to leverage a tragedy in order to make some thoughtless “amazon bad” claim is kinda weird.
There is no right to your phone at work. Should there be? I dunno, probably not, but if you feel there should be then vote for it.
Did Amazon do “the right thing”? I dunno, but that reply isn't bringing any clarity to that discussion either.
I agree we should have moral/ethical duties. Unfortunately, society has degraded into moral relativism mixed with tribal absolutism, so what that would even mean these days is fraught.
I accept all of your premises but disagree with your argument -- yes the lack of an emergency alert system is a factor, yes cell phones have a built-in alert system that could have handled this, and yes Amazon chose to suppress this alert system from working by denying employees access to their devices. They took away the only existing alert system and didn't replace it with anything. Garden variety negligent behavior leading to deaths.
It's not an argument, and we came to the same conclusion. The case needs to focus on the lack of an emergency system instead of getting thrown out unceremoniously by a judge who says "what's this got to do with cell phones?"
What GP means, I think, is that the decisions that need to be made at the CTO's level are generally going to be tough ones, by definition. This comes after empowering engineers and so on, so the easy ones are handled by them; otherwise the CTO would be swamped.
This is in line with what I was thinking, but the Type 1/Type 2 framing that a child comment outlines enriches my post in a way that I really appreciate.
And, if you can't figure out which is least-wrong, just make a decision, any decision. Don't become a roadblock. If the problem has already gone from junior->intermediate->senior->architect->CTO, it's already taken a whole lot of time and energy. It's software - you can fix almost anything (except certain security, reliability, and financial controls) later if it's wrong, so making any decision is more important than making the optimal one.
I've definitely worked at startups where the CTO was the blocker for the longest time and we'd wait weeks on them when other senior people could have done the work immediately. Try to delegate and trust people as quickly as you can.
The nuance here is that if you're making a decision that has a gross positive value, then you likely have a large number of "wrongness" choices that still result in a net positive. If you're not sure which of them to take, then take any one that has a positive EROI (the ambiguity makes them all appear to have similar returns). Remember that EROI is marginal, so even if you end up in a lossy situation _after_ the decision, as long as it's less of a loss than before, you're net better off than you were.
If you're facing a decision where every option seems to have a net negative EROI, then there is no opportunity at hand. Keep looking for a real opportunity.
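As a toy illustration of that rule, here's a minimal sketch in Python; the options, probabilities, and payoffs are entirely made up, and "EROI" here just means expected marginal return relative to doing nothing:

    # Toy sketch of "take any option with positive expected marginal return".
    # All option names and numbers below are hypothetical.
    options = {
        # option: list of (probability, payoff) outcomes, with payoff
        # measured relative to doing nothing (i.e. marginal return)
        "build": [(0.6, 100_000), (0.4, -30_000)],
        "buy":   [(0.9,  40_000), (0.1, -10_000)],
        "defer": [(1.0,       0)],
    }

    def expected_marginal_return(outcomes):
        return sum(p * payoff for p, payoff in outcomes)

    positive = {name: ev for name, outcomes in options.items()
                if (ev := expected_marginal_return(outcomes)) > 0}

    if positive:
        # Any of these is acceptable; don't stall hunting for the "best" one.
        print("take any of:", positive)
    else:
        print("no real opportunity here; keep looking")

The numbers don't matter; the point is that once several options clear the positive bar, they tend to look similar, and further deliberation has diminishing returns.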
> it's already taken a whole lot of time and energy. It's software - you can fix almost anything
Depending on the problem, it may be you need to experiment, research or call in an external expert. As a leader, you may need to make the call as to which approach but the execution itself should be handled by the problem owner.
There's a joke that in most cases Supreme Court Justices aren't better than flipping a coin: once a case reaches them, it's so fundamentally ambiguous that it can't be resolved any better than by a coin flip.
*There are some things the Supreme Court does around consistency which aren't like this.
This is true for all responsible people; the way I tackle this is to try to predict the future.
Recently I had to implement interpolation in my game engine. Before looking at that, I tried laying out animations differently in memory and noticed a 3x speedup. So when it was time to decide, I took the seemingly cowardly route of exporting animations at 120 FPS instead, hitting disk ×5 and RAM ×2 but saving CPU ×3.
Time will tell if this was the right choice, i.e. whether games will render at >120 FPS. The debt was moved from technical to energy, from complex to simple, and from now to the future (since exporting at 120 FPS takes a little more time for every animation that goes into the game).
I'm going to keep investing in low power because that is more guaranteed than everything else.
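To make the trade-off concrete, here's a hypothetical minimal sketch (the keyframe format and function names are invented, not the engine's actual code): interpolation pays a search-and-lerp cost on every sample, while baking at 120 FPS pays once at export time and turns each runtime sample into a single array index.

    BAKE_FPS = 120  # hypothetical export rate

    def sample_interpolated(keyframes, t):
        # CPU cost on every sample: find the bracketing keyframes and lerp.
        for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
            if t0 <= t <= t1:
                alpha = (t - t0) / (t1 - t0)
                return v0 + alpha * (v1 - v0)
        return keyframes[-1][1]

    def bake(keyframes, duration):
        # Paid once at export: ~BAKE_FPS samples per second of animation,
        # hence the extra disk and RAM.
        n = int(duration * BAKE_FPS) + 1
        return [sample_interpolated(keyframes, i / BAKE_FPS) for i in range(n)]

    def sample_baked(baked, t):
        # Runtime cost is a single index; no search, no lerp.
        return baked[min(int(t * BAKE_FPS), len(baked) - 1)]

    keys = [(0.0, 0.0), (0.5, 1.0), (1.0, 0.0)]  # (time, value) keyframes
    baked = bake(keys, duration=1.0)
    assert abs(sample_baked(baked, 0.25) - sample_interpolated(keys, 0.25)) < 1e-6

Off-grid sample times snap to the previous baked sample, which is where the bet lives: if games end up rendering at well over 120 FPS, interpolation comes back.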
Leadership is hard, and it's hard because you're making Bayesian calculations over suboptimal choices. Almost by definition.
Say a Jr. dev has a problem that they can't solve. They kick it to an intermediate.
The intermediate can't solve it so they kick it up to a senior.
The senior can see a couple of solutions, but wants to run it past the architect.
The architect takes these options, weighs them against n number of considerations + the dev roadmap and sees strengths and weaknesses to all of them.
They package those pros and cons and bring them to you, the CTO, to make a decision between these sub-optimal choices.
In this kind of environment, EVERY move you make is going to be a little bit wrong. You just kind of have to minimize the wrongness.
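For what it's worth, "minimize the wrongness" can be read as minimizing expected loss under your current beliefs. A minimal sketch, with choices, probabilities, and loss numbers invented purely for illustration:

    # Toy sketch: pick the least-wrong option by minimum expected loss.
    # Every option here loses something; the job is to lose the least.
    choices = {
        # choice: list of (probability, estimated loss) under your beliefs
        "rewrite the module": [(0.7, 2.0), (0.3, 8.0)],
        "patch and move on":  [(0.9, 3.0), (0.1, 6.0)],
        "adopt a library":    [(0.5, 1.0), (0.5, 7.0)],
    }

    def expected_loss(outcomes):
        return sum(p * loss for p, loss in outcomes)

    least_wrong = min(choices, key=lambda c: expected_loss(choices[c]))
    print(least_wrong, "with expected loss", expected_loss(choices[least_wrong]))

The probabilities are the Bayesian part: they encode exactly the ambiguity the chain below you couldn't resolve.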