I still see this thing of expertise being overvalued in many organizations, and it's completely toxic. I find it's more common at successful private companies, but it can happen at public companies too.
Typically, you get people who joined 10+ years ago and implemented certain key systems / features when the company was small, nimble, and didn't have too many actual customers to deal with. They probably worked long and hard to make that happen, probably largely without interference. Then the company / product is successful, they get promoted, have kids, and end up being full-time "experts" who write emails and have opinions on present-day implementation details and changes, but are nowhere to be found when it comes to actually implementing anything or dealing with practical, day-to-day problems.
Meanwhile, the more junior, recent joiners who are doing all those things get no say in the direction of what they're working on, and eventually leave to be replaced by the next batch of new joiners.
I was that expert recently, but in a university lab, with undergrads as my juniors... I spent 90% of my time trying to convince people NOT to rip out everything and start from scratch; not to depend directly on database details, not to shove everything into mongodb and react, not to implement a new full stack for every subsection of the project they were working on.
One guy was supposed to modify an android app to refresh every 5s for a simple, one-user management view; he started by setting up a firebase instance.
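For a sense of how little machinery that task actually needed: a one-user view refreshing every 5s is just a timer loop around a fetch. A minimal sketch (Python as a stand-in for the Android code; `fetch_status` is a hypothetical call to the existing backend):

```python
import time

def fetch_status():
    # Hypothetical stand-in: in the real app this would be a single
    # HTTP GET against the backend that already existed.
    return {"queue_depth": 3}

def poll(render, interval=5.0, max_iterations=None):
    """Fetch the status and hand it to `render`, once per `interval` seconds."""
    count = 0
    while max_iterations is None or count < max_iterations:
        render(fetch_status())
        count += 1
        time.sleep(interval)
```

No realtime push infrastructure required; on Android the equivalent is a `Handler.postDelayed` loop or a scheduled coroutine.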
Another decided that rdbms was no good, and spent the entire semester trying to convince me to switch to mongodb, because "mysql can't handle the number of requests we're making" (~50/day, maybe ~1k/day long into the future).
A third guy decided the message queue was a bottleneck, and came to me with a proposal to reimplement it; after about 15 minutes, I finally got out of him that it would take an expected 6 weeks to implement, and EVERYONE would have to stop working in the meantime. Simply looking at the logs, the message queue was clearly not a bottleneck, and there was no reason a replacement couldn't be worked on while the current one stayed in service...
My primary job was just keeping the project from burning down, let alone improving it. And every one of those students thought I was holding the project back, as I desperately tried to maintain a working system.
That might have just been the inexperience of undergrads, but I have some sympathy for your "experts". Everyone's just gobbling up the marketing, confidently pushing ideas derived from inexperience, and no one has any respect for history (at least, in my uni). I was also only one year removed from the project, and it had grown from 2 developers to 20, so not the same scale as you're suggesting.
It can't be overstated how important it is for more senior level members of a team to be aware of this happening. Very often a junior has "great" ideas, but they are based on very little factual data. They are not yet able to see the whole picture, and it is the job of seniors/managers etc. on the team to guide them.
Careful, though. I've seen many "more junior, recent joiners" want to do some pretty risky things with little upside.
Ideally, this is no different than being a parent. You want to help your kids not make mistakes. Thing is, they are the ones taking on the risks, you are not. Offer advice and work with them, but ultimately, you will have to let some of them make mistakes. If you are lucky (and many folks will be), some of the mistakes will turn out as hugely successful gambits. :)
"has opinions on present-day implementation details and changes, but is nowhere to be found when it comes to actually implementing anything or dealing with practical, day-to-day problems."
I see that a lot in my company. These guys have basically stopped learning and give advice based on what they did years ago. Only a select few keep evolving.
In some areas I am at risk of becoming the resident expert myself. I wish I could just hand it off to someone else, but then what do you do yourself?
You tell your boss(es) about your concerns and onboard some new people, and you do something new. Either within the company; find a team that needs people or try to pioneer a new app, or change companies. Or accept being the expert and enjoy a promotion!
I've worked at a few places of vastly different sizes: a company with only three devs, an office with three hundred devs, and a Fortune 500 with over 3000 devs (not necessarily in that order). Maybe I've just been lucky, but at all of those places I've had conversations with my managers along the lines of "team x is doing something I find interesting and eventually I would like to do that, can you help me get there?" or "I have an idea for a project x that would be useful to us, can I block some hours to work on that?". Generally they have been amenable, and if not then I've moved companies. You need to be responsible for your own career, and I like working at places that encourage personal development :)
This used to work for me until around senior level but after moving up a little more (with corresponding salary) I find it harder to move around while keeping the salary.
I don’t know if it’s toxic so much as business as usual in any mature company. A lot of mature SV companies want to be different on this front but they just fundamentally aren’t.
I see it as the default, unless there's a strategy to do something about it.
I firmly believe any company needs to be pruning their employees on an ongoing basis - think Jack Welch's bottom 10%, or investment bank annual tidying before bonus season.
That doesn't mean employers need to be nasty about it, I think you can thank these people for their service, give them a nice severance package, a good reference letter, but you don't need to keep employing them until they reach senility and retire.
Management techniques come and go, but at the core of management is an understanding of systems/process and of human behavior. A really good book on management is Andy Grove's "High Output Management" [1], which to me strikes a good balance. It's a fairly popular book in SV and probably a known quantity to most HN readers. It's also notable in that it wasn't written by some management guru or business prof who's never managed anyone in their lives (i.e. Drucker), but instead drawn from the experiences of a CEO of a significant company (Intel). The blue-collar equivalent is Plain Talk by F. Ken Iverson, former CEO of NuCor Steel. [2] (not the author of APL -- that's Ken E. Iverson :)
More importantly, make sure you are measuring what really matters and not a proxy. I shut down our code coverage builds some years ago when I realized management was using that as a measure - the potential negatives from measuring it are far worse than any possible gain from an engineer improving anything. The measure is still useful, but until I can be convinced it won't be abused, I won't measure it.
What happens when a team's code coverage drops below the mandated minimum? How do different teams' coverage numbers affect their value ranking against other teams? What's going to stop teams from gaming the number with techniques like https://www.pavelslepenkov.info/?p=110 ?
Lots of net-negative consequences can occur when management decides to measure things. Lots of net-positive too, otherwise they wouldn't ever do it. But developer productivity proxies are notoriously hard; I'd question whether any manager trying to make one has ever done or read about Deming's red bead experiment (http://maaw.info/DemingsRedbeads.htm)
They wanted a measure of quality. We are an embedded system where we sometimes have to pay a tech to go out and update customer devices. Last time we had to do this, the recall cost us something like 10 million dollars - and that is just the price we paid the techs to drive to the customers and do the update; it doesn't count the engineering costs to create the fix. As such, not having a recall event is important. (We do have customer-installable updates, but for "reasons" some code cannot be updated by customers.)
The negative is that some people don't believe in thorough tests. They write a few unit tests for things they know are tricky. Then, when their code coverage is low, they get their coverage up by writing "tests" that take all the branches but never actually assert anything. They know all the tricks to sneaking these bad tests in, and the result is that the metrics look good while hiding the fact that the code is not really covered.
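The trick is easy to picture. In this sketch (hypothetical `parse_price` function, pytest-style tests), both tests produce identical 100% line coverage, but only one of them can ever fail:

```python
def parse_price(text):
    """Hypothetical function under test."""
    if text.startswith("$"):
        return float(text[1:])
    return float(text)

def test_parse_price_gamed():
    # Executes both branches, so the coverage tool reports 100%,
    # but asserts nothing, so it can never catch a bug.
    parse_price("$3.50")
    parse_price("3.50")

def test_parse_price_real():
    # Identical coverage, but actually verifies behavior.
    assert parse_price("$3.50") == 3.50
    assert parse_price("3.50") == 3.50
```

Coverage tools only count executed lines; they can't tell these two tests apart, which is why the metric is so easy to game.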
Interesting. Sounds like tests were really needed in this case, but some people didn't want to write them. And the code coverage tools were just gamed. ie, an additional level of oversight would be needed there if you really wanted to ensure good test coverage.
I don't see how code coverage is making the situation worse though, it seems like it just wasn't enough in this case. I guess in that it resulted in useless tests by some people, it's a minor setback, but surely it encouraged others to write more proper tests.
Code coverage wasn't making it better. As one of the developers on the project, I already knew who was writing good code or not (but I couldn't do anything about it).
I've seen teams trying to get bonus points for patching thousands of files with errors. All this was done instead of innovating or writing a better module in a more succinct language that tackles the problem directly. I'd rather see employees finding ways to make sure those errors and infractions can't happen - often requiring different programming languages and tools - instead of just patching the code.
The root cause of the problem is believing that some technique can be successfully applied to all people in all departments in all companies at all times.
Well, good management books do serve a purpose: they provide you templates/mental models to reason critically from. Like anything else, you have to sift through the chaff and translate/apply to your own circumstances, but they can provide incredible leverage.
Most new managers have no idea what has been done before so they make all kinds of expensive and unnecessary mistakes. Having mentors in senior management can help moderate this, but that is assuming the mentors are themselves knowledgeable, which often isn't the case, especially at large corporations with many layers of management.
This is where understanding the management literature really helps. Reading any great book (e.g. Shakespeare) is like spending a few hours picking the brains of an incredible mind that you would otherwise have no direct access to.
Some years ago, I was at a company in a provincial part of the country where good management practices weren't widely known. There was a lack of intellectual curiosity about management within the company, so I had to look elsewhere. Books helped me gain leverage over managers who had managed for years but never thought to look outside their enclaves. I had the opportunity to try things out and refine them in a small-scale setting. I've moved on since, but the knowledge I acquired during those wilderness years continues to be useful and practical in much larger-scale settings.
I don't disagree that they are useful, just that they must be carefully considered and applied in situations where appropriate — not treated as a law that applies to everybody, all the time.
The same thing is true for diets, exercise, self help, etc. Turns out humans are pretty complicated and there is almost never a single solution that works for everyone, all the time.
> It's also notable in that it wasn't written by some management guru or business prof who's never managed anyone in their lives (i.e. Drucker)[...]
From the excerpted chapter of Doerr's book:
> Strictly speaking, however, his “objectives and key results” did not spring from the void. The process had a precursor. In finding his way, Grove had followed the trail of a legendary, Vienna-born gadfly, the first great “modern” business management thinker: Peter Drucker.
I've learned that one can recommend a book or guide without agreeing with everything in it, and Peter Drucker's MBO concepts (insofar as OKR adheres to them) are one of those things I don't agree with. Grove's book also has some clarifications about how OKR is balanced against long-term objectives, which lessens its ill-effects somewhat.
That Grove book is on my TODO list. Thanks for the reminder. As real leadership / management books go, I thought Creativity Inc by Ed Catmull was both interesting and entertaining.
The problem is determining what matters. It sounds nice, but especially at the bigger companies, which measure pretty much everything today, it's one of the reasons there is no creativity. It optimizes for short-term goals (most of the time focused on money/engagement) and discourages long-term goals or riskier approaches. IMO that's how innovative companies turn into optimized earning machines that produce little of use.
Spot on. Problem solving is relatively easy, the hard part is problem identification. As a rule of thumb, The Five Whys is a wonderfully simple (but effective) tool for problem identification.
I am using OKRs with a small team, and the results have been very good. Basically it's a very clear communication tool. When used correctly, the team simply knows the measurable things the company expects of them.
OKRs help with "Mastery", one of the three pillars of the book "Drive: The Surprising Truth About What Motivates Us".
This book, "Measure What Matters", is my next read. Thank you HN.
I recently finished Drive and loved it, and I'm also working with a small team where we recently implemented OKRs. Now working through "Punished by Rewards", and I also really enjoyed Andy Grove's "High Output Management" (in spite of the title) and the original "Flow" book, in spite of its esoteric style.
Would love to hear what else you've read and enjoyed in this area.
I’ve been spending a lot of time on this and really found the book Traction useful. Thesis is that every role has to have some clearly defined objective measurables that can indicate to the employee and manager success. I have not been able to apply it to our developers and engineers but it has been extremely effective in sales (sales development, management and sales ops).
I know within two weeks if a candidate is on track, above or below and can predict growth quite easily and can immediately rehabilitate the situation.
Too often, OKRs seem to be tied to quarterly or bi-annual peer reviews. I find that cadence limiting -- especially in a fast-paced or small team. A compelling alternative is SMART goals, which force you to be concrete about the OKR goals: https://en.wikipedia.org/wiki/SMART_criteria
SMART is an acronym: Specific, Measurable, Assignable, Realistic, and Time-Bound. You can adjust the timing to be a bi-weekly sprint, monthly, etc.
If you read Doerr's book, he specifically said OKRs should not be tied to performance reviews, compensation, and promotion. Also, OKRs should be aspirational.
I'm listening to "Thinking, Fast and Slow" right now. In it, the author talks about the Gates Foundation's small-schools study, among others, as well as bias.
The author claims that small sample sets always produce the most extreme results, and while statisticians know this, measurements continue to happen using smaller data sets.
The author also claims that you could find data to support smaller schools being worse than larger ones due to the same issue.
Great book. It leaves me questioning the 'why' behind everything I think I know.
Careful with drawing too many conclusions from that book. Your take away is pretty ironic, considering that was Kahneman's problem: "I placed too much faith in underpowered studies" [1]
I've worked in companies where everybody knew what mattered, but everybody decided to measure what was easy to measure. So everybody optimized for scoring points on what was easy to measure. A lot of the time it was counterproductive.
Intel has been on the decline for years now. They have tried to innovate new product lines and have mostly failed at almost everything they have tried to do. They are invisible in the mobile space, and soon even Apple won't be using them in their products. They failed completely in IoT (Edison), still don't have a 1:1 competitor with NVidia, and are going to miss out on the self-driving car revolution. They don't really have any kind of cloud or software business. Basically they've been riding on x86, and that gravy train is showing its age. They have recently been doing some cool things with drones, but I'm not sure that's going to last and I fully expect it to fail eventually.
Operational style management is important when you have a product that needs to be tuned and improved incrementally. But that same style of management is ill-suited and emotionally bankrupt for creative types who are the source and inspiration of the product vision in the first place and it shows in Intel’s floundering roadmap.
One size fits all problem management and single minded solutions do not work. A process is a tool but not a replacement for empathy nor thinking.
With regard to the self-driving car revolution, last year Intel purchased Mobileye. Just today it was announced that they closed a deal to include their hardware/software in 8+ million cars. So don't count them out yet.
Not to discount any of your other points, which seem quite on point. Apple surely has been itching to replace them with their A series chips.
I read the book and it is OK. It provides some history behind OKRs but I don't feel it is a practical reference for using OKRs. There are better resources online. I like this introduction to OKRs:
It's a pop business book. You can get the info you need in 1 page (that's how most people learn how to do OKRs on the job), or you can read 200 pages of fluff that mostly serves as PR for the named parties.
To anyone who reads "Measure What Matters," I strongly recommend as an additional book "How to Measure Anything" - I think it'll be really useful in enabling people to measure things that they may have previously assumed to be immeasurable, which means that these things can then be better optimized for and improved.
I worked at a consulting firm that sells this sort of philosophy. A big part of my job was explaining basic concepts from statistics and measurement to MBA types. A big problem with quantitative management approaches is that the people who are in charge of implementing them have weak math/statistics ability.
I used OKRs at Google for years, and I love them. I have never really thought to apply them to my personal life, but I remember people at Google having personal OKRs they published. Having read this, and thought about it outside the work environment, I think I might do the same in my life.
Could someone please help me? I read an article a while ago about how someone in the gates foundation was using bayesdb/bayeslite to evaluate risks of investments. I have tried every search I can think of on hn.algolia.com but it eludes me.
Another data point is anyone who’s familiar with the workings of the Gates Foundation knows it’s a snakepit. Yes, Bill and Melinda mean well and it’s far from Zuck’s scam foundation, but it’s hardly an efficient organization.
If by "laughingstock", you mean of the millions of lives that his foundation has saved from infant mortality, thus giving those folks the opportunity to smile and laugh, then I agree with that sentiment.
Kind of with you here. And Paul Allen is keeping busy [to put it lightly] as well. (Quincy Jones even said he was the only white guy he's met who can play guitar at all. That's high praise.)
I think the challenge here is whether Gates should be offering advice on management, as opposed to business. The rise of Microsoft (prior to Ballmer) was famously due less to Gates' skills as a people / organizational manager than to 1) the company's timely dominance of the desktop and 2) expansion of that dominance into monopoly.
As such, why should Gates' endorsement on topics of management or leadership deserve undue attention now?
I assume the book is supposed to tell me what matters, because based on the blog, it sounds like the idea is to decide what matters and measure that. Well, maybe you have the wrong idea about what matters, and even if you have the right idea, you may have the wrong idea about how to measure it. And even if you have the right idea about how to measure it, you need to interpret it and put it into context. So many turns at which this can go wrong.
If the book doesn't give some really good explanations for these things, it is akin to telling someone to eat healthy: I don't need to be told to eat less sugar and saturated fat, and more leafy greens. Actually implementing that diet in a way I can keep up permanently is trickier than one might suspect; intuitions tend to be way off. Good advice would actually help me with that.