OKRs from a development team’s perspective (zafulabs.com)
317 points by thinksocrates on May 24, 2019 | 120 comments



This put my thoughts on OKRs into words better than I ever could. At my last two companies, I've been beating the drum that the closer to an individual level you get, the less useful OKRs are. The backward process of "we know what we are going to do, how do we make it fit the OKR formula?" drives me insane.

Personally, I actually like OKRs at the organization or large sub-org level. I think they are great at getting everyone moving in the same direction and letting everyone know what matters. My problem is that as you drill down into an organization you need to switch from the "Why are we doing this?" and "What are we measuring?" questions and instead ask "Exactly how are we going to do this?", which doesn't really fit into the OKR structure. Personally, I find that once you get down to a group of about five people or fewer, creating a list of tasks is far more effective than OKRs.


Exactly. Individuals are far away from the actual organization goals. In my yearly review I have to categorize my achievements by stuff like "Supporting growth", "Fueling worldwide expansion", "Increasing customer satisfaction" and so on. I am very far away from any of these so either I indirectly support all of them or you could also say I support none of them.

Just filling this out makes it clear that most of the company is about bullshitting each other.


This sums it up well. If all your achievements, with the exception of things like professional growth, were not directly derived from these high-level OKRs, then middle management has failed in its assignment of work.

The hilarious thing where I work is at the end of each week I have to manually enter the hours I worked and I have N charge numbers to split up those hours. If you read the line item for each charge number it's basically a synopsis of one or more OKRs.

All this to say: from an engineer's point of view, it seems that with just a few tweaks in JIRA and Gitlab, review input could be created by running a report on commit history over the date range the review covers.
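A rough sketch of that last idea, assuming a standard git checkout and an invented author email; the real version would aggregate across repos and join in JIRA/Gitlab issue keys:

    import subprocess

    def review_report(author_email: str, since: str, until: str) -> str:
        # List one author's commit subjects over the review period,
        # using standard git-log flags.
        result = subprocess.run(
            ["git", "log", f"--author={author_email}",
             f"--since={since}", f"--until={until}", "--oneline"],
            capture_output=True, text=True, check=True,
        )
        return result.stdout

    # e.g. a quarterly review window
    print(review_report("dev@example.com", "2019-01-01", "2019-03-31"))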


Spotify decided to stop using OKRs for individuals for these same reasons. They wrote about their experience here: https://hrblog.spotify.com/2016/08/15/our-beliefs/


Having worked with hundreds of companies on their OKRs at Weekdone, I've seen many hack OKRs at the individual level and use them "wrong" by having the KRs be milestones, tasks, and projects. Yes, absolutely illegal according to some leading OKR consultants and thinkers. But it works like magic to keep goals in front of people.

Given the choice between giving a developer or designer a visionary KR of improving x or y by z% versus a concrete project, milestone, or task to get done, in many cases the latter keeps them focused rather than spinning their wheels on the % KR.


> Objective – Increase Customer Retention

> Key Result #1 – Lifetime Customer value increase from $N to $N+5

> Key Result #2 – Decrease Customer Churn Rate by 10%

I'm a developer on a team. How am I supposed to know why customers are churning? I'm three levels removed from talking to customers when they cancel. I don't have a deep relationship to know how to add value to them.

Someone needs to do the research, analysis, and leg work of finding potential areas to exploit. And then someone needs to have at least some amount of vision or inspiration about how to solve that problem. Or do whatever trendy new "design sprint feedback loop" brainstorming session someone is tweeting about. This is the kind of stuff I hear designers and "product people" talking about wanting to do.

If you want to do that work and then loop in the engineering team to talk about feasibility and planning that seems fine, but the last time I worked in an environment with OKR no one seemed to have any brilliant ideas about how to, you know, actually achieve the Key Results.

(Until these questions can be satisfactorily answered for developers, OKRs are going to be perceived as a buzzword, a flavor-of-the-month process air-dropped in because someone saw a blog post about it.)


I believe it should work like this:

- Board to CTO : Increase Retention (decrease churn)

- CTO to DevLeads: * Build a daily report emailed to the board showing a 30-day moving average of churn

* Build a business event log - every time a user does something on the system, log it to an easily queryable system (customer signs in, customer raises invoice, or customer deletes widget). This can get very deep very fast. Start with Graphite / carbon and get more sophisticated later. (A minimal sketch follows after this list.)

* hire a data scientist / convert a DBA into one and find correlations between customers who churn and events in the log. "Not logged in for 30 days" looks like a good start.

* Write split testing into the (SaaS) app such that we can randomly segment customers at risk of churning (indicated by event activity) and see if we can keep them

* also add in "have you tried this feature" emails, or "holy crap, what do you mean the invoice page does not align right anymore"

All of these things (for a SaaS app) are doable projects for any development team.
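As a minimal sketch of the event-log bullet above, assuming a carbon daemon listening on Graphite's standard plaintext port; the host and event names are invented:

    import socket
    import time

    CARBON_HOST = "localhost"  # assumed carbon-cache address
    CARBON_PORT = 2003         # carbon's plaintext-protocol port

    def log_business_event(event_name: str, value: int = 1) -> None:
        # Carbon's plaintext protocol is one line per datapoint:
        # "<metric.path> <value> <unix_timestamp>\n"
        line = f"events.{event_name} {value} {int(time.time())}\n"
        with socket.create_connection((CARBON_HOST, CARBON_PORT)) as sock:
            sock.sendall(line.encode("utf-8"))

    # Call these at the points where the domain events actually happen.
    log_business_event("customer.sign_in")
    log_business_event("invoice.raised")
    log_business_event("widget.deleted")

Once events flow in, the 30-day moving average from the first bullet becomes roughly a one-liner in Graphite, e.g. movingAverage(events.customer.churned, "30d"), assuming a churn event is logged as well.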

This is of course based on the idea that the Board has told the CTO "fix this thing as top priority". If they have not, that's their problem. The CTO should then go to the board and say "I am going to fix this thing as top priority".

Then we start the fun job of actually monitoring what developers do work on compared to what we planned to work on. Most times priorities change, legacy weighs us down, and friction burns us.


Or you could just fix all the bugs that are upsetting customers and causing them to flee.

I've been in the situation where the metrics were used at a detailed level. Net Promoter Score (NPS) was used by the company, and I tried, somewhat successfully, somewhat unsuccessfully, to use OKRs at the team level. I agree with everything that has been said about using these to set the overall direction and strategy at a high/medium level, but down on the front lines it's very hard to do anything that will move the needle in the right direction in a clear cause-and-effect way.

If you try to map tasks to a strategic goal, you just end up window-dressing everything to keep management happy, and since the components of goal X are many and varied, your chances of success are limited.


> If you try to map tasks to a strategic goal, you just end up window-dressing everything

If you don't understand how your daily work is related to management's priorities, then the probability is high that you are going to do a lot of work that isn't valuable to the organization.

So, either it should be a trivial exercise to rationalize how a task is related to the objective, in which case this is nothing more than a small overhead of working in an organization. Or if you find you have to make leaps of faith to make the connection to the objective then that's a signal that you need to consider why the task needs to be done at all.


>>> fix all the bugs that are upsetting customers and causing them to flee.

Well yes. I am just posting that it is possible to take a top level metric and build a backlog that represents sensible solutions to that metric.

But you first need

- a top-level metric (preferably one that measures what will make or break your business)

- a way to determine what things under your control drive that metric.

But yes, the things you do in the trenches will probably not move the needle far. That is for two reasons:

- at some point the code base is so big that doing "one thing" won't make an impact (I think this is around the 100k SLOC level, which is still fairly small)

- and even if you can affect the whole code base, the code is at the bottom of an inverted pyramid of "leverage to affect the business" - the CEO can change the business far, far more by deciding to triple the price tomorrow than any bug you fix.

But yes - in the end, if you have a working product right now, the best thing to do is to go find real customers, work out why they are upset (either with clever telemetry analysis or by just fricking asking) and go fix that bug / missing feature.

If you don't have a working product there is no telemetry so ... fricking ask.

"Find what's not working and fix it" is a good plan. If what's not working, however, is "the business model", we are in interesting territory.

I think a non-working business model is exactly the purview of software. I think that within a generation we shall replace all non-coding business people with coders who can do business. But this generation will see real opportunities.


> How am I supposed to know why customers are churning? I'm three levels removed from talking to customers when they cancel.

I'm always amused by where one job ends and another begins. It seems strange to me that every web software company in the world is looking for "full stack engineers" -- you need to be an expert at everything from CPU instructions up to the CSS3 color module and its implementation in the browsers that our users are using -- but doing research and analysis of user behavior is off-limits. That's where we're drawing the line?


It comes down to allocating resources. If your developers are tasked with doing research on churn, what are your product/customer success people doing? Obviously it depends on the size of the team and the product.


Because we're too expensive to waste on something another employee could do. The bitter irony is we're so expensive largely because being siloed leads to wasted effort and rework.


Yeah, this wasn't at all what I thought it would be about, which is the difficulty (impossibility?) of measuring developer impact against key results.

When you're a salesperson, you don't have to agonize over how to illustrate that you've "Decreased Customer Churn Rate by 10%". When you're an assembly line worker, you don't have to find a way to figure out how you contributed to "produce 15% more widgets". You either accomplish these things or you don't - they are literally descriptions of your performance in your job. Either way, nobody cares how you did (or didn't) do it.

When you're a developer, how are you supposed to show that you've helped "increase customer value from $N to $N+5"? Because "shipped the new version of the message queuing system" is not in anyone's list of key results.

OKRs feel like "organized SMART goals", and so I have the same criticism of them as I do of SMART goals: they're just another way of conceptualizing goals around things that are already easily and directly measured. No one has made any progress in quantifying the contributions of roles that don't have direct percentage impacts on dollars earned or dollars saved.


If that's what your OKRs look like as a developer, then your company is doing them very wrong. They're supposed to be hierarchical, becoming more concrete as you go down the line. Your examples sound like top-level OKRs; mine are usually something like "internationalize feature x, launch in Y locales" or "implement feature Z, measure effect on metric A".

Which is not to say the OKR system doesn't still have issues, they just look more like the ones discussed in the OP.


Yeah, if that's how it's supposed to work, then I've not been anywhere that does it right. We're always working on ones like what I mentioned in my post.


Yep, this would be good feedback up the chain. People above you should be working on breaking the company level goals into team and role specific ones.

Edit: just to note that I have some other criticisms of OKRs, but having them be way too high level, broad, and not actionable should not be the problem.


Please share your criticism. I would be happy to get as much perspective on the topic as possible.

The problem I encountered was having, or the notion of wanting, a product roadmap produced from stakeholder, C-level, product, and IT team input in parallel to OKRs. I feel this is an anti-pattern. Either you have OKRs and they make up your quarterly roadmap, or you don't apply OKRs to the producing team at all.


That would be nice. My intuition is that this doesn't happen because the person in the hierarchy who translates "improve customer value by x%" into "internationalize feature x, launch in Y locales" is effectively taking responsibility for showing that the latter impacts the former, which is the problem I find to be intractable.


If you’re not making some effort to measure the value and impact of what you’re doing, then how do you know if it was the right thing to allocate your effort on? Presumably the new message queueing system had some effect on customer experience or developer velocity - can you try to measure that?

It doesn’t have to be perfect, but most teams would have a lot more impact if they spent some of their time getting at least very rough estimates of the impact of their current and future projects.


As a concrete example here, you should be able to translate a new queueing system into something that has business impact.

Maybe the new system requires 75% of the servers the previous one did, leading to lower costs. Maybe it’s quicker, resulting in customers experiencing better service, and so churn is reduced from the pool of people who said “I like it, but it’s too slow”.

A queueing system in itself isn’t of any value to the business as a whole.


Yes, but at some point there's serious diminishing returns on having every layer of the organization forced to rationalize their behavior this way.

If you're an average developer at a non-startup you've likely been asked to put together a queuing system. You didn't decide to do that work, and you probably shouldn't be spending weeks trying to gather the information you might need to justify that work.

Your job wasn't to figure out that 13.2% of your customers opt out of your paid reporting services because they experience slow responses and unrecorded data.

Your job is usually much closer to -

PM - Can we make this service faster? We're losing customers on this feature because it's slow.

TeamLead - Probably, we can re-implement our queuing to be quicker and more reliable.

Dev - Ok, I'll investigate [x] queuing library or service

Then you should be spending your limited time and energy on actually producing that result. The technical task is usually quite complicated (ex: here's just the table of contents for RabbitMQ https://www.rabbitmq.com/documentation.html).

The justification for the work wasn't your job to put together (although asking sane questions is usually a good call). Your justification was simply "My PM/TeamLead asked me for it".

----

I sure as hell don't want every junior dev on my team going out and trying to tease out the intrinsic business value of every task I give them.

That's a waste of my resources. That doubles up the effort that I already expect my PM to be doing. That leads to disagreements about priorities when those junior devs don't have the context about why a business decision was made and either infer it incorrectly, or spend lots of time asking when it really just doesn't impact them all that much.


OKRs are normally set at the team level and above. IMO individual OKRs are an anti-pattern, unless you’re using them for individual development goals (complete this training, etc.)

Determining the business value of your team’s various goals should be your PM’s responsibility, with input and help from your team.

In your example interaction, the only missing piece is a more specific impact estimate. Rather than “We’re losing customers on this feature because it’s slow.”, you’d want your PM to say “If this feature was X% faster, we estimate that it would reduce churn by Y% per quarter, which is worth approximately $Z/quarter to the business.” Your team can then estimate eng cost to make that improvement, and see where the benefit/cost ratio falls relative to the other things you can be working on.
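As a toy worked example of that benefit/cost comparison (all numbers invented for illustration):

    # $Z/quarter from the estimated churn reduction
    quarterly_value = 120_000
    # team's estimate to make the feature X% faster
    eng_cost_weeks = 6
    # assumed fully loaded cost per engineer-week
    weekly_eng_cost = 10_000

    benefit_cost = quarterly_value / (eng_cost_weeks * weekly_eng_cost)
    print(f"benefit/cost = {benefit_cost:.1f}")  # 2.0; compare across candidate projects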


> I'm a developer on a team. How am I supposed to know why customers are churning? I'm three levels removed from talking to customers when they cancel. I don't have a deep relationship to know how to add value to them.

I hope no one is expecting every individual developer to know the numbers on churn, but I do think it's important that someone on the eng team know that number. In my experience, some combination of the product manager and the tech lead for the team should have some insight into how the changes the engineers are making affect the customer.


If churn is a priority from leadership (and it should be, for any decent-sized product), I would expect everyone to understand the factors that play into it. You need all your engineers to understand the strategic goals; they are often the best positioned to suggest ideas for fixing them (or can prioritize fixes that have higher impact on priority areas).


And then you have to do the hard thing, which is listen to the engineers (which is why most business folks are fine with keeping the siloing as is.)


A lot of engineers talk about how business people should listen to engineers.

A lot fewer engineers talk about how they like to listen to business people.


Well, if they don't, they get fired :) And I don't like to talk about the business doing things, just about working this weekend because they didn't listen for the last six months.


Engineers think (sometimes incorrectly) they could do the job of the business people, in a pinch. Business people know they can't do what an engineer does. This dynamic contributes to why business people respect engineering on engineering decisions, and why engineers don't respect business people on business decisions.


I love listening to my engineers. They usually know how to solve something faster and more elegantly than I do.

But I'm someone who will learn enough Node/React/python to understand what can, and can't, be done.


“I'm a developer on a team. How am I supposed to know why customers are churning?”

OKRs are designed to exist in levels, “trickling down” in a way that narrows them down more and more as they reach specific teams and individuals[0].

A goal such as “Increase customer retention” could be a legitimate higher-level objective. From key results on that objective, we get objectives for specific teams working on various aspects of the product.

(For example, “Reduce churn rate by X” likely isn’t going to be just about engineering; copywriters and others may be involved. Down the line at some point there may be an engineer’s personal key result such as “launch customer feedback collection system by X date”, or something else specific to circumstances.)

I believe that wholeheartedly adopting OKRs in this multi-level fashion is helpful even to companies with just a few employees, and how objectives and results are translated across levels is a good measure of management health overall.

[0] Rick Klau talks about it in “How Google set goals” https://youtu.be/mJB83EZtAjc?t=1951 (2013)


To be honest, I've rarely seen this work well outside of Google.

Part of it is how the goals are translated. A business outcome trickles down to a specific technical one -- and a specific team and person -- which on the face of it makes sense, but it's surprising how often pursuing that derived outcome totally loses sight of the big picture.

It's similarly hard to map backwards, which can lead to a lot of the company feeling "mission accomplished", when the objective was still a total miss. That's a painful disconnect to have happen.


That’s a huge problem with the whole idea of providing value to the business. Some people can show real metrics, but a lot of us are several layers away from anything quantifiable, so either you have nothing to show or you have to make up some bullshit metric, as I often see in resumes.


Slightly off-topic, but the whole "business value" thing in IT resumes is a distinctly US thing. Cross the border and you're good with just mentioning what you did, sans the (often imaginary) "impact".


> I'm a developer on a team. How am I supposed to know why customers are churning?

One approach is to keep in touch with customers and find out what their pain points are. Everyone should be interacting with customers to some extent. This could entail helping out with some support requests (escalations), or joining customer calls, going to conferences, etc. Listen silently to sales calls, or support calls with important customers - or be a named and introduced participant. If you have a mailing list or forums, keep tabs on that and help people. You can usually get an intuitive sense of what’s important to customers by interacting with them.

Sure, maybe it’s “not your job” to do these things, but a bit of time spent more than pays off in insight most of the time. If this is difficult to arrange, then the next best thing is to talk to the people who themselves speak to customers, and listen to what they have to say.

This isn’t a replacement for surveys and analysis, but I find it useful to have my own intuition as a human based on interacting with other humans.

It’s also good to be a customer or user of your own product. I strive to be in the position to use the things that I’m working on first-hand myself. If your product has an involved onboarding or setup process, then ideally everyone on the team should have gone through that setup themselves personally.

Most of the product design vision that I’ve developed for products I’ve worked on has come from a studied understanding of the customer problem and interaction with real customers. I think the best way to stay customer-obsessed is to ensure you’re interacting with customers, or are one yourself.


Also what's the point of arbitrary numbers like "10%" and "+5"?

You should obviously increase lifetime customer value to the optimal value (after accounting for the cost of increasing it), and likewise for churn rate.


I think OKRs are intended to mitigate the lack of customer awareness exhibited in this post. If you’re having that hard of a time linking your efforts to your paying customers, that sounds like a problem.


I once worked somewhere that had in-house customer service, and they gave developers the opportunity to spend a couple of days every few months working with a representative to respond to tickets and other first-line support stuff.

It was very valuable. It turns out that listening to people is a good way to figure out what they are frustrated with and what they want to see. Who'da guessed?


If you are a good developer you understand the customer a little.

You are the first line of defense against things that seemed like a good idea until they are implemented. You should pay attention to what you are building; once in a while (not every developer will have this happen) you will see something and go to your boss with a "stop, look at this, we should cut our losses now because it is bad for the metrics".

You as an engineer know what is possible. I've seen several projects go from ideas that management/marketing thought were too difficult to even bring up, to the top must-have feature, when an engineer who understood the customer sat down and wrote it in a couple of days, making them realize their idea was actually easy if only they had asked.


Aside from transparency and common goal, the most beneficial side of OKRs is to allow any individual to ask "does what I'm currently doing move us closer to the KRs?" and "is this the most important thing that I can do to contribute to the KRs?"


Inspired by the book noted below*, I will elaborate a bit on what I think an OKR approach to that would look like. The objective could be something like answering "How am I supposed to know why customers are churning?". Then your key results would be not a metric that says "working on that", but metrics that help you answer the question in a way that ideally proves or solves the puzzle. Many things are solvable, and for those the OKR literature recommends "committed OKRs", while for things that are unclear there are "aspirational OKRs".

Say, for the above: KR A is to test the hypothesis for churning using XYZ analysis method, and KR B is to peer-review that with specialists named Klopvital, Tartirius, and Mochalatet. Then, once you have improved an answer, I would think you have achieved a partial goal - and if that answer is data for someone else, then comes the choreography of things.

. . .

The whole reason for a system of goals is to make the work happen, and what OKR helps with relates to accountability: what is the measure that validates the goal? Of course, the complexity comes when one goal system is chained with another - the orchestration.

Your case, "Someone needs to do the research, analysis, and leg work of finding potential areas to exploit.", partially opens the door to the idea of committed OKRs (the kind of OKRs that one can definitely do; solvable).

On your point "And then someone needs to have at least some amount of vision or inspiration about how to solve that problem.", this seems to be about the idea of aspirational OKRs. In this regard, I agree that many things at the entrepreneurial outset start with a compass heading rather than a precise destination. Certainly OKR is not a system for working through requirements ("do it and that is that"). The OKR approach is built on short feedback reviews of goals, derived from or inspired by Andy Grove's insights about MBO vs. feedback vs. planning, which go roughly: a) point to a roadmap, b) work on it with a temporary plan in an accountable, transparent way, c) review the plan and review the roadmap.

But I hear your strong statement that "the last time I worked in an environment with OKR no one seemed to have any brilliant ideas about how to, you know, actually achieve the Key Results", and I would love to talk with you to understand your experience better. In the book I read, I came upon cases where even the success stories took years to fix their OKR systems.

* Some of the ideas here were inspired by the book Measure What Matters by John Doerr. But of course it's my limited interpretation.


> How am I supposed to know why customers are churning? I'm three levels removed from talking to customers when they cancel. I don't have a deep relationship to know how to add value to them.

Most design decisions you make as a developer affect the value users get from your product, and therefore the churn rate. Sure, you can get more useful data by talking to customers, and you should seek it out. But even lacking that, it's up to you to put yourself in the place of a user and make the most practical decisions available for their benefit that your powers of intuition allow. You don't get to say, "I don't have the best possible data for that choice, therefore it's not my problem." It's still your problem.


I think there's a dual problem here.

First, a developer putting themselves in the place of a user still may not be able to see all of the issues with their product because they are (not sure how to put this better) "too close to the work". It relates to something we talk about here on HN every now and then, namely that "technical people" often miss things that are hard for others because they just can't fathom that the thing that's easy for them could be hard for someone else.

Second, I think that, in a lot of places, management makes it super hard for developers to get any data on problems other than general, abstract numbers with zero specifics included. It seems quite unreasonable to expect a developer who is being stonewalled on data or access to users by another business area to work with "our overall churn rate is x%, fix it" and actually figure out what's wrong and what needs improving.


It's the right attitude in spirit. But like how am I supposed to know how to decrease no-show rates at a clinic? How can I figure out how to increase harvest yields from autonomous tractors? Convert more enterprise sales accounts?

I know that the answer is: spend lots of time talking to customers, doing research, empathizing, etc. But then who is going to write the code while I'm doing all of that? It needs to get done, but I'm not sure why that is the job of the "development team".


One thing with OKRs is that the key results are supposed to be in your control and responsibility. You don't have control over the no-show rate, but someone has responsibility to improve that as part of their own role. If that has made it to your team, then something is probably broken. Most likely that no-show rate is part of an OKR for clinic managers or someone else. They'll come up with ideas (or consult with you for ideas) and your team is responsible for implementing the improvement or running experiments to see if it actually achieves the desired improvement.

That means your OKR will have to do with responsiveness to your customer or addressing the things your customer believes are associated with the poor no-show rate. Is your "patient reminder" system broken and failing to contact patients? Do you lack such a system? That's something you can address and control, so the OKR will have to do with that (server uptime, quality of service, features of the service, timeliness of changes to the service).


But you can potentially help the no-show rate.

- Send an email to the person 2 days before their appointment

- Text them the morning of their appointment

- Build something into the CRM of the front desk that reminds them to call people the day before an appointment

- Send a pre-generated Google directions result to the patient N minutes before the appointment

I could go on for hours here. The job of a developer, at least in the startup world, is to understand the objective of their team and contribute to figuring out ways to reach their objective. It is not just to code up tickets that are put in front of them.
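A minimal sketch of the first two ideas on that list, with invented record shapes and stand-in notification hooks (a real system would read the clinic's scheduling database and call a mail/SMS provider):

    from datetime import datetime, timedelta

    appointments = [
        {"patient_email": "pat@example.com", "patient_phone": "+15550100",
         "starts_at": datetime(2019, 6, 3, 9, 30)},
    ]

    def send_email(to, msg): print(f"EMAIL {to}: {msg}")  # stand-in for a mail API
    def send_sms(to, msg): print(f"SMS {to}: {msg}")      # stand-in for an SMS API

    def run_reminders(now: datetime) -> None:
        # Email two days out; text the morning of the appointment.
        for appt in appointments:
            delta = appt["starts_at"] - now
            if timedelta(days=1) < delta <= timedelta(days=2):
                send_email(appt["patient_email"], "Reminder: appointment in 2 days")
            if appt["starts_at"].date() == now.date() and now.hour == 7:
                send_sms(appt["patient_phone"], "Reminder: appointment today")

    run_reminders(datetime(2019, 6, 1, 9, 30))  # in practice, run hourly from cron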


Interesting that this often doesn’t go both ways, in startup culture. I mean in terms of the non technical team members picking up enough technical skill to better understand the technical side.


I guess?

But I, as an engineer, don't know accounting, or how to set up a healthcare plan, or the intricacies of VC funding documents, etc. Thinking about how people use what you are building and how to make it better seems like table stakes for engineers in the startup world.

Thinking about the product and users is the job of engineers. So is coding a solution. And so is not coding a solution when there is a better option.

I guess if you get to a later stage startup, where you are working on very specific technical problems, you might be forgiven if you don't know what it means to the larger organization, but I can't imagine working in that environment and being happy. A fancy algorithm is cool I guess, but if I don't know how it's moving the business forward it's basically meaningless to me.

YMMV


Those are system requirements and are not what you do with OKRs.


Are they?

I would imagine the Objective is something like "Maximize the number of people we can help at our clinic". And a key result would be "The number of no-shows for appointments is below 5%".

It then follows that the product development teams (product and engineers) would get together and say something like "Great, we can think up 14 projects that might help us reduce the number of no shows. Here they are in order of easiest and/or highest likelihood to succeed to hardest and/or least likely to succeed".

How else would you use OKRs?


You build a mental model of the domain that guides the code you write. Often it amounts to "what would I want in that user's position?" aka the golden rule. As uninformed as that model may be it's better than none, and a place to start iterating on the model. In design it's usually better to be wrong than vague or random. And every line of code you write embeds a freight load of design choices, often as durable as concrete.


"Most design decisions you make as a developer affect the value users get from your product, and therefore the churn rate."

Yes but probably not in a way that matters.

Churn will be decided by a few small, hopefully well-targeted issues (including price and switching cost), and so it's really Product Marketing/Management's job. Engineering's job should be to meet the expectations of Product.


Leadership is hard. Gaining alignment around goals is hard. Even if you state a goal, getting timely execution is really hard. OKRs are great in that they are specific and measurable, so you can have a concrete conversation on if performance is adequate.

The problem with OKRs is goals can be hard to quantify, so it's attractive to simply make OKRs out of things you CAN measure. This is why we get companies that optimize for clicks at all costs, and twitter accounts that buy followers.

If you're thinking about backlog bankruptcy, it's probably because your team is considered to be struggling. If so, it's attractive to think the process of delivering on the OKR is the problem. However, in my experience the real problem is that leadership hasn't done a great job of defining OKRs that properly align teams on the mission at hand.

But good luck telling that to leadership.


Completely agree; this post reads to me like a classic case of a team toiling under leadership that picked OKRs as the measurement system for employees but not for leadership.

How many OKR rollouts are plagued by lack of adoption in the C suites? Who here is shocked to find a system is malfunctioning because employees are trying their best to engage with it, but leadership can't or won't do their part? OKRs in particular are hugely reliant on leadership defining goals for others to align with.

Measurements and progress are cultural. It either starts and continues from the top, or it's defective.


OKRs just feel like waterfall with "metrics" slapped on to make it sound legitimate. The idea that I can predict what I should be working on for a full quarter, and know how to measure it in advance, is inane.

Been working under this system for 1.5 years and I think it's nothing but a detriment. I did research into doing OKRs "right", watched videos, really tried to give it a fair shake, too.


I look at them as a useful way to plan and estimate. It's okay if priorities shift or if you realise something will take longer, or that you actually need to work on something else right now and some KR will be pushed to next Q, etc. And it's perfectly fine not to meet some of your OKRs. I think of them as a compass rather than a paved road.


If it's ok to shift priorities 3 weeks into OKRs and push them to the next quarter why was it important to have them at all? Do you now need to measure the new work that's prioritized?

It's totally fine (and encouraged, in fact) to not pass every OKR, but I don't see any point to them. All of the failures of waterfall that everyone has always been aware of, but with the additional work of stating how you'll measure it, so that finance knows who to pay more.


If you have no idea where you're going, and no metrics at all, how do you reasonably pick any tasks?

So, clearly, you have some goal. It's likely also measurable. If you pick it so specific that you can't stick to it, yeah, OKRs are nonsensical. But "make money" or "increase conversion rate" or "find market fit" are clearly meaningful goals, no?

The difference from waterfall is that OKRs don't prescribe a "how". They describe a desired outcome. Guardrails within which you can be as agile as you want, just get some results.


As I said, look at them as a compass - or maybe a map is a better analogy - this is what I want to achieve at the point in time I'm making my plans, and here's how I'm going to achieve it. As long as that doesn't change, it's useful to break down a vague goal into milestones and evaluate on occasion (my team does it every 2 weeks) how you're progressing.

But it's okay for that to change, and then you just write new OKRs.


"Plans are useless but planning is indispensable."


So far, all companies I've seen that use OKRs operate something like this:

Start of quarter - Management: We need to increase X. Dev: Okay, here's how I'll break that down... (key results)

One month later - Management: Everything's changed... Y is the top priority now! Dev: Okay, that means we should focus on fozbuzzles.

One month later - Management: Oh, man, what we really need to focus on is Z! Dev: Alright, we can do that if we de-emphasize X and Y. Let's get Z done!

End of quarter - Management: You didn't meet any of your OKRs other than Z! Dev: ...but you told me to drop X and Y...

Honestly, I've never seen an OKR that was relevant by the end of a quarter... We finish them or not, but the objectives are always wildly different every few months and priorities change.

Frankly, the start-of-quarter OKR system is incredibly disheartening... None of my major accomplishments or the primary focus of my work ever winds up being recorded, as only the items set at the first of the quarter are evaluated.

I've worked at other places (outside of tech) where the emphasis was on recording what you worked on and why at the _end_ of the quarter. In my experience, it's smoother for everyone. I'm a bit surprised at the focus on quarterly, pre-set OKRs in tech... It seems like a more adaptive process would be better for everyone.


> Honestly, I've never seen an OKR that was relevant by the end of a quarter... We finish them or not, but the objectives are always wildly different every few months and priorities change.

I think there are two types of priorities, "fires" and "nice to haves".

Fires are high priority, unpredictable, and often need to be solved quickly. This is things like "our databases crash every morning" or "our AWS bill is too high and going to put us out of business".

Nice-to-haves can still be needed, but they tend to take a back seat when a fire happens. Additionally, their priority tends to be much more subjective. This is things like "our build process takes too long" or "our deploy process has too many manual steps, which leads to human error".

OKRs tend not to last a full quarter because any tasks/priorities that you can schedule and make fit nicely into a 3-month period are nice-to-haves. I'm not saying they don't matter, but the timeline to get them done doesn't really matter, so things get delayed or moved around.

On the flip side, you can't schedule fires, and you usually can't delay them either. The things that are actually important enough that they can't get delayed tend not to fit the OKR framework.


As an engineer, OKRs are a bit of a conflicting thing, and here's why...

- I don't decide what projects come my way. Product or management decides that.

- My concerns are: "Is the company going to be around for > 6 months?" - because if the answer is yes, then my concern becomes "In 6 months, make sure we don't hit a wall that will be a stop-all-development situation". I've seen it.

- OKRs are always going to be goals of the company. My question will always be: What can eng do to actually affect these OKRs? Are customers leaving due to bugs? Eng will prioritize bugs. Are customers leaving due to lack of features? Eng will build features. Is the company about to run out of money? Eng needs to build as much and as fast as possible and ignore any long-term problems because we need to get customers.

Basically either department heads need to take the current OKRs and transform them into department specific OKRs or they become meaningless. There needs to be engineering OKRs around things that engineering can affect. Not product OKRs which product will affect. Not sales OKRs which sales affects.

Example of a product OKR:

- increase customer conversion rates by 0.5% every 2 weeks for the next 2 months.

Example of an engineering OKR:

- enable product to A/B test (one way to do this is sketched below)

- enable product to measure more data points to make decisions to reach their OKRs
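As a sketch of the A/B-testing OKR above: deterministic hash-based bucketing is a common way to enable experiments without storing assignment state. The experiment and variant names here are invented:

    import hashlib

    def assign_variant(user_id: str, experiment: str,
                       variants=("control", "treatment")) -> str:
        # Hashing (experiment, user) keeps a user's assignment stable
        # across sessions and independent across experiments.
        digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
        return variants[int(digest, 16) % len(variants)]

    print(assign_variant("user-42", "new-onboarding-flow"))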


I've only seen this in large companies, not startups or software shops, but it was never a single layer of OKRs. It was a set at each management level.

The CEO would define the top-level OKRs. Their direct reports would ask themselves, "How can I contribute?", and build a more detailed set of OKRs for their organizations that rolled up to support the CEO's. Every layer of management down the chain did the same. And the individual contributors set personal goals that supported their direct manager's OKRs.

No system is perfect, and this one had its annoyances. But it did tie everything from personal goals all the way up to the top-level corporate goals, which resolves many of the concerns from the article.


This hasn't been the case in my experience at a 200,000-person company. The OKRs I hear about are so far separated from what my teams work on that it is impossible to even pretend our work relates to them.


The trick here is: what happens if the CEO is unwilling to pin herself down to specific goals, for any reason, but especially because she has no goals or is afraid to be seen failing those goals?

Either leadership doesn't engage with OKRs, tasks are deliberately misinterpreted as goals, or unmeasurable goals are set. The rest of the company's OKRs then follow this example, and everyone suffers.


In such a case, you have bigger problems than OKRs.


We started using OKRs this past year. I can't tell you what our current OKRs are. We even create "sub-OKRs" that align with the higher level OKRs. It feels like a pointless exercise, because we do what the author describes: slap labels on items in our backlog that sorta-kinda relate to an OKR.

The author's suggestions aren't terribly complete or practical. I don't know many teams who could just dump their backlog every quarter. You're going to have work that can't be related to OKRs. We do use the OKRs to drive our weekly meetings and sprint planning.

One suggestion I have is just to accept all work isn't OKR related and reserve capacity for that work. If some PM or manager complains, push back.


This is exactly what we do. We reserve about 25% of every team's capacity for "other" stuff, e.g. maintenance. The PMs/leaders have to account for that when they compile their OKRs.


This just means someone somewhere has an unstated objective that things be maintained. Make them write it down, and then your 25% capacity supports that.


The O is clear. What’s the KR for you? (Driving a KPI from one number to another.)


We've been doing OKRs at my employer for a while. It's always really difficult for the development team to come up with ideas because we know that there's a higher than average chance that we'll get pulled to work on something that's unrelated to the OKRs we've come up with; and while everyone says we're not being judged on our OKRs, _someone_ is keeping track, otherwise they're not really useful.

I'll be reading and re-reading this article for a while.


I work at a company that does OKRs as well. I think I've seen both the symptoms you described and the common pattern described in the article of fitting an existing backlog to OKRs.

I think one observation that I have is that OKRs are really a framework to help teams understand what direction/goal to head to and understand progress toward that goal. And really, this mindset needs to be adopted not just by the development teams-- it needs rigor and consistency from leadership as well.

For example, the leaders are accountable for setting direction and describing what they think is important. If they then start asking you for things that aren't tied with an OKR, it's a perfectly fair question to ask leadership "Why are you asking me to work on this when you indicated it's not important to our company strategy?" If the team feels like they're going to be working on something unrelated to their OKRs, that's symptomatic of leadership not sending a consistent message on strategy or prioritization.

I've also seen it from the other side as described in the article. I've seen development teams really struggle to initially understand OKRs and their value. As mentioned earlier, the framework is really designed to help clarify direction and progress-- if a team is ignoring the OKRs and just fitting their backlog to them, that's symptomatic of the team not really trying to understand the direction leadership wants to head.

What I like about the author's suggestion is that it really forces the team to understand the problem the organization is trying to solve and think up solutions how to achieve it. Backlog bankruptcy is one way to do it, though there are likely some items that can still solve the problems outlined in the OKR. Those items just shouldn't automatically be transferred over without verifying they solve a problem that needs to be solved.

Teams shouldn't be fitting problems to work-- teams should be fitting work to problems.


This seems like there isn't buy-in to the OKRs across your employer. My company does OKRs and they're taken seriously by VPs as well as the devs. Here, it's very reasonable to say no to something (and have that be respected) because it's not contributing to your team's OKRs. If a VP or someone insists it needs to be done anyway, then that means the OKRs need to be updated to reflect the nature of this work.


Thanks for another example of how OKRs aren't working for devs. A part of the inspiration to write this was that I noticed most developers feel that something is wrong with the OKR process, but they can't put their finger on exactly what.


I'm on a small dev team in a fast-moving startup. To me, one of the Os is to measure distractions, with the KRs being:

- the amount of time spent on planned work vs. ad hoc work

- the sources of distraction being measured (cross-team requests, ad hoc meetings)


I write software that is used entirely internally by the company. We are building something completely new from scratch that is not yet live. OKRs were recently introduced at our company. The only OKR I can even think of is "get this thing functioning and in production". Just a boolean. I wanted to come up with something measurable, but what is there to measure?

I don't understand how OKRs are appropriate for employees without the power to make business decisions. My only goals are to come into work, do what is expected of me, get paid, not get fired, and go home. I do not have power to decide anything with quantifiable results. If I did, writing OKRs and working towards them would be extremely easy.

If I did have any power to make business decisions one of the first things I would do is make any employee without any power exempt from having to even think about stupid OKRs.


Break it down into milestones. What's the minimum piece you could complete that would be usable by someone (even just a beta user to gather feedback)? What's the next incremental piece that would be useful after that?

You can measure milestones completed, and you can also apply some type of scoring to the feedback you collect from your beta users. If your beta user feedback is "This is terrible, it doesn't do the key thing we need to do for our jobs!", that's actually a great outcome, because you can course-correct early, rather than having to revisit 1+ years of eng work to address the feedback.

> My only goals are to come into work, do what is expected of me, get paid, not get fired, and go home.

OKRs should just be the process of you writing down "what is expected of me" in a somewhat structured and measurable way. Team OKRs of "Hit milestones #1, #2, and #3 in development roadmap of system X" are fine, and are pretty common for greenfield projects. The main key is to have a "definition of done" for each milestone - does the milestone include documentation? Monitoring? Who decides that work is complete?
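As an illustration of that (not any canonical OKR format; milestone names and criteria are invented), writing the team OKR down as data makes the "definition of done" explicit and end-of-quarter scoring mechanical:

    okr = {
        "objective": "Get system X functioning and in production",
        "key_results": [
            {"milestone": "#1: internal beta usable by one pilot team",
             "done_when": ["deployed to staging", "pilot team onboarded"],
             "complete": True},
            {"milestone": "#2: production launch",
             "done_when": ["monitoring and alerting in place", "runbook written"],
             "complete": False},
            {"milestone": "#3: feedback round with beta users",
             "done_when": ["feedback survey sent", "results reviewed with team"],
             "complete": False},
        ],
    }

    done = sum(kr["complete"] for kr in okr["key_results"])
    print(f"quarter score: {done}/{len(okr['key_results'])}")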


Some ideas of what you can measure...

Software is usually written to help reduce effort. Measure effort with and without the software. Reduced effort usually means a better bottom line for the company, which gives you a great positive for your next performance review.

Measure the usual software engineering metrics, e.g. defect count, defects closed, velocity, delivery timing, etc.

I think that everything that we do in a company has a cost for the company. At the most basic level, you and I cost money as the company is paying us money for the work we do. You need to think from that perspective.


My recommendation is to have a lot of meetings to create PowerPoint slides with some metrics that may be remotely related to what you are doing :-)


Would it kill people to expand the acronym on first use?

For example: "Most of the companies I’ve worked for in the last 5 years have used the objectives and key results (OKR) system."

I'm not sure why this common practice in text is seemingly disappearing on the web.


There's even a tag for it (<abbr>). Not that anyone cares about what HTML can do anymore.


My company has been using OKRs for several years now and the author perfectly describes what the engineering team does. We scan through our backlog, slap some nice OKR tags on items that are vaguely relevant, and then ignore the OKRs until the next quarter. The more politically savvy members of the team then spin some BS to make everyone buy that the work engineering is doing actually supports the OKRs.

My personal opinions of OKRs are that it's possibly the most cumbersome form of waterfall without many of the few benefits of waterfall. It's really a tool for making leadership feel good about being bad at their job.


I agree. OKRs are renamed MBOs. This is extrinsic motivation and a giant waste of time. At best, it’s a process-heavy priorities setting/strategizing mechanism. At worst, it causes unnecessary employee anxiety and distraction away from current business needs.

Any measurement that becomes a target (as the goals in OKRs do) ceases to be a good metric. It will be gamed.


MBO and OKRs are not (supposed to be) the same methodology. Their proponents decades back were of opposing views on how to manage.


OKRs could drive some value BUT in my experience they are hard to maintain and update. Most companies keep their OKRs in huge spreadsheets, some even have dedicated people to maintain that insanity.

Teams then work with different tools, e.g. Marketing uses Trello, Eng uses Jira, you name it - and now you have to trace tasks from the different tools back to some OKRs defined somewhere in the cloud... good luck with that.

It's nearly impossible to measure and reconcile progress made on a team/sprint basis with OKRs, so you just shovel random numbers into the spreadsheet and management is happy.

When an OKR is updated, good luck cascading the changes bottom <=> up.

I have yet to find a company that has implemented OKRs in an effective way.

Most of the time managers themselves tell you: "just write something... it does not really matter", hahahah, and there you go... At the end of the day people have to do OKRs, but they keep asking themselves... why?


Shameless plug, but check out my tool for managing OKRs outside of spreadsheets if this is something you're looking for https://simpleokr.com


Do you have a companion tool whose purpose is to convince Excel devotees that modern, domain-specific tooling exists, and that they should stop pushing Excel because it's terrible for many tasks?

My problem is less that no tools exist to manage things outside of spreadsheets, but more that older management tends to take the position of "Jira (or whatever) is complicated and I already know Excel, and I'm in charge so we're using whatever doesn't require me to learn something new".


The problem with domain-specific tools for project management is that everyone has slightly different goals, priorities, and requirements. So you end up with a mess of a tool that barely works (Jira) or one that's missing necessary features. Then someone decides they could do it faster and better in Excel.


we've been looking at gtmhub.com for a minute, are you considering a Jira integration?


OKRs are excellent-- I manage a repo about OKRs where you can see examples and contribute ideas. The article author is exactly right: use OKRs to drive your weekly team meetings.

https://github.com/joelparkerhenderson/objectives_and_key_re...


From the Google examples:

> a team is encouraged to set as goals about 50% more tasks than they are likely to actually accomplish.

> If a team scores significantly higher than that, they are encouraged to set more ambitious OKRs for the next quarter.

I lived through this in Scrum sprints. There is a baked-in incentive to sacrifice some iterations every once in a while to adjust the average expectation, while keeping an overall positive look.

Otherwise, without having used them, it seems to me that OKRs are another rather plain tool that works for clever organisations but fails when applied dumbly. Is there anything specific to them that makes worst-case scenarios better?


Seems this tends to go exactly the way Scrum and Agile did. If done correctly by the whole company it works well. But in most cases it will probably be a mandate by top management and the rest has to come up with some nonsense metrics and objectives and just play a game.

More and more, I think any methodology will work if followed honestly and adapted to real-world problems. In the end it's always the disconnect between what an organization claims to do vs. what it really does. If that disconnect is small, things are good, but in a lot of cases the disconnect is big.


Metric-obsessed management systems often miss on quality.


Or they invent a lot of bullshit metrics because in reality a lot of work is very hard to quantify.


The one place I worked that tried to do this, the developers mostly ended up doing things easy to measure. Turns out it's super-easy to measure views and shares so content marketing it is! Not exactly a good use of our particular abilities but it seemed to make everyone (who was pushing the OKRs) happy. Our real work could rarely be connected to any kind of OKR. Too hard to measure and, "this OKR thing seems cool, we should gather data for at least 2 years on X, Y, and Z for a baseline averaged across projects so we can create OKRs to improve them" didn't fly, obviously. So. Youtube videos and blog posts it was.


If quality is important (it isn't always), then it should be one of the OKRs!


OKRs do force you to put down your company's priorities and the metrics that measure their success. Being unable to do that is usually a symptom of being too tactical and reactive and not having a strong strategy/vision.

Company-level OKRs are only as good as your leadership team's clarity on strategy and long-term direction and conviction to largely stay the course for at least 3 month chunks.

If you feel your company OKRs are bad or you are unable to connect with it, it is because your leadership team has not done the necessary homework to define and socialize it well.

If you agree your company OKRs make sense but you are unable to connect it to your work, think of it this way:

For a functional feature engineer, the team OKRs should clearly and directly connect the features they are working on to the company OKRs. If not, then you need to engage with your product managers to achieve that clarity.

For a senior platform engineer responsible for evolution of tech platforms, it is critical to have a good sense of the direction in which business will evolve and expand. (Annual OKRs and strategy articulations are crucial for this). From this, you should be able to draw out a mind map of the kinds of features and capabilities needed by your platform. Then you can articulate this to your team to build those required enhancements to the platform while also ensuring immediate feature building activity is moving as productively as possible.

If you are unable to connect the dots between the business OKRs and tech platform OKRs, it is usually a sign that there isn't a good functional model for your business domain and your tech platform isn't really a platform that supports that functional model. For architects, this should be the most important deep work – to keep the functional model of the business and that of tech platforms in sync. Without this common model, teams cannot collaborate effectively.


> I think it gives leadership the justification they need to declare that everyone is working towards the same goals, but I don’t think it leads to dev teams actually feeling like that’s true.

The more important thing is whether it's actually true, not whether dev teams (or leadership) feels it's true, no?

But the OP indeed seems to describe an environment where it is not actually true either.

Clearly, thought and planning are needed on how people work, in a way that is directed at actually addressing OKRs. The OP offers suggestions about abandoning backlogs and focusing weekly meetings on OKRs, which seem possibly beneficial but probably not sufficient. I think it probably requires more fundamental shifts in how the whole organization works, which are a lot harder than just publicizing OKRs.


If the items in the backlog aren't related to the OKRs, then it's possible that they should be thrown out with the new cycle like the author says, but I wonder if it could mean that the OKRs need to be changed in certain situations as well. If there are things the teams value in the backlog that aren't part of any OKRs, and the dev teams are in some sense closer to the product than upper management, then that input should be filtered back up for consideration, since the teams "on the ground" might be seeing information that management doesn't have.


In many companies, engineering puts out fires, prevents disasters, gets rid of tech debt, etc. In short, engineering does a lot of maintenance that is needed to keep the entire company afloat. If that work stops, the company will go bust.

It's such an obvious key objective, it should be in every top-level OKR, yet it almost never is, which means that there's nowhere the engineering department can slot in that work. And how do you measure it?


Great point. It could definitely indicate leadership is out of touch with what really needs to be done.


OKRs are merely a mechanism for generating public commitment to goals. You get to have a bit of a say in what those goals are, but fundamentally they're about holding your feet to the fire after you spent a quarter bullshitting on Reddit because nobody in the organization has the foggiest clue about what needs to be done.

Understand OKRs for what they are, and you'll be a much happier developer. Don't treat them too seriously, use them as a productivity mechanism _for yourself_. Now, even if nobody knows what needs to be done, you can refer to your (approved) OKRs and do _that_, whether it makes sense a month after the quarter started or not.

I don't know how it is now, but at e.g. Microsoft in the '00s you'd set your goals once a year. A month later those goals were completely irrelevant to what actually needed to be done. And Microsoft is objectively one of the most successful software companies in the world. Waterfall, agile, OKRs, or yearly goals, it doesn't matter. Reality is always dictated by circumstances. What matters is that every single thing I worked on while there makes money now, and a couple make billions a year.


Man I wish I had read this seven years ago! We tried doing OKRs on my team and it never seemed to work. We ended up just throwing it all away at the end of the quarter and listing what we actually accomplished.

And I think it was because we were doing what the author says and shoehorning our backlog into the OKRs!

I was really soured on OKRs because of it. Now I want to try again doing it “the right way”.


Would love to hear if you're able to make it work by adjusting the details of how it's carried out.


This is interesting. I wonder how much of a depressing effect throwing away the backlog every time will have on contributions from people on the team who have good ideas but are not super vocal, e.g. "what's the point of saying my ideas when they'll just be thrown out next time we do planning?"


I think the hope is that management looks at the backlog and uses that to guide their OKR creation so that there is a meeting in the middle of the upper level OKRs and the backlog.


In our use of OKRs, we were concerned about this as well. Having to use mental gymnastics to make things work is a bad sign.

The way we thought about it was that OKRs should not dictate 100% of effort. OKRs are about prioritizing the changes leadership wants to make at the productivity frontier of the organization. OKRs alone are not enough to direct company efforts; leadership also needs to specify what percentage of energy should be spent on OKRs vs. other responsibilities. That OKR effort goal could be anywhere from 0 to 100% for different teams, or for the company as a whole.

For instance, a company that needs to pivot or will die is probably close to 100% OKR effort on all teams.

This approach gets around the need to assign an OKR to every activity and keeps OKRs more salient and less diluted as a tool for alignment and goal setting.
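
As a back-of-the-envelope illustration (the team names and percentages below are entirely made up, and nothing here is specific OKR tooling), the split can be stated explicitly rather than implied by how many activities you manage to shoehorn under an OKR:

    # Hypothetical capacity split: what share of each team's effort
    # goes to OKR work vs. keep-the-lights-on responsibilities.
    OKR_EFFORT = {
        "growth": 0.80,    # heavy push on company OKRs
        "platform": 0.30,  # mostly maintenance and tech debt
        "pivot": 1.00,     # "pivot or die": all-in on OKRs
    }

    def okr_hours(team: str, weekly_hours: float = 40.0) -> float:
        """Hours per week a team is expected to spend on OKR-tagged work."""
        return weekly_hours * OKR_EFFORT[team]

The point isn't the code; it's that the allocation is an explicit leadership decision instead of something each team back-derives from the OKR list.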


Making decisions based on any single metric is very dangerous and has been the downfall of many products.

You MUST have a deep understanding of exactly what that metric is measuring and all of the things it doesn't take into account.

For a concrete, simple example: an A/B test may show that forcing users to create an account before seeing shipping charges increases order completion by 10%. But then it turns out you've reduced return visits to the site by 30% and decreased overall conversion by damaging that funnel.

Reality is very complex. Trying to boil everything down to a few numbers to base decisions on seems like it simplifies things, but oftentimes you are just ignoring the complexities and flying blind by chasing metrics.
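
To make the account-creation example concrete, here's a minimal sketch (the field names are invented, not any real analytics API) that scores each variant on the optimized metric AND a downstream one, so a +10% completion win can't hide a -30% drop in return visits:

    def funnel_report(sessions):
        # sessions: dicts with keys "variant", "completed_order",
        # "returned_within_30d" (all field names hypothetical).
        by_variant = {}
        for s in sessions:
            v = by_variant.setdefault(
                s["variant"], {"n": 0, "orders": 0, "returns": 0})
            v["n"] += 1
            v["orders"] += s["completed_order"]
            v["returns"] += s["returned_within_30d"]
        for name, v in by_variant.items():
            print(f'{name}: completion {v["orders"] / v["n"]:.1%}, '
                  f'30-day return rate {v["returns"] / v["n"]:.1%}')

Either number alone would mislead you; reporting them side by side at least surfaces the trade-off instead of burying it.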


I saw the creator of OKRs, Andy Grove, speak years ago out at Intel in Oregon when I was doing an internship.

I don't recall the product development group I was in using them. Maybe they were implemented at a much higher level in the company.


I’m so sick of the software industry expecting developers to do the work of the entrepreneurs for them without any of the upside.

Why the fuck should I help the owner get rich by identifying how to grow their business by increasing conversion, retention, etc.? I don’t benefit at all.

At my current company we have weekly meetings where the owners tell us all the business metrics we should be driving and ask for ideas on how to increase them. I’ve learned to say “I don’t have any specific ideas right now.” because it’s insulting they expect me to do their work and get nothing for it.


OKRs at an individual level work fine, but they need to be positioned as individual OKRs.

For example: reduce reported bug count by 10%. Or: increase LOC by 10% without reducing quality (defined as bugs, DRY, PR comments), increase documentation written by 10%, increase Slack karma by 10%, etc.

Obviously, metrics other than the OKR still need to be maintained.

The product owner is responsible for OKRs that span the stories implemented, not ICs, or at least not junior ICs. An architect-level IC, for example, should be able to handle broader scope.
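
If you do go down the individual-metrics road, at least automate the measurement rather than self-reporting it. A minimal sketch, assuming a local git checkout and that matching on author name is good enough (and bearing in mind LOC is a notoriously gameable number):

    import subprocess

    def net_loc(author: str, since: str, until: str) -> int:
        """Net lines added minus removed by one author over a date range."""
        out = subprocess.run(
            ["git", "log", f"--author={author}", f"--since={since}",
             f"--until={until}", "--numstat", "--pretty=format:"],
            capture_output=True, text=True, check=True,
        ).stdout
        total = 0
        for line in out.splitlines():
            parts = line.split("\t")  # "added<TAB>removed<TAB>path"
            # Binary files report "-" instead of counts; skip them.
            if len(parts) == 3 and parts[0].isdigit() and parts[1].isdigit():
                total += int(parts[0]) - int(parts[1])
        return total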


Is there no limit to the bullshit dopey “management consultants” will come up with an acronym for? “OKRs”? Oh, do you mean creating some objective and then working towards it as a team? Measuring progress through some empirical metric? You know, the same process humans have engaged in since the dawn of our species? What a load of garbage.


If you're into OKRs, a friend of mine built a tool open to all: https://general-internet.zendesk.com/hc/en-us/articles/36001...

Interested in feedback.


OKRs seem like a fancy new version of KPIs.

My single biggest issue with KPIs is that, for the most part, in a large company, one (or even several) metric(s) cannot cover the whole story.

They are useful for very specific targets.

I’ve ended up working towards KPIs which I’ve known have zero relevance to improving service because they are (a) poorly defined, (b) irrelevant to the actual problem, or (c) force a metric to exist where no sensible metric can.

Unfortunately this tended to happen more often than not.

OKRs have their place. But they don’t sound like something new (a KPI rehash) and are wide open to misuse, just like KPIs.


Hi Dijksterhuis,

Some organizations do indeed move from KPIs to OKRs. I personally think they are different tools for different purposes, and they could work well together.

I believe KPIs are a great tool to define and monitor your business as usual, whereas OKRs are more about realizing your ambitions and pushing the company further ahead.

If it helps, more info here: https://www.perdoo.com/blog/kpis-okrs-the-goals-that-drive-b...

Best, HJ


> Where OKRs become less useful is within the decision making process for an individual team.

> Joe Cannatti

Anyone else put off by the author quoting himself?


No mention of Key Performance Indicators? Why?


MBO and KPIs are different systems stemming from different philosophies about how to frame objectives and how to measure what matters.


Is everyone too old to remember what we called OKRs before this? I forget what they were called myself...

