Measuring an engineering organization (lethain.com)
141 points by theptip on Jan 3, 2023 | 69 comments



What should we measure to measure productivity of journalists, of novel writers, of pop song composers? How do you measure the output of an engineer in a clothing factory or a short-run machine tool shop?

"Management" needs to stop thinking that software is an assembly line and that they are managing "workers". The actual workers are CPUs and GPUs - and coders "manage" those - coders are the new managers.

The bullshit jobs people worry about are those managers who think they are arranging and orchestrating coders and if they shuffle them faster they will get better results.

As more and more of the real world is eaten by software, measure that.

I like the comparison to an engineer in a factory.

In 1960, how did we measure the impact of a Detroit factory engineer? "Did the time to turn around the factory line drop this quarter?" That is a high-level, impactful metric that has many inputs and would be a "managerial or executive level" metric with a bonus attached.

Today that same metric works for "Has the turnaround time for a new release of the data pipeline improved this quarter?"

Those are the metrics that will matter. Measure those.


> What should we measure to measure productivity of journalists?

Visits, clicks, read time.

> Novel writers?

Sales, reviews.

> Pop song composers?

Hits.

> "Management" needs to stop thinking that software is an assembly line and that they are managing "workers"

I can't help finding it funny, the thought of telling management, as an engineer, what they should do.

---

Good software engineers should embrace good productivity metrics (like the ones presented) as a meritocratic path to success.

I find there are two distinct sets of metrics for engineers: the ability to deliver, and the ability to create value. Both are important, both are difficult to measure, both are possible to approximate, and both benefit the business.


I am pretty sure we have learnt that visits / clicks or "engagement" are metrics that lead to places we don't like.

Metrics don't exist in a vacuum - they not only tell us where we are, they set guides to directions we want to take.

Also required quote:

"Some one asked me " Bert Bacharach, you have written so many hit songs, what is the secret to writing a hit pop song. I said If I knew that why would I do anything other than write hit pop songs?"


> Visits, clicks, read time.

The fact we measure this way has destroyed journalism as a profession that writes important and accurate stories that inform the public.

It also reveals the problem that locally optimizing for something can reduce global levels of that thing. E.g. a journalist on a site writing clickbait might increase the views on his article while also turning people off the site as a whole, destroying it.


> > What should we measure to measure productivity of journalists?

> Visits, clicks, read time.

> > Novel writers?

> Sales, reviews.

> > Pop song composers?

> Hits.

These are all proxies for revenue. So, you could just measure revenue.

Then you fall into the trap of producing mediocre creative work to try to maximize revenue.


If you're coding for someone else, rather than coding for yourself, why is this a trap?

It might be satisfying to deliver a hyper-optimized end product, using optimal algorithms everywhere, with everything tuned so that it delivers the maximum performance with the minimum resource usage. But is that what your customer is paying for? Or are they paying for a website that works well, and is operational at the end of the month?

Not everything has to be a work of high art. Pop art is fine too.


For one thing, total revenue over some longer time period is probably the actual thing to maximize. Short-term revenue may not be the best predictor of that, but we still need short-term metrics to evaluate performance.


Generate impact, deliver value, get shit done (tm): three core metrics we all know, and yet the antithesis of modern Agile methodology, which requires the above to somehow be broken down further into discrete compute units as measured by the Master.

Like rating a 5-star chef on individual eggs cracked per week rather than gourmet distributed systems delivered and Paxos hungers sated.


It'll surprise you, but 5-star chefs do get rated on individual eggs cracked per week. Because that directly translates into "meals served", which means utilization.

And every single one of those eggs gets evaluated independently - because as a 5-star chef, you're only as good as the last meal you made. A bad review is not a good life event to experience.

If you want to get rated on "got some shit cooked", you might want to try working at the griddle in a local diner instead. (Although... throughput still matters)


>>> I can't avoid to find funny the thought of telling management, as an engineer, what they should do.

Think of it this way - it's fine to "tell your elected representative what to do" - perhaps we should look at executives as an elected function.


One should keep in mind that the word “meritocracy” was mainly popularized by people warning about what a bad idea it is. This seems to have just led to people hearing the word and immediately implementing it.


My perception of the word "meritocracy" is that it's usually used by an autocrat to describe his autocracy. A head of a hedge fund or start-up business might say "Come work for me, you will find it so rewarding, I've created a true meritocracy here." Inevitably that raises the question "Who decides what's meritorious?" The unspoken answer: "I do."


Meritocracies, like most orgs, usually have secret salaries, so you cannot verify that they really are meritocracies.


Well x-cracies are about power, not about money. I believe there are genuine differences in the extent to which organizations are transparent, both about power and about money. But even with full transparency, what is there to verify if there is no way to define "merit" that everyone in the org can agree upon? That lack of agreement gives latitude to the autocrat to use the term "merit" to mean whatever suits him this week.


We need fewer people who measure things at our companies and more engineers who are actually designing and implementing the technology.


"What should we measure to improve developer productivity?" is a decades-old problem for leaders with no clear solution.

There finally seems to be some level of consensus that output metrics like lines of code, # of PRs, and commits, are an ineffective approach.

Lean metrics like cycle time and lead time can be a helpful high-level diagnostic, but they're far from an indicator of effectiveness or productivity.

A new approach being adopted by many organizations is to focus on the actual experiences of developers... the things that slow them down or frustrate them... and turn these into measurements that guide improvement. I'm the founder of getdx.com where we're publishing research on this: http://paper.getdx.com


> "What should we measure to improve developer productivity?"

How about nothing? Honest question. I am sick and tired of being treated like cattle. I've never seen a single convincing argument that any metric of my outputs has mapped to some productivity metric in a meaningful way, but I have absolutely seen them weaponized against me and others to punish and fire engineers. All of these metrics have only ever boiled down to a negative reinforcement mechanism, in my experience.


I'm a developer and am right there with you. But if you're a decades-old corporation with 10,000 engineers, you need some set of signals to help guide improvements to tools and processes, right? This benefits developers, and there should be a set of signals to enable this.


You need objective, measurable signals if you are concerned that developers might be confused or lying. Otherwise you can just ask us. Most people are happy to tell you what the pain points are in their workflow and how they compare to last year's. Maybe you need metrics to quantify specific complaints like "slow" or "crashes a lot." But I feel like most organizations doing this kind of measurement are looking for some kind of Freakonomics "developers think they want X, but what makes them better is actually Y" when they haven't even bothered to ask about, let alone implement, X.


> You need objective, measurable signals if you are concerned that developers might be confused or lying. Otherwise you can just ask us

In an organization with hundreds or thousands of developers, there will be people either lying about how productive they are or genuinely thinking they are performing above average when they are not. It's like how 80% of people think they're above-average drivers.

On the other hand, you may have excellent developers that are overly modest or not loud enough to sell themselves. They are truly exceptional and above the curve and should be rewarded for that. Some of the 20% of drivers that don't think they are above average may actually be above average.

Objective, measurable signals help to find the outliers at the ends of the curve that may otherwise be missed.


It doesn't matter what percentile developer you are to guide improvements to tools and processes. It matters what slows you down.


In healthy orgs you are collecting such metrics to help advocate for investing in tooling, automation, infrastructure, process improvements etc. It can give the business case for building/expanding dev tooling and cloud teams. I’ve found it helpful in past jobs… fully weighted dev salaries are so high, at a moderate sized org a little bit of rough efficiency math can make for obvious investment cases.


> In healthy orgs...

This is, so far as I can tell, ~5% of them.


So how do you deal with bad developers? How do you even measure whether they exist in your organization?

And as much as many of us on HN are developers and like to toot our own horn, some of us suck and don't get better with time. I mean in small organizations this is typically easy to figure out, but in larger organizations it's a problem that can persist for long periods of time.


> So how do you deal with bad developers?

The same way we deal with good developers? With managers, of course. That's the whole point of their role. Every developer productivity metric I've seen speaks to some disparity between engineering and the C-suite. I don't understand where these products originate that seek to do the manager's job for them but worse. I've been in management roles before, the numbers don't and can't always tell you the full story, no matter the size of the team or the quality of the data.

As always, good people are hard to find.


What if the manager is bad? Metrics can help other people identify that a bad manager may not be identifying very good developers.


If the manager is bad, we should address that by discussing metrics to measure manager success, identifying the bad managers, working with them on improving, and firing them if that doesn't help.


> So how do you deal with bad developers? How do you even measure if they exist in your organization.

Ask the developers that work with them? Who likes working with someone that isn't pulling their own weight, or worse, that is just making your own work harder?


Who cares? No company I've ever heard of has been killed by bad developers. The same can't be said for managers, Vice Presidents or the C-Suite.


The big problem with this line of reasoning is that good developers hate working with bad developers. So if you don't do something about bad developers (which might just mean train them, not necessarily fire them), then the good developers will leave.


In that case, the bad developers are easy to spot. The good developers point right at them. There's no mystery nor are measurements needed.


That is circular reasoning: if you know which developers are good, so you can ask them, then you already know which developers are bad.

And asking good developers to identify good developers doesn't work either. Good developers hate working with bad developers, true, but like all other humans they hate working with all sorts of people for different reasons, so they will point out a lot of good developers as well.

Instead, what works is having lots of data from a diverse group of good developers. That way the personal noise gets cancelled out and you are mostly left with the "I don't like bad developers" signal. This isn't trivial data to get, though, especially since people usually choose to work with people like themselves, so a team likely isn't diverse enough to give you this kind of data, and people from other teams aren't familiar enough to give good data either.
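
To make the noise-cancelling idea concrete, here is a minimal sketch under made-up assumptions (the "true quality" scores and the uniform personal-bias model are invented for illustration): one rater's score is noisy, but the mean over a diverse group of raters sits close to the shared signal.

```python
import random
import statistics

random.seed(0)

# Hypothetical "true" quality of two developers on a 1-5 scale (invented for illustration).
true_quality = {"dev_good": 4.5, "dev_bad": 2.0}

def peer_rating(true_score):
    """One rater's view: the shared signal plus that rater's personal noise."""
    personal_bias = random.uniform(-1.5, 1.5)  # pet peeves, style clashes, etc.
    return true_score + personal_bias

for dev, score in true_quality.items():
    single = peer_rating(score)                                     # one colleague's opinion
    many = statistics.mean(peer_rating(score) for _ in range(30))   # average over a diverse group
    print(f"{dev}: one rater {single:.1f}, average of 30 raters {many:.1f}")
```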


You hire 5 developers.

3 of them point at 2 of them and say they are bad.

You fire 2 developers.

Nothing gets done. Turns out the 2 were the good developers.

Congratulations, you succeeded at making another bad metric.


That's just bad statistical analysis.


This is a tautology if I've ever heard one.

If a bad manager keeps only bad developers around it will kill a (software) company just as fast as anything else. It turns out that paying customers like their software to work and be free of data corruption.


I disagree. Under good management, bad developers might progress more slowly than good ones, but they won't outright destroy the product.


Speed == probability of survival for startups. You only have so much runway; better developer productivity means more swings of the bat at finding PMF.


That isn’t what is in contention, though. The mapping of some objective metric to that speed and productivity is what I’m arguing doesn’t make sense.


What we are arguing about is the Chinese room problem coupled with Goodhart's law.

The people who need to interpret the metrics might be professionals who understand their meaning, or they might be an AI following arbitrary instructions. The first group will use the metrics to create a better team, and the second will make people angry.


When you lack good developers stuff like this happens:

https://www.cbc.ca/news/business/sony-cyberpunk-2077-playsta...

Edit with more arguments:

You could of course blame managers for that, but delaying a game because your bad developers aren't capable of delivering isn't really an option either. How many years will it take those developers to get things right? What they would need is to delay the game and replace the developers with good developers, or scale back the complexity of the game to a level those developers can handle.

The company in question chose the latter: they said they will use Unreal Engine for future games instead of an in-house engine. So they admitted their developers aren't good enough.


> "So they admitted their developers aren't good enough."

Build vs. buy is pretty much never a question of developer quality, but a question of ROI. Even with the best devs in the world, building something on par with Unreal (or other commercial engines) is a huge outlay of capital. With relatively little return over a single game, you need multiple hits to pay off the engine.

There aren't a lot of companies that can afford that.

Here's a decent write-up of some of the major issues that actually plagued that project, as opposed to "bad devs": https://blog.devgenius.io/cyberpunk-2077-through-the-lens-of...


> Here's a decent write-up of some of the major issues that actually plagued that project, as opposed to "bad devs": https://blog.devgenius.io/cyberpunk-2077-through-the-lens-of...

All of those issues were there for their last project as well, but somehow they only failed this time, even though they had a much larger budget and a larger team with more developers. The most reasonable explanation is "bad devs", even though that will never be an official reason. When you swell the team size, quality tends to take a hit, and when the old good devs become managers and the new devs try to replicate the work, you often get problems.

That article describes how you need to work if you have bad developers. CDP-R assumed they still had developers as good as they had previously, which wasn't true, and that bit them.


"scale" is a problem. It is not the same problem as "bad devs".

All the indicators point to "scale". Having worked in games for almost two decades and having seen anything from solo dev to teams of hundreds, and the growth in between, my experience leads me to say "scale" as well.

Do you have any indicators for the "bad devs" problem? So far, you're just stating it as if it were categorical truth.


Paying a non-trivial portion of your engineers high salaries will kill any startup without unlimited money, regardless of what other guardrails you have in place.


IMO some ~objective feedback of reality is good though. Probably not easy to set up without having it quickly turn into a negative reinforcement loop indeed. But let's not forget that, with the right supporting culture, it can be helpful (pleasant?) to the dev to have some metrics. I admit "how to do that?" remains unclear.


But if you don't measure them—or if, god forbid, you give them their own offices with doors—they might start to think they're in the professional/upper-middle class, not the middle class!


This doesn’t work, because then your company would have to measure your value by the value you deliver. That would probably mean having to compensate people like you way better, because software engineers can create immense value. Of course this can’t happen, as we apparently live in a universe where only management have bonus based compensation.


Who’s we? FAANG engineers have bonus and stock compensation.


“the things that slow them down or frustrate them”

Whenever management makes a new attempt at getting more precise estimates, I always tell them the only way to get things done is by doing them. So the number one goal should be to remove obstacles, simplify processes, and make sure tools work to our benefit. For some reason nothing much ever changes and the request for better estimates returns. It’s pretty weird.


Anything that is routinely easy to estimate can probably be automated away. And your competition will do exactly that.


Measuring gets a bad rap, but proper measuring can be very valuable to engineers as well.

If the company is only trying to measure developer productivity, they're missing the big picture. Developers will only get blamed when the number doesn't match some hidden expectation.

Good organizations measure a lot of things, such as tech support effort (mentioned in this article), roadmap churn, number of unexpected feature demands that derail forecasts, and other metrics that evaluate the company rather than laying blame on a single group.

Measuring roadmap churn and the impact of unexpected feature requests has been very helpful for me at metrics-heavy companies. When management starts asking why things haven't been shipped yet, it's amazing to pull out a quantified list of all of their unplanned requests, roadmap changes, and other management issues that set everyone back.


I think a lot of the bad rap comes from how the measurements are used - I personally like to know my trends for how many hours/commits/etc I did in different areas during a time period. I use Wakatime[1] in my editor and love checking in every once in a while. However I would never ask everyone on my team to install it and share the dashboards, because then the measurements may be used the wrong way, e.g. "Alice did way more commits than Bob last week, so that's bad for Bob".

As another example, I like using story points/velocity to get a feel for the team, but I don't think you can extrapolate from that data and expect the results to be valid. I think most of the frustration from devs comes from the measurements being used to try to plan out the future when that isn't really possible. IMO you can choose a set of features OR a release date, but definitely not both.

I'm no expert, but I've read a fair amount on different strategies for making timelines for software, and my big take-away at this point is "don't make timelines for software. If you _have_ to, then keep it probabilistic and don't promise anything by an exact date". I think we as devs need to educate the rest of the business that it's a tradeoff between features and release dates. I like the "iron triangle" model, but you can't just add more people to a team and expect things to be done faster (Mythical Man-Month), so it's really just scope vs. release date.

[1]: https://wakatime.com
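
To make "probabilistic" concrete: a minimal sketch of such a forecast, assuming you have historical weekly throughput numbers. Everything below (the throughput history, backlog size, and trial count) is made up for illustration, not a recommendation of any particular tool.

```python
import random

# Hypothetical history: stories completed per week over the last 12 weeks.
historical_throughput = [3, 5, 2, 4, 6, 3, 4, 5, 2, 4, 3, 5]

def simulate_weeks_to_finish(backlog_size, throughput_history, trials=10_000):
    """Monte Carlo forecast: resample past weekly throughput until the backlog is done."""
    results = []
    for _ in range(trials):
        remaining, weeks = backlog_size, 0
        while remaining > 0:
            remaining -= random.choice(throughput_history)
            weeks += 1
        results.append(weeks)
    return results

weeks = sorted(simulate_weeks_to_finish(backlog_size=40, throughput_history=historical_throughput))

# Report percentiles instead of a single date: "50% chance by week X, 85% by week Y".
for p in (50, 85, 95):
    print(f"{p}th percentile: {weeks[int(len(weeks) * p / 100)]} weeks")
```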


They help in the same way that a speed limit helps drivers. Sure, it actually makes the roads safer to some extent. Is there some amount of abuse you have to expect in enforcement? Yes. Is there some amount of unjust rule breaking that will happen? Also yes. Does it massively reduce the fun of driving? Also yes. If everyone just knew what they were doing on the road would it really be necessary? Not really. And do you need them for small, high-trust areas? Not really.


> And do you need them for small, high-trust areas?

OSHA would disagree, and even racetracks have rules.

I think a lot of developers on HN just have a problem dealing with the fact that the rules are going to be applied to them. If you become an owner of a company, your engineering rules go away, but then you have all the regulatory fun of a business owner trying to make a living and sell things at the same time. Working for a larger organization tends to have benefits like steady pay, but in return they ask you to justify your expense, and it's very possible the people asking for justification have no idea exactly what you do.


OSHA is not the ultimate authority, it's just one country's solution to the problem.

I don't have a problem with rules being applied to me, I just see it as dysfunction that any organization gets to the point that they need a significant set of rules. The more rules, the worse the dysfunction, typically. Exceptions include life-critical applications, where the entire existence of the company hinges on stripping human error from the product. But even then, more rules ≠ less problems.

This even works introspectively; it's a bit like fitness tracking. Is a little bit of data helpful? Yes. But who is out there reading into every time they got sick and missed a few thousand steps, or thinking about every utterance and flatulence during sleep and how it signals an overall moral failing needing to be addressed? People with mental disorders, that's who.


Somewhat tangential, but we already have the tech to know when license plates show up at a certain place, and we know the map and the speed limits, so we could completely automate tickets for speeding. If the limit is 60 mph and you show up 10 road-miles away in less than 6 minutes, then you must have sped somewhere along the way, and the owner of the car gets a ticket in the mail.
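
For what it's worth, the check itself is a few lines of arithmetic. A toy sketch, with entirely made-up camera sightings and a hypothetical 60 mph limit:

```python
# Toy illustration of the average-speed check described above.
# Plate sightings are hypothetical (camera_id, miles_from_start, unix_timestamp) tuples.
sightings = [
    ("cam_A", 0.0, 1_700_000_000),   # car enters the measured stretch
    ("cam_B", 10.0, 1_700_000_330),  # seen again 10 road-miles later, 330 seconds on
]

SPEED_LIMIT_MPH = 60

(_, mile_a, t_a), (_, mile_b, t_b) = sightings
hours_elapsed = (t_b - t_a) / 3600
average_mph = (mile_b - mile_a) / hours_elapsed

# If the average over the whole stretch exceeds the limit, the driver must have
# exceeded it at some point (the "10 road-miles in under 6 minutes" case above).
if average_mph > SPEED_LIMIT_MPH:
    print(f"Average speed {average_mph:.1f} mph exceeds the {SPEED_LIMIT_MPH} mph limit")
```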

In the short term I think most people would hate this, but then I think we need to choose between raising the limit or following it. We all make the laws, so we can raise the limits, and this approach completely removes the potential for abuse or playing favorites. As an added benefit it wipes out a lot of what cops spend their time on, so if you don't like cops we could justify having far fewer, or if you do like them it frees up their time to focus on more important things.

I also recognize this opens up a huge potential for abuse by the govt having all this data, but we already have it for tolls so we might as well use it for good - having human police officers handle stuff like this is a huge risk for abuse and a waste of time.


That would still be unjust, since there can be heavy traffic at the end, which means a speeder may still have driven, on average, within the speed limit. The bottom line, though, is that if you get too draconian with the enforcement, people will revolt. There is definitely such a thing as too much structure... that's arguably the whole reason the US was created.


> We all make the laws so we can raise the limits

How democratic is your country? Will the limits be raised, or will your government take the extra revenue and keep things unreasonable?

You will find that people have many different answers for those.


Could you share some specific tactics/metrics you've defined to measure things like roadmap churn or unexpected feature demands?

This problem definitely resonates with me, and I have some very primitive ideas of tracking (i.e. tagging tickets that fall into these buckets), but I would love to learn from your experience if you're willing to share!


Heh, I like how you were downvoted. I think a number of people here are a little bitter about their own negative experiences in companies and don't have enough self reflection to admit it's possible they could be part of the problem.


You need to have a vision, then measure time of being blocked. If you're blocked, you're not progressing towards the vision. If you have a team of sufficiently competent people, you should expect to achieve your vision eventually.

The problem with many organizations is that they are progressing without a vision and have no idea when progress gets blocked. Since nobody really measures blockage, workers typically drop the thing that got blocked, go chase metrics on something else, and don't tell anyone.


Here's a related article from one of the general partners at Coatue. The article covers a set of metrics, including DORA and SPACE, as well as productivity accelerators (and drags) that can help companies as they scale up.

https://www.notion.so/coatue-external/Scaling-Up-Engineering...


This topic is so close to my heart; just yesterday I posted about my new project which measures these metrics (don't worry, I will not plug it here!). My take is that your measurement should give you some actionable insight, and you should not measure absolute values. Take incident count, for example: it looks like a useless metric to me because I can't tell what a high or low value means, and I would also need to look at team size to get anything useful out of it, which would not be very useful either. The same goes for lines of code: would 4 lines of Lisp mean I am less productive than 20 lines of C?

Instead, measure the ratio of incoming requests to outgoing (resolved) ones: if this ratio is approaching or greater than one, you will not be able to execute on your current plan, since you are not disposing of issues in time. Your team is in the danger zone, and there is a risk of team burnout and of non-delivery of the project. This is also useful because you can go to the management team, tell them you need additional headcount, and present this data; you are likely to get your request approved since it is backed by data.
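
Reading "incoming" as newly opened issues and "outgoing" as issues the team disposes of, a minimal sketch of that ratio might look like this (the issue records and the date window are made up for illustration):

```python
from datetime import date

# Hypothetical issue-tracker export: (opened_date, closed_date or None if still open).
issues = [
    (date(2023, 1, 2), date(2023, 1, 5)),
    (date(2023, 1, 3), None),
    (date(2023, 1, 4), date(2023, 1, 9)),
    (date(2023, 1, 6), None),
    (date(2023, 1, 8), None),
]

def backlog_pressure(issues, start, end):
    """Ratio of issues opened to issues closed inside a time window.

    A value at or above 1 means work is arriving at least as fast as the team
    can dispose of it -- the danger-zone signal described above."""
    opened = sum(1 for o, _ in issues if start <= o <= end)
    closed = sum(1 for _, c in issues if c is not None and start <= c <= end)
    return opened / closed if closed else float("inf")

print(backlog_pressure(issues, date(2023, 1, 1), date(2023, 1, 31)))  # 5 opened / 2 closed = 2.5
```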

The same goes for other metrics as well. Your metrics should be related to your goals; they should clearly tell you whether you can achieve your goal and how near or far you are from it.

Last but not least, don't forget about the human touch, since you are managing humans. Metrics should just be the starting point for digging deeper to solve the organizational issue; they should not be the final goal.


What does good team productivity look like? ROI?

What kinds of metrics are used for planning and operations? Productivity sinks?

I view engineering teams as a collection of highly driven professionals whose productivity levels are, by default, at 80% of max daily.

Their productivity is impacted by bad planning, bad developer experience, bad documentation, meetings without agendas, 1:1s with no support or actionable feedback, individual recognition instead of team recognition, etc.

It's the engineering manager's job to remove the things that impact team productivity, measure the impact, report the issues affecting productivity, and plan to make them not happen again in the future. These are external to the team, things soaking up their time and making them stressed.

When you try to measure an individual's productivity in any role, the measuring itself impacts productivity and should be removed to allow them to do their best work.

Use surveys to gather feedback from teams and allow teams to weed out, or help to reduce, internal productivity blockers.

And please communicate this to all members of a team; it's one of the primary internal-to-the-team stress sources I've experienced:

role != authority

Roles are a collection of responsibilities, they do not grant additional authority.

From my experience, what I've mentioned above would certainly make my day-to-day a lot less stressful.


It’s a good article. Key points for me:

- measure teams not individuals

- stick to measurements related to planning and operations

- implement one measurement at a time

I didn’t see it in the article (yes I skimmed) but I’d add that it’s beneficial to develop measurements in conjunction with engineering teams.


My only contribution to this is that I strongly believe you should only be measured on a system you built from scratch - inheriting someone else’s system and then being measured on it is unfair.


Why do you believe that? If the goal were to measure your individual programming skills in a vacuum, then your recommendation makes sense.

However, software engineering in a larger organization has a different goal -- it's a team sport of building & operating software to serve the business purposes of that organization.

With this goal in mind, one's ability to build upon & maintain an inherited codebase is definitely relevant to success.


Another day, another management article with a lot of opinions and no research. I find https://en.wikipedia.org/wiki/McNamara_fallacy and https://en.wikipedia.org/wiki/Surrogation to be better starting points on the dangers of organizational measurement.

> The first step is to measure whatever can be easily measured. This is OK as far as it goes. The second step is to disregard that which can't be easily measured or to give it an arbitrary quantitative value. This is artificial and misleading. The third step is to presume that what can't be measured easily really isn't important. This is blindness. The fourth step is to say that what can't be easily measured really doesn't exist. This is suicide.

> Managers tend to use measures as surrogates for strategy, acting as if measures were in fact the strategy when making optimization decisions. This appears to occur even if a measure-maximizing choice ultimately works against the strategy.

For those interested in history, there's a lot to learn about metric abuse from the Vietnam and Iraq Wars. For example:

> On October 27, 1967, The Wall Street Journal ran an un-bylined blurb from its Washington, D.C., bureau on the front page talking about a “victory index.”: U.S. strategists seek a “victory index” to measure progress in the Vietnam War. They want a single statistic reflecting enemy infiltration, casualties, recruiting, hamlets pacified, roads cleared. Top U.S. intelligence officials, in a recent secret huddle, couldn’t work out an index; they get orders to keep trying. (from: https://www.theatlantic.com/technology/archive/2017/10/the-c...)


I agree with most of the sentiments in the article and am always keen on learning new meaningful, i.e. not individual but high-level, measures. The notion that "measuring something is better than nothing" is not helpful, I feel. You should not measure what is merely easy to measure, but what is actually important and gives real business insight. This, as the article states, is typically very hard to define and execute.

Also, I would widen the perspective of "CTO needs to provide data to CEO" / engineering effectiveness a bit, as this is not the most important thing you need to know. Ultimately, as a company you want to know whether your business is going in the right direction or not. You want to spend more money on things & activities that add to the value of your product offering in the eyes of a customer, so that the customer will pay for it, and you want to spend less money on things customers will not pay for. But which are the value-adding activities and which are non-value-adding? When will a customer buy your product or service?

For me, this sets a certain order of importance of what things to measure/quantify to answer the following questions:

1. Deliver customer value. Are we building the right product?

2. Generate business value. Can we generate revenue?

3. High value product. Do we keep quality high and build the right next features?

4. Code quality. Are we building the product right, so it is maintainable and extensible?

5. Team chemistry. Is the team aligned on the goal and healthy in their interactions and spirit?

6. Process efficiency. Is success repeatable?

===

(1) Answering that should not be too hard, as you can measure any kind of customer feedback on your products, be it YouTube likes, alpha tester feedback, support calls, or surveys.

(2) Once you know you have a product customers want and like, you need to know: is the customer-facing part of your organization (marketing, sales, training & education, and support) connecting to said customer, so that they can sell and actually generate revenue?

This is much harder to get data for. You can measure the number of licenses, lost and new customers, or the trends in the business volume of services, but what does that tell you about your organization's ability to sell a good product to customers? I am not sure here.

(3) is to reflect on the business success (ROI) of new features that you implement. Do you keep building a high-value product? Are we effective in communicating the new features' value proposition to our customers? There is so much to measure here.

Feature level: Measure via BI the business impact of new features: how often are they used? Measure the resources needed to deliver a feature. Measure the number of support cases/bugs reported per feature. Measure the estimated time vs. the actual time taken for a feature.

Quality of the overall product: Measure the yearly mean number of support cases per week. Measure the yearly mean number of bug reports per week. Measure the yearly mean number of crash reports per week. Measure the number of major field incidents per year.

Do we make the customer feel cared about after the sale? Measure lead time to resolve support calls. Measure customer ratings of support calls.
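
As a rough illustration of the feature-level and product-quality counts above, a minimal sketch with a made-up support-case log and invented feature tags:

```python
from collections import Counter
from datetime import date

# Hypothetical support-case log: (date_opened, feature_tag).
support_cases = [
    (date(2023, 1, 3), "export"),
    (date(2023, 1, 3), "search"),
    (date(2023, 2, 14), "export"),
    (date(2023, 6, 1), "billing"),
    (date(2023, 6, 2), "export"),
]

# Support cases per feature: a rough "which feature hurts customers most" signal.
cases_per_feature = Counter(tag for _, tag in support_cases)

# Yearly mean number of support cases per week, as suggested above.
mean_cases_per_week = len(support_cases) / 52

print(cases_per_feature.most_common())
print(f"mean support cases per week: {mean_cases_per_week:.2f}")
```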

(4) Code quality measures like compiler/Sonar warnings or test coverage are the easy ones, but they might not give you insight. I prefer to look, more high-level again, at product quality, which ties into (3): How often do tickets come back from testing to development? Measure the number of regressions reported by customers after a release. Measure the number of (real) hotfixes needed after a release.

Answering the more fundamental questions of code quality like "how easy is it to add new features" is a bit more difficult. I have not found good measures yet.

For (5) I totally agree with the article: you should never measure how many lines of code were produced, how many tickets were closed, or the like. The question of team happiness is most important for teamwork. There are good tools for that, like OfficeVibe or Glint.

Measure the employee turnover. Measure the number of uninterrupted hours of work time / total time present at work per week. Measure the mean OfficeVibe score. Many further answers could be drawn from OfficeVibe: Are people at ease, having a good time, and enjoying interactions with their peers? Is there no sense that single individuals are trying to succeed in spite of the efforts of those around them? Was the work a joint product? Was everybody proud of its quality? Do they take enjoyment in their work? Is there trust and mutual esteem among the peers?

For (6) you want to know: Are our processes such that success is repeatable?

On a company level: Measure the number of botched releases/rollbacks. Measure the number of failed audits per year. Measure the number of open CAPAs per year.

On a team level: How long are compile/CI-CD times? How long do code reviews lie around before being picked up? How easy is onboarding of new employees? Etc.



