Your heartbeat and breaths/week are also quite consistent. The issue here is that you have to have a reasonable theory of why variance in the metrics you're tracking will destroy value. And that means actual value, not whinging that standup goes too long. I like a short standup as much as anyone, but if that is a material driver of value destruction then your organisation is not ready for statistical quality control. Plus it probably goes over time consistently.
And once you start looking at the value, the lesson of software is that high variance activity is often the value add. It is the week (or weeks) where someone implements pivot tables in Excel that probably creates billions of dollars of value across all of humanity. If that turns up as a statistical anomaly in the metrics because their line manager didn't bug them to fix bugs that week, that is a problem with the metrics, not the programmer.
This isn't a Toyota production line (if you're interested in the history of this, that is no random example) where value is created uniformly with each car and optimising the daily process to the nth degree creates value. This is software. The value isn't created in the same way and these tools are not powerful in driving value-add decisions. Variance is untidy but by no means an enemy. It must be managed case by case, in context.
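To make that concrete, here is roughly what the chart would say about that week. This is a minimal sketch with made-up numbers; bugs_closed_per_week is an invented metric, and the limits are just the textbook individuals-chart recipe (centre line +/- 3 sigma, with sigma estimated from the average moving range), not anyone's actual process:

    # Sketch of an "individuals" control chart over a weekly metric.
    # All numbers are invented. Limits are mean +/- 3 sigma, with sigma
    # estimated from the average moving range (MR-bar / 1.128), the
    # standard recipe for this kind of chart.

    bugs_closed_per_week = [14, 13, 15, 14, 16, 13, 14, 15,
                            2,  # the week pivot tables got built instead
                            14, 16, 13, 15, 14, 13, 15]

    mean = sum(bugs_closed_per_week) / len(bugs_closed_per_week)
    moving_ranges = [abs(b - a) for a, b in
                     zip(bugs_closed_per_week, bugs_closed_per_week[1:])]
    sigma = (sum(moving_ranges) / len(moving_ranges)) / 1.128
    lcl, ucl = mean - 3 * sigma, mean + 3 * sigma

    for week, closed in enumerate(bugs_closed_per_week, start=1):
        status = "ok" if lcl <= closed <= ucl else "OUT OF CONTROL"
        print(f"week {week:>2}: {closed:>2} bugs closed  {status}")

The chart dutifully flags the pivot-tables week as out of control, and the only sensible management response to that signal is "good".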
There's a difference between input and output metrics to be considered too. Attempting to manage the output metrics directly rather than addressing the underlying causes is almost always the wrong thing to do.
> The issue here is that you have to have a reasonable theory of why variance in the metrics you're tracking will destroy value.
Establishing this theory requires a stable process! Without a stable process, you cannot make deliberate, systematic changes and observe how they affect outcomes. That sort of observation is key to theory-building.
I agree with most of what you say about the value in product development coming from innovation, which is literally unaccounted-for variance. I just don't think that innovation happens in the length of standup meetings, which I think is better controlled statistically.
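For what it's worth, the standup case is where the boring run rules earn their keep rather than the 3-sigma limits: a meeting that consistently runs a couple of minutes long never trips a 3-sigma limit, but the classic "eight points in a row on one side of the centre line" rule catches the drift. A minimal sketch, with invented durations and a nominal 15-minute centre line standing in for a properly established one:

    # Invented standup durations in minutes. A single marathon meeting would
    # be caught by 3-sigma limits; a consistent drift past the timebox is
    # what the run rule (8 consecutive points above the centre line) is for.

    centre_line = 15  # nominal timebox, standing in for a baseline-derived centre
    standup_minutes = [14, 16, 15, 13, 17, 18, 17, 19, 18, 17, 18, 19, 17, 18]

    run = 0
    for day, minutes in enumerate(standup_minutes, start=1):
        run = run + 1 if minutes > centre_line else 0
        if run >= 8:
            print(f"day {day}: standup has drifted, {run} days in a row over {centre_line} min")
            break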
> Establishing this theory requires a stable process! Without a stable process, you cannot make deliberate, systematic changes and observe how they affect outcomes.
*shrug* Welcome to software engineering. Enjoy your stay.
If anyone has figured out how to make deliberate, systematic changes that add value, they really need to publicise it, because I'm not aware of many approaches that aren't absurdly basic (things like release often and get fast feedback).
There are lots of examples of companies like Google, Microsoft, AWS, etc. that generate huge amounts of value early in their life cycle and then coast on it, while experiments with process never really move the needle all that much. Google has been search and ads for 20 years, and none of their experiments with software quality since then have been all that impressive. It isn't even all that clear from a customer perspective that the quality and quantity of software is improving. If anything, they need to slow the programmers down, write less code and dedicate more resources to maintaining platforms and pushing them to succeed. SPC won't help with that though, because measuring "nothing happening" isn't what SPC is targeted at.
In fact, open source projects, where there is usually no QA/QC in sight, are vast wealth and productivity fountains. And then we have people like Hipp over at SQLite, who just tests everything to within an inch of its life. Good luck replicating that level of productivity with SPC. No systematic process has come close to matching the productivity of one madman with a love of databases and stability. We have no idea how to make that repeatable, because the fundamental process that creates value isn't stable.
> I'm not aware of many approaches that aren't absurdly basic (things like release often and get fast feedback).
A bit much to call that “absurdly basic” when release cadence still varies massively even among successful companies. Patch Tuesday? Why not Patch Every Day!?