
More importantly, make sure you are measuring what really matters and not a proxy. I shut down our code coverage builds some years ago when I realized management was using that as a measure; the potential negatives from measuring it are far worse than any possible gain from an engineer improving anything. The measure is still useful, but until I can be convinced it won't be abused I won't measure it.



They were using it as a measure of what exactly? I'm having a hard time seeing how it can be interpreted in a way that's a net negative.


What happens when a team's code coverage drops below the mandated minimum? How do different teams' coverage numbers affect their value ranking against other teams? What's going to stop teams from gaming the number with techniques like https://www.pavelslepenkov.info/?p=110 ?

Lots of net-negative consequences can occur when management decides to measure things. Lots of net-positive ones too, otherwise they wouldn't ever do it, but developer productivity proxies are notoriously hard to get right. I'd ask any manager trying to make one whether they've ever done, or read about, Deming's red bead experiment (http://maaw.info/DemingsRedbeads.htm).


That red bead experiment is an interesting illustration, thanks for sharing.


They wanted a measure of quality. We build an embedded system where we sometimes have to pay a tech to go out and update customer devices. The last time we had to do this, the recall cost us something like 10 million dollars; that is just the price we paid the techs to drive to the customers and do the update, and doesn't count the engineering cost to create the fix. As such, not having a recall event is important. (We do have customer-installable updates, but for "reasons" some code cannot be updated by customers.)

The negative is that some people don't believe in thorough tests. They write a few unit tests for things they know are tricky. Then, when their code coverage is low, they get the number up by writing "tests" that take all the branches but never actually assert anything. They know all the tricks for sneaking these bad tests in, and the result is that the metric looks good while hiding the fact that the code is not really tested.
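To illustrate the kind of assertion-free test being described, here is a minimal sketch (not from the thread, using a hypothetical clamp() function in C): both tests exercise every branch, so a line/branch coverage tool reports the same 100% for each, but only the second test can actually fail when the code is wrong.

    #include <assert.h>
    #include <stdio.h>

    /* hypothetical function under test */
    static int clamp(int value, int lo, int hi) {
        if (value < lo) return lo;
        if (value > hi) return hi;
        return value;
    }

    /* Gamed test: hits all three branches, asserts nothing, can never fail. */
    static void test_clamp_coverage_only(void) {
        clamp(-5, 0, 10);
        clamp(50, 0, 10);
        clamp(5, 0, 10);
    }

    /* Real test: identical branch coverage, but each call checks the result. */
    static void test_clamp_real(void) {
        assert(clamp(-5, 0, 10) == 0);
        assert(clamp(50, 0, 10) == 10);
        assert(clamp(5, 0, 10) == 5);
    }

    int main(void) {
        test_clamp_coverage_only();
        test_clamp_real();
        printf("coverage is identical for both tests; only one verifies behaviour\n");
        return 0;
    }

A coverage dashboard can't tell these two tests apart, which is exactly why the number is easy to game.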


Interesting. Sounds like tests were really needed in this case, but some people didn't want to write them, and the code coverage tools were simply gamed. In other words, an additional level of oversight would be needed if you really wanted to ensure good test coverage.

I don't see how code coverage is making the situation worse, though; it seems like it just wasn't enough in this case. I guess that, in producing useless tests from some people, it's a minor setback, but surely it encouraged others to write proper tests.


Code coverage wasn't making it better. As one of the developers on the project, I already knew who was and wasn't writing good code (but I couldn't do anything about it).


I've seen teams trying to earn bonus points by patching thousands of files containing errors. All this was done instead of innovating, or writing a better module in a more succinct language that tackles the problem directly. I'd rather see employees finding ways to make sure those errors and infractions can't happen in the first place - often requiring different programming languages and tools - instead of just patching the code.



