
Sometimes measuring someone's effectiveness is as easy as "I sold that, here's the check, that's my effectiveness" but I've also noticed that measurement is often incredibly sloppy in business, and people rarely seem to get called on it. Making a serious effort to eliminate confounders is unusual. I think a lot of folks in non-programming jobs do, to a fairly high degree, just make shit up, pretending that they can measure the effect of various initiatives much better than they can, and for whatever reason this is rarely considered a problem or questioned.

Some of them surely realize they're slinging barely-if-at-all-justified BS, but I also think lots and lots of people are just terrible at reasoning about that kind of thing and don't realize how meaningless the numbers they're generating are. They're trying; they just suck at it, and no one's bothered to tell them (or seems to care).

Possibly programmers are more sensitive to this than most, and are reluctant to put forward "bullshit" numbers that, if offered, would in fact be accepted as reality by the folks "above" them. Meanwhile, someone down the hall is being promoted for numbers that are even more a work of fantasy, and may not even be intentionally deceiving anyone.

Note that the sales folks don't sweat over how much of their numbers can be attributed to the people making the thing they're selling. Those numbers are theirs, period. "Did I sell that or did Feature X put it over the top?" fretted no salesperson ever.



