
I've seen this phenomenon almost everywhere. A lot of the time it kicks into high gear when "the powers that be" start making big, dumb changes (though oftentimes those changes don't seem quite so bad in the moment). The best talent with the most important experience evaporates.

The most pernicious thing about this effect is that it can be difficult to notice objectively. People who are in the trenches, are clueful, and know firsthand the qualities of the individuals who leave will know immediately that this is a bad development and that the organization has lost something of significant value. Even so, it can be difficult to back up that gut instinct with factual evidence. Unless there's utter incompetence on display, it often takes years for the damage to become fully evident, and the evidence usually takes the form of challenging projects getting harder, taking longer, and achieving less. The trouble is that in almost every dev shop on Earth, and especially in the "higher level" ones, the projects from year to year or release cycle to release cycle are rarely 1:1 comparable. Each has its own unique difficulties and complexities, so even without losing any talent the time to tackle them and the level of success achieved would differ. Without a window into a parallel universe, it's hard to say how far you've fallen behind the nominal "unevaporated" track.

And in the eyes of management it's even harder to see, because managers are even more resistant to the notion that something they've done has had a negative impact on the company. If the end result is anything other than abject failure, if the lower-quality "dead sea" org can still achieve a decent measure of success, still operate a profitable enterprise, and so on, then they will think everything is fine, even if the difference might ultimately have been orders of magnitude in the level of success, because that alternate timeline is unknowable.

And if you wonder why so very much software is surprisingly mediocre despite many hundreds, thousands, or tens of thousands of dev-years of work being put into making it better, well, maybe there's a reason for that.




Thank you for this comment. It helped summarize my thoughts.

Do you know if there's any pattern to the big, dumb changes? Are specific changes more likely to signal ship-jumping time for top talent? Or is it more a matter of growing dysfunction?


When you take stock of everything, a lot of the changes are obviously bad from a "big picture" perspective, but it's hard to maintain that perspective, especially as changes tend to creep in incrementally. The biggest red flag I'd highlight is bureaucracy in all its forms. I don't mean things like code reviews or fixed processes for doing things (although in some cases those can be bad). I mean the switch from certain important decisions being made by individuals to decisions being made by some rule based on certain metrics.

Over-reliance on metrics is often a huge red flag. If your continued employment, your bonus, or your promotion prospects rest on how many bugs or tickets you close, how many test cases you automated, etc., then that's problematic, because metrics can always be gamed. If people are judged relative to one another on gameable metrics, then only the people who game the metrics will get ahead and everyone else will be left by the wayside, to the detriment of employee morale and of actually getting the work done that needs to get done. If you tell someone they have to close lots of bugs to show they are doing their job well, they are incentivized to figure out how to wiggle out of responsibility for a bug, or to "fix" bugs using the most expedient hack possible, instead of taking the time to investigate a bug thoroughly as far as their expertise and context allow, doing a root-cause analysis, stepping back to analyze the meta-context that made the introduction of the bug possible in the first place, and maybe doing the work to design a thorough fix for those underlying problems, beyond the limited scope of making the one bug go away in the short term.

Good development work can often defy every metric that attempts to measure it. It can look like a net negative number of lines of code written. It can look like spending a month on a seemingly inconsequential bug that turned out to have a very interesting cause, one that revealed a fundamental flaw in the design of the system, which took several months to fix but led to a huge increase in overall reliability. It can look like days, weeks, or months spent not writing code or fixing bugs at all: maybe that time goes into documentation, design work, research, or picking apart an existing system to learn exactly how it's put together, all of which might end up being hugely valuable. The thing is, there's no metric for "prevented a thousand bugs from being filed over the next year", and that gets to how difficult it is to objectively measure the work of coders.

The other major red flag is when people are treated as interchangeable resources. Does the company seem to value people's time? Does it treat people like human beings? Does the company/employee relationship seem cooperative rather than exploitative? Is the company flexible about working from home, working non-standard hours, and so on? Or does it treat knowledge work like a factory job, caring about hours worked, "butts in seats", mandatory "crunch time", etc.?


Thanks. I think I agree with everything you wrote; it's good to know I'm not insane for thinking like this.



