Hacker News

Anecdotally I’ve found the people I’ve worked with from EU countries tend to be well-educated and highly effective. If US work culture is about quantity, theirs is quality. I think that is a sustainable posture that will continue to bear fruit.



Do you think it might simply be that the subset of Europeans who interact with international clients tends to be well-educated and highly effective?

As a scientist, I interact with a number of Europeans. That happened less frequently when I was waiting tables.


True, there is always a lot of observation bias in our personal experiences.


Furthermore, there is a subset that works with the US specifically. The truth is that only the best get into this subset.

And I say this as a Western European.


> Anecdotally I’ve found the people I’ve worked with from EU countries tend to be well-educated and highly effective. If US work culture is about quantity, theirs is quality. I think that is a sustainable posture that will continue to bear fruit.

As an American, I have found this to be true as well. It has more to do with corporate culture in America, which lazily attempts to quantify engineering. This leads to an amazing amount of chaos in American companies.

E.g., there is a new fad in many American companies of counting the # of commits as a measure of productivity. Guess what has been getting produced ever since engineers discovered this.
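For what it's worth, the metric is as easy to compute as it is to game. A throwaway sketch (disposable repo, made-up author names) of what counting commits per engineer looks like, and why splitting work into tiny commits "wins":

```shell
# Create a disposable repo so the demo is self-contained.
set -e
demo=$(mktemp -d)
cd "$demo"
git init -q

# Engineer A lands one substantial change as a single commit.
git config user.name A; git config user.email a@example.com
echo "real feature" > feature.txt
git add feature.txt
git commit -qm "add feature"

# Engineer B splits a trivial tweak into three commits to inflate the count.
git config user.name B; git config user.email b@example.com
for i in 1 2 3; do
  echo "tweak $i" >> notes.txt
  git add notes.txt
  git commit -qm "tweak $i"
done

# The "productivity" report: B wins, 3 commits to 1.
git shortlog -sn HEAD
```

Any engineer who knows this report exists can top it in an afternoon, which is exactly the problem.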


If we're broadly generalizing across three countries with a tremendous number of engineers: American engineering culture optimizes for safety, German engineering optimizes for efficiency, Chinese engineering optimizes for cost.

While Chinese products are the most affordable, German products work the best and last the longest.

American products may not be cheap, nor work the best, but they're usually the safest by a long shot because we're so litigious.


I’d argue human health and safety is something that is prioritized more in Europe than in the US. Food is a great example.


America: features (certainly not safety)

Europe: quality

Chinese: cost


America: Performance Europe: Utility China: Cost


Which American companies do this?


I've seen this too, unfortunately. Commits, PRs, # of stories.

You get silliness from supposed "engineers" who will inflate the points for stories they work on and downplay points for stories others work on. It's pathetic.


I strongly believe you will get exactly the behavior you incentivize for. I used to apply this mostly to sales people -- if their bonus depends on new business, you will get a large number of soon-to-be-unhappy customers.

It works more generally, though: if management sets a metric, people will try to game it. On the flip side, you can't just aim for abstract 'quality' or 'customer satisfaction', because then it's hard to know whether you are really improving. I've never seen this solved once your scale gets past a small group of people with a shared dream.


Why do people just roll over and accept such bullshit? When did workers become such pushovers? A century ago, people risked their lives (and some died) striking to improve working conditions. Today, people just accept management's asinine policies.

Perhaps people are so checked out that they don't care if the policies make any sense at all, as long as they get their paychecks on time.


For me it’s a question of priorities. My salary makes up nearly 80% of my household income. Maintaining financial stability for my family is far, far more important than going on a crusade that might risk my employment.


If your employment and career path are tied to some ridiculous metric, and you (and everyone else) have to game it, that seems more like a systemic failure. It is pathetic that the company has set things up like that. Can’t blame the person for playing the dumb game if it puts food on the table.


It's not like engineers are struggling, so playing such games against your colleagues is a total moral failure.


Of course I can.


When our performance reviews (read: raises and promotion) and, sometimes, whether or not we get RIF'd if the company needs to shed workers, depends on us making our metrics look good, you bet your ass we're gonna play the game, even though it's stupid.


I would say most startups are guilty of this, though they don't attempt to formalize the measurements.

At least from a few places I've been recently, "productivity" is viewed as the number of PRs you push through. QA of any sort is viewed as a waste of time, and it's far better to push a PR today than to take an extra day to sanity-check your work. On top of this, user-facing, demo-able code is much more important than any back-end or infra work. This means the priority is rapidly releasing products/features that are 90% done and moving on to the next thing.

Heck if there's a small bug in the code you just shipped (and of course releases are nightly because that's how you show that you're really moving) all the better since it means you get another easy PR when someone else discovers it.


That makes sense, given how startups (at least those that are VC funded) are de facto quasi-scams with the goal of creating the illusion of a unicorn, quickly flipping it in an IPO, and happily running away with the money. For that purpose, creating functionality that looks as if it works, but in reality is incomplete and buggy, may be just as good as the real thing. It's all about maintaining the charade until the key people can exit rich (preferably as quickly as possible).


This does align pretty well with my experience. Especially when I've seen multiple ideas that are clearly valuable to customers but not flashy enough for VCs get quickly deprioritized.


More than you want to know. As OP pointed out, it seems to be a relatively new fad, and it has me deep sighing...


It is not a new fad; programmers have complained about this management practice since at least the 1970s.


Evidence from 1982: “-2000 Lines Of Code” (https://www.folklore.org/StoryView.py?project=Macintosh&stor...)



It’s an old fad become new again.


Facebook, for one.

Source: I've sat in on meetings where engineers were ranked, with # of commits being a key metric. That experience taught me the importance of making lots of small commits, more than any readability concerns ever could :)


I work outside tech, but can you please explain how number of X would be a good measure at all? What would be the justification for counting anything as a straight number as a measure in a creative field?


So the thing is that volume of output is still important even in creative fields. If you run a logo design business and A makes logos that satisfy 10 companies while B makes logos for 1 company, that is a pretty direct business outcome. There are counter-examples, where the one logo was for your whale customer that needs perfection versus 10 small businesses where you can just put the company name in a few fonts, but the raw volume of output is an extremely relevant aspect of employee performance.

With coding in practice the high performers who deliver business value are often (but not always) the same people writing a lot of code; low performers might only make a couple PRs per month while a high performer did a couple per day.

The problem is that this is more of a proxy than an outcome: it's entirely possible for a low performer to shovel out several negative-value PRs per day, and for someone who makes only a couple of small PRs per month to be doing something difficult that delivers enormous value (especially if those few changes are things like updating an ML model or finding key performance optimizations).


> if you run a logo design business and A makes the logos that satisfies 10 companies and B makes the logos for 1 company that is a pretty direct business outcome

There's a flaw with equating this to # of commits or lines of code. That flaw is - the amount of revenue increases with the number of logos produced but the amount of revenue does NOT increase with the # of commits or number of lines produced.


I chose the logo example precisely because it has the same property: someone who makes 100 terrible logos is still pretty useless compared to someone who makes 10 good ones. But very often you have two people outputting 100 vs 10 and the quality is about the same.

The reality is that when I look back at who was strong and who wasn't, there's a pretty strong correlation between people writing more code and their business impact being larger. Many smaller tasks do boil down to "X lines of new code are needed to fix the problem", and the person who delivered 10 such tasks did write 10x the lines of the person who delivered one, with task size, difficulty, and business value held constant.

There's obviously tons of noise in this, including far outliers like the guy who writes zero lines but whose knowledge is a linchpin, and the guy who writes 5k lines of negative-value garbage code. And once it becomes a known measure it surely creates very perverse incentives, etc., but companies end up using it as a proxy metric because there is literally no other objective measure of code quality or business impact for engineers available: the only two things you can measure SWEs on are lines of code and feelings.


> but companies end up using it as a proxy metric because there is literally no other objective measure of code quality or business impact for engineers available: the sole two things to measure SWEs on is lines of code and feelings.

And you have hit the nail on the head. The problem is that businesses have to rely on proxies because there is NO metric to quantify this creative work.

The best thing for businesses is to first define what the right outcomes are: they certainly aren't lines of code, stack ranking, number of commits etc.

But it could be features, bug fixes, investigation of systems etc - you know, actual work engineers do.

Once the business identifies this, all it needs to do is hire skilled managers who understand that proxies are not outcomes. Using proxies, building rules on top of those proxies, and firing people based on those proxies is literally the stupidest thing to do.

Like, it is literally costing actual $$ for any business that has too many managers using simplistic metrics. This cost is measured in time. Cost = (number of days to hire + number of days to onboard + number of days wasted on BS metrics + number of days management spent measuring and stack ranking on BS metrics + number of days to fire + number of days to backfill the role + number of days to get the replacement up to the last engineer's skill level)
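The prose formula above can be written out as plain arithmetic. Every day count below is a made-up placeholder, purely to show the terms adding up, not real data:

```shell
# Hypothetical per-term day counts for replacing one engineer fired over metrics.
days_to_hire=45
days_to_onboard=30
days_wasted_on_bs_metrics=20
days_measuring_and_ranking=10
days_to_fire=15
days_to_backfill=45
days_to_ramp_up=90

# Sum every term of the cost formula.
total=$(( days_to_hire + days_to_onboard + days_wasted_on_bs_metrics + days_measuring_and_ranking + days_to_fire + days_to_backfill + days_to_ramp_up ))
echo "days lost per churned engineer: $total"   # prints 255 with these placeholders
```

Even with charitable numbers plugged in, the total dwarfs whatever the metric was supposed to save.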

Find the skilled managers who understand that this cost is not worth any BS metrics, and hire them. That's it.


Hence this being a proxy measurement. But managers really want a way to measure "developer productivity", and the feeling is that proxy measurements are better than nothing, even though they aren't.

(Two simple examples: (1) how do you measure the productivity of a lead developer who spends much of his time mentoring, pairing, and helping the team deliver working code? (2) Is a developer who commits many small PR's "worth more" to the company than a developer who commits fewer, larger PR's?)


There's a clear connection to revenue potential between zero commits and one. I don't think commits and LoC scale linearly with someone's value to the codebase or org, but IME there's a strong correlation between consistently low commit and LoC output and ineffectiveness. Hard problems and low output happen situationally, generally not for quarters and years on end.


> There's the clear connection to revenue potential between zero and one commits.

But there is no correlation to revenue potential between 99 and 100 commits. And most companies are not stuck at the first commit.

> Hard problems and low output happen situationally, generally not for quarters and years on end.

Absolutely untrue. The hardest problems I have solved required me to build observability over 2 quarters, write RFCs and get buy-in from other engineers. None of these contribute to commits.


> The hardest problems I have solved required me to build observability

And you were able to do this without writing a line of code and committing it?


> The hardest problems I have solved required me to build observability

Yes. I personally did not commit a single line, because I had to influence other engineers to apply said commits in their own repositories.

I had 0 commits for 6 months.


> I work outside tech, but can you please explain how number of X would be a good measure at all? What would be the justification for counting anything as a straight number as a measure in a creative field?

You answered your own question. Management doesn't want to view engineering as a "creative field". In their ideal world, they want it to work like an assembly line. And for decades they've attempted to quantify it like an assembly line. But as of 2023, management has never been more wrong.

A good question is, "Why does management even desire quantification?" The answer to this is rather simple and unbelievable to most engineers toiling so hard.

The answer is - management (of all levels) is lazy and unskilled. They want to demonstrate to their bosses that they are running an efficient ship. They quantify it with # of commits or other such proxies. This gets them their own promotions. They don't care if the company survives or if their reports are working on the right things or if engineering challenges differ from one project to another. The most important thing for all managers is - their own promotion.


In general it is considered good to make lots of small commits. A commit is a bundle of changes you send to the shared project; you want to keep commits small so they are easy to integrate and debug (if necessary).
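A minimal, hypothetical illustration of that convention, using a throwaway repo:

```shell
# Disposable repo so the demo is self-contained.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name Dev; git config user.email dev@example.com

# Two logical changes land as two focused commits, each easy to
# review, bisect, or revert on its own.
echo "parse input" > parser.txt
git add parser.txt
git commit -qm "add input parsing"

echo "validate input" > validator.txt
git add validator.txt
git commit -qm "add input validation"

git log --oneline
```

The history stays legible either way; the trouble only starts when someone starts grading the commit count itself.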

But just counting them is a dumb way to judge productivity, and every programmer hates it.

The way to become a manager at a tech company is to be good at programming for a long time, an extremely specialized skill set that has nothing to do with management.


It's true that the way to become a software engineering manager starts at being a good engineer. The way to become a senior manager however is usually to be good at management.


It isn't, at all. It's laziness on the part of management.


Google does this.


Not really. It tries to measure "impact" and while direct measures like number of CLs can potentially go into that, this is definitely not the rule.

Then again, Google is best thought of as a collection of semi-independent companies loosely bound by culture. Individual managers have a lot of leeway in how they operate, and VPs and Directors have tons of control on how they run their organization, including measuring performance. There are good teams and there are not so good ones.


Just look for the ones hiring MBAs.


>new

Hah, sounds similar to the age-old fad of counting lines of code.


It might be interesting to see a study showing which countries (or economic areas, or industries, if you will) are more quality- vs quantity-focused. On the surface, putting in more hours would be an obvious one: Switzerland has a baseline max of 45h/w, while neighbouring France has 35. But Switzerland is a huge outlier in Europe, being, I believe, much more service-oriented than all of its neighbours because of limited useful space for space-hungry primary industries.



